1704.03058
2605455247
This work is about recognizing human activities occurring in videos at distinct semantic levels, including individual actions, interactions, and group activities. The recognition is realized using a two-level hierarchy of Long Short-Term Memory (LSTM) networks, forming a feed-forward deep architecture, which can be trained end-to-end. In comparison with existing architectures of LSTMs, we make two key contributions that give our approach its name: Confidence-Energy Recurrent Network (CERN). First, instead of using the common softmax layer for prediction, we specify a novel energy layer (EL) for estimating the energy of our predictions. Second, rather than finding the common minimum-energy class assignment, which may be numerically unstable under uncertainty, we specify that the EL additionally computes the p-values of the solutions, and in this way estimates the most confident energy minimum. The evaluation on the Collective Activity and Volleyball datasets demonstrates: (i) advantages of our two contributions relative to the common softmax and energy-minimization formulations, and (ii) a superior performance relative to the state-of-the-art approaches.
Reliability of Recognition. Most energy-based models in computer vision have focused only on energy minimization for various recognition problems. Our approach additionally estimates and regularizes inference with p-values. The p-values are specified within the framework of conformal prediction @cite_10. This allows the selection of more reliable and numerically stable predictions.
{ "cite_N": [ "@cite_10" ], "mid": [ "2171585602" ], "abstract": [ "Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability e, together with a method that makes a prediction ŷ of a label y, it produces a set of labels, typically containing ŷ, that also contains y with probability 1 – e. Conformal prediction can be applied to any method for producing ŷ: a nearest-neighbor method, a support-vector machine, ridge regression, etc. Conformal prediction is designed for an on-line setting in which labels are predicted successively, each one being revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if the successive examples are sampled independently from the same distribution, then the successive predictions will be right 1 – e of the time, even though they are based on an accumulating data set rather than on independent data sets. In addition to the model under which successive examples are sampled independently, other on-line compression models can also use conformal prediction. The widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the topic is provided in Algorithmic Learning in a Random World, by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005)." ] }
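The conformal p-value selection described in the related-work note above can be sketched in a few lines: given nonconformity scores from a calibration set, a candidate label is kept in the prediction set whenever its p-value exceeds the error level. The integer calibration scores and the distance-based scoring function below are purely illustrative, not taken from the cited works.

```python
def conformal_prediction_set(cal_scores, score, labels, eps):
    """Keep every label whose conformal p-value exceeds eps.

    The p-value of a candidate label is the fraction of calibration scores
    that are at least as nonconforming as the candidate's score, with the
    usual +1 correction counting the candidate itself.
    """
    n = len(cal_scores)
    kept = []
    for y in labels:
        s = score(y)
        p = (1 + sum(1 for c in cal_scores if c >= s)) / (n + 1)
        if p > eps:
            kept.append(y)
    return kept

# Illustrative run: calibration scores 1..9, and a candidate score that
# grows with distance from 5, so the prediction set is an interval around 5.
cal = list(range(1, 10))
pred = conformal_prediction_set(cal, lambda y: 2 * abs(y - 5), range(11), eps=0.5)
# → [3, 4, 5, 6, 7]
```

Under exchangeability, such a set contains the true label with probability at least 1 − eps; the text above describes using these p-values to prefer the most confident energy minimum over a raw argmin.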
1704.02790
2612962789
Motivated by emerging vision-based intelligent services, we consider the problem of rate adaptation for high quality and low delay visual information delivery over wireless networks using scalable video coding. Rate adaptation in this setting is inherently challenging due to the interplay between the variability of the wireless channels, the queuing at the network nodes and the frame-based decoding and playback of the video content at the receiver at very short time scales. To address the problem, we propose a low-complexity, model-based rate adaptation algorithm for scalable video streaming systems, building on a novel performance model based on stochastic network calculus. We validate the model using extensive simulations. We show that it allows fast, near optimal rate adaptation for fixed transmission paths, as well as cross-layer optimized routing and video rate adaptation in mesh networks, with less than @math quality degradation compared to the best achievable performance.
Proposed rate adaptation methods for SVC are based on buffer content @cite_28, transmission rate estimation @cite_25 @cite_27, or both @cite_8, with the advantage that detailed modeling of the network performance is not required. Low-delay applications, however, cannot build on buffer-content-based models: results presented in the literature consider playout delays of tens of seconds. Similarly, under low-latency requirements, rate adaptation based on the average transmission rate would be overly optimistic, as it would result in queuing delays at the network nodes and late arrivals at the playout buffer. Therefore, in this paper we propose rate adaptation based on network performance modeling for low-latency wireless applications.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_25", "@cite_8" ], "mid": [ "2294512426", "2017146017", "1976676487", "1976944900" ], "abstract": [ "Modern video players employ complex algorithms to adapt the bitrate of the video that is shown to the user. Bitrate adaptation requires a tradeoff between reducing the probability that the video freezes and enhancing the quality of the video shown to the user. A bitrate that is too high leads to frequent video freezes (i.e., rebuffering), while a bitrate that is too low leads to poor video quality. Video providers segment the video into short chunks and encode each chunk at multiple bitrates. The video player adaptively chooses the bitrate of each chunk that is downloaded, possibly choosing different bitrates for successive chunks. While bitrate adaptation holds the key to a good quality of experience for the user, current video players use ad-hoc algorithms that are poorly understood. We formulate bitrate adaptation as a utility maximization problem and devise an online control algorithm called BOLA that uses Lyapunov optimization techniques to minimize rebuffering and maximize video quality. We prove that BOLA achieves a time-average utility that is within an additive term O(1 V) of the optimal value, for a control parameter V related to the video buffer size. Further, unlike prior work, our algorithm does not require any prediction of available network bandwidth. We empirically validate our algorithm in a simulated network environment using an extensive collection of network traces. We show that our algorithm achieves near-optimal utility and in many cases significantly higher utility than current state-of-the-art algorithms. Our work has immediate impact on real-world video players and for the evolving DASH standard for video transmission.", "Today, the technology for video streaming over the Internet is converging towards a paradigm named HTTP-based adaptive streaming (HAS), which brings two new features. 
First, by using HTTP TCP, it leverages network-friendly TCP to achieve both firewall NAT traversal and bandwidth sharing. Second, by pre-encoding and storing the video in a number of discrete rate levels, it introduces video bitrate adaptivity in a scalable way so that the video encoding is excluded from the closed-loop adaptation. A conventional wisdom in HAS design is that since the TCP throughput observed by a client would indicate the available network bandwidth, it could be used as a reliable reference for video bitrate selection. We argue that this is no longer true when HAS becomes a substantial fraction of the total network traffic. We show that when multiple HAS clients compete at a network bottleneck, the discrete nature of the video bitrates results in difficulty for a client to correctly perceive its fair-share bandwidth. Through analysis and test bed experiments, we demonstrate that this fundamental limitation leads to video bitrate oscillation and other undesirable behaviors that negatively impact the video viewing experience. We therefore argue that it is necessary to design at the application layer using a \"probe and adapt\" principle for video bitrate adaptation (where \"probe\" refers to trial increment of the data rate, instead of sending auxiliary piggybacking traffic), which is akin, but also orthogonal to the transport-layer TCP congestion control. We present PANDA - a client-side rate adaptation algorithm for HAS - as a practical embodiment of this principle. Our test bed results show that compared to conventional algorithms, PANDA is able to reduce the instability of video bitrate selection by over 75 without increasing the risk of buffer underrun.", "Today, video distribution platforms use adaptive video streaming to deliver the maximum Quality of Experience to a wide range of devices connected to the Internet through different access networks. 
Among the techniques employed to implement video adaptivity, the stream-switching over HTTP is getting a wide acceptance due to its deployment and implementation simplicity. Recently it has been shown that the client-side algorithms proposed so far generate an on-off traffic pattern that may lead to unfairness and underutilization when many video flows share a bottleneck. In this paper we propose ELASTIC (fEedback Linearization Adaptive STreamIng Controller), a client-side controller designed using feedback control theory that does not generate an on-off traffic pattern. By employing a controlled testbed, allowing bandwidth capacity and delays to be set, we compare ELASTIC with other client-side controllers proposed in the literature. In particular, we have checked to what extent the considered algorithms are able to: 1) fully utilize the bottleneck, 2) fairly share the bottleneck, 3) obtain a fair share when TCP greedy flows share the bottleneck with video flows. The obtained results show that ELASTIC achieves a very high fairness and is able to get the fair share when coexisting with TCP greedy flows.", "User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) How best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. 
buffer occupancy); (2) How well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) How do they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations." ] }
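To make the contrast above concrete, a purely buffer-content-based policy of the kind these cited adaptation schemes rely on can be sketched as follows. The thresholds and the bitrate ladder are hypothetical, chosen only for illustration:

```python
def pick_bitrate(buffer_s, ladder_kbps, low_s=5.0, high_s=15.0):
    """Buffer-content-based selection: map buffer occupancy (seconds of
    video queued at the receiver) linearly onto the bitrate ladder."""
    if buffer_s <= low_s:
        return ladder_kbps[0]      # near underrun: drop to the lowest rate
    if buffer_s >= high_s:
        return ladder_kbps[-1]     # comfortable buffer: highest rate
    frac = (buffer_s - low_s) / (high_s - low_s)
    return ladder_kbps[int(frac * (len(ladder_kbps) - 1))]

ladder = [300, 750, 1500, 3000]    # kbit/s, hypothetical SVC layer rates
```

Note the implicit dependence on many seconds of buffered content: with a sub-second playout buffer, `buffer_s` carries almost no information about the channel, which is exactly the limitation for low-delay streaming that the paragraph above points out.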
1704.02790
2612962789
Motivated by emerging vision-based intelligent services, we consider the problem of rate adaptation for high quality and low delay visual information delivery over wireless networks using scalable video coding. Rate adaptation in this setting is inherently challenging due to the interplay between the variability of the wireless channels, the queuing at the network nodes and the frame-based decoding and playback of the video content at the receiver at very short time scales. To address the problem, we propose a low-complexity, model-based rate adaptation algorithm for scalable video streaming systems, building on a novel performance model based on stochastic network calculus. We validate the model using extensive simulations. We show that it allows fast, near optimal rate adaptation for fixed transmission paths, as well as cross-layer optimized routing and video rate adaptation in mesh networks, with less than @math quality degradation compared to the best achievable performance.
Performance modeling of adaptive video streaming in wireless networks has mostly been considered for a single wireless link. In @cite_35, the effect of an unreliable wireless channel is modelled by an i.i.d. packet loss process, and the video coding rate and the packet size are optimized under retransmission-based error correction. In @cite_32 and @cite_1, adaptive media playout and adaptive layered coding are addressed, respectively. Both papers define a queuing model at the video frame level, assuming that the wireless channel results in a Poisson frame arrival process at the receiving terminal, a simplification that may be reasonable if the buffering at the receiver side is significant, so that packet-level delays do not need to be taken into account.
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_32" ], "mid": [ "2083767059", "", "2409027520" ], "abstract": [ "In this paper, a cross-layer adaptation scheme is proposed for quality of service (QoS) provision in the scalable video streaming of high definition (HD) content. The cross-layer parameters, which contain the video rate, payload length of a packet, the mode of modulation and coding scheme (MCS), can be dynamically adapted to minimize distortion of a video streaming under the given delay bound. Based on the channel quality and rate-distortion parameters, the proposed scheme formulates the problem of parameter selection into an optimization problem. Simulation results show that our approach guarantees video quality under QoS constraints.", "", "Scalable Video Coding (SVC) has been raised as a promising technique to enable flexible video transmission for mobile users with heterogeneous terminals and varying channel capacities. In this paper, we design an adaptive layer switching algorithm for on-demand scalable video service based on receiver’s buffer underflow probability (BUP). Since the low quality of channel may lead to a low buffer fullness, the buffer fullness is an indicator for reflecting the channel condition and we define BUP for characterizing the mismatch between the video bitrate and the channel throughput. Accordingly, the adaptive SVC transmission problem is formulated as the adaptive adjustment of video layers based on BUP. This allows us to optimize the attainable video quality, while keeping BUP below a desired level. To estimate BUP, we derive an analytical model based on the large deviation principles. Then, an online layer switching algorithm is proposed using this estimation model, which is capable of accommodating different channel qualities without any prior knowledge of the channel variations and of the video characteristics. 
We further introduce a perturbation-based layer switching approach for reducing the quality fluctuating issue caused by frequent layer switches, thus improving the viewer’s quality of experience. A system prototype is implemented to evaluate the success of the proposed method. We also conduct simulations in multiuser scenarios with real video traces and the results demonstrate that the proposed algorithm is capable of improving the playback experience, while keeping a low playback interruption rate and quality variation." ] }
1704.02790
2612962789
Motivated by emerging vision-based intelligent services, we consider the problem of rate adaptation for high quality and low delay visual information delivery over wireless networks using scalable video coding. Rate adaptation in this setting is inherently challenging due to the interplay between the variability of the wireless channels, the queuing at the network nodes and the frame-based decoding and playback of the video content at the receiver at very short time scales. To address the problem, we propose a low-complexity, model-based rate adaptation algorithm for scalable video streaming systems, building on a novel performance model based on stochastic network calculus. We validate the model using extensive simulations. We show that it allows fast, near optimal rate adaptation for fixed transmission paths, as well as cross-layer optimized routing and video rate adaptation in mesh networks, with less than @math quality degradation compared to the best achievable performance.
Stochastic network calculus has been extended to capture the randomly varying channel capacity of wireless links, following different methods @cite_7 @cite_29 @cite_4 @cite_13 @cite_14 @cite_40. Most of the existing work builds on an abstracted finite-state Markov channel (FSMC) model of the underlying fading channel, e.g., @cite_29 @cite_4, or uses moment-generating-function-based network calculus @cite_40. However, the complexity of the resulting models limits the applicability of these approaches to multi-hop wireless network analysis with FSMC models of more than a few states and paths of more than two hops. In this work, we follow the approach proposed by Al- @cite_18, where a wireless network calculus based on the dioid algebra was developed. The main premise of this approach is that the channel capacity, and hence the service offered by fading channels, is related to the instantaneous received SNR through the logarithmic function, as expressed by the Shannon capacity @math. Hence, an equivalent representation of the channel capacity in an isomorphic transform domain, obtained using the exponential function, would be @math. This simplifies the otherwise cumbersome computation of the end-to-end performance metrics.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_29", "@cite_40", "@cite_13" ], "mid": [ "", "", "1508983854", "2049732373", "2142114607", "2116690616", "2098142032" ], "abstract": [ "", "", "The MIMO wireless channel offers a rich ground for quality of service analysis. In this work, we present a stochastic network calculus analysis of a MIMO system, operating in spatial multiplexing mode, using moment generating functions (MGF). We quantify the spatial multiplexing gain, achieved through multiple antennas, for flow level quality of service (QoS) performance. Specifically we use Gilbert-Elliot model to describe individual spatial paths between the antenna pairs and model the whole channel by an @math -State Markov Chain, where @math depends upon the degrees of freedom available in the MIMO system. We derive probabilistic delay bounds for the system and show the impact of increasing the number of antennas on the delay bounds under various conditions, such as channel burstiness, signal strength and fading speed. Further we present results for multi-hop scenarios under statistical independence.", "The class of Gupta-Kumar results, which predict the throughput capacity in wireless networks, is restricted to asymptotic regimes. This tutorial presents a methodology to address a corresponding non-asymptotic analysis based on the framework of the stochastic network calculus, in a rigorous mathematical manner. In particular, we derive explicit closed-form results on the distribution of the end-to-end capacity and delay, for a fixed source-destination pair, in a network with broad assumptions on its topology and degree of spatial correlations. The results are non-asymptotic in that they hold for finite time scales and network sizes, as well as bursty arrivals. 
The generality of the results enables the research of several interesting problems, concerning for instance the effects of time scales or randomness in topology on the network capacity.", "Network calculus is an established theory for deterministic quality of service analysis of fixed networks. Due to the failures inherent in fading channels it is, however, not applicable to radio systems. Emerging probabilistic equivalents allow closing this gap. Based on the recent network calculus with moment generating functions we present a methodology for performance analysis of fading channels. We use a service curve representation of radio links which facilitates an efficient analysis of radio networks. We investigate fading channels with memory and our results show that the fading speed impacts service guarantees significantly. Numerical performance bounds are provided for an example taken from cellular radio communications for which the effects of opportunistic scheduling are quantified. Simulation results are shown which confirm the efficiency of the approach.", "Network calculus is a min-plus system theory for performance evaluation of queuing networks. Its elegance stems from intuitive convolution formulas for concatenation of deterministic servers. Recent research dispenses with the worst-case assumptions of network calculus to develop a probabilistic equivalent that benefits from statistical multiplexing. Significant achievements have been made, owing for example to the theory of effective bandwidths; however, the outstanding scalability set up by concatenation of deterministic servers has not been shown. This paper establishes a concise, probabilistic network calculus with moment generating functions. The presented work features closed-form, end-to-end, probabilistic performance bounds that achieve the objective of scaling linearly in the number of servers in series. 
The consistent application of moment generating functions put forth in this paper utilizes independence beyond the scope of current statistical multiplexing of flows. A relevant additional gain is demonstrated for tandem servers with independent cross-traffic", "In this paper, we develop a method for analyzing time-varying wireless channels in the context of the modern theory of the stochastic network calculus. In particular, our technique is applicable to channels that can be modeled as Markov chains, which is the case of channels subject to Rayleigh fading. Our approach relies on theoretical results on the convergence time of reversible Markov processes and is applicable to chains with an arbitrary number of states. We provide two expressions for the delay tail distribution of traffic transmitted over a fading channel fed by a Markov source. The first expression is tighter and only requires a simple numerical minimization, the second expression is looser, but is in closed form." ] }
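The transform-domain premise in the related-work paragraph above can be illustrated numerically: with bandwidth normalized and the capacity measured in nats, exp(C) = 1 + SNR, so adding per-hop capacities in the original domain becomes multiplication in the transform domain. This is a minimal numerical sketch of that identity, not the cited dioid-algebra calculus itself:

```python
import math

def capacity_nats(snr):
    """Shannon capacity per channel use in nats: C = log(1 + SNR)."""
    return math.log(1.0 + snr)

def to_transform_domain(snr):
    """Isomorphic exponential transform of the capacity: exp(C) = 1 + SNR."""
    return math.exp(capacity_nats(snr))

# Two-hop illustration: the sum of per-hop capacities corresponds to a
# product of transform-domain quantities, which is simpler to manipulate.
two_hop = capacity_nats(3.0) + capacity_nats(1.0)
same_thing = math.log(to_transform_domain(3.0) * to_transform_domain(1.0))
```

This is the computational convenience the paragraph alludes to: logarithms of SNR-dependent services are awkward to convolve, whereas their exponential images compose by ordinary multiplication.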
1704.02786
2607158040
The Web is replete with tutorial-style content on how to accomplish programming tasks. Unfortunately, even top-ranked tutorials suffer from severe security vulnerabilities, such as cross-site scripting (XSS), and SQL injection (SQLi). Assuming that these tutorials influence real-world software development, we hypothesize that code snippets from popular tutorials can be used to bootstrap vulnerability discovery at scale. To validate our hypothesis, we propose a semi-automated approach to find recurring vulnerabilities starting from a handful of top-ranked tutorials that contain vulnerable code snippets. We evaluate our approach by performing an analysis of tens of thousands of open-source web applications to check if vulnerabilities originating in the selected tutorials recur. Our analysis framework has been running on a standard PC, analyzed 64,415 PHP codebases hosted on GitHub thus far, and found a total of 117 vulnerabilities that have a strong syntactic similarity to vulnerable code snippets present in popular tutorials. In addition to shedding light on the anecdotal belief that programmers reuse web tutorial code in an ad hoc manner, our study finds disconcerting evidence of insufficiently reviewed tutorials compromising the security of open-source projects. Moreover, our findings testify to the feasibility of large-scale vulnerability discovery using poorly written tutorials as a starting point.
Despite modern software design processes and state-of-the-art programming environments, real-world software development accommodates ad hoc code reuse. In their seminal work on code clone detection, @cite_1, citing earlier work @cite_0 @cite_2, state that 5-10% of the source code of large programs is duplicated, and that an initial motivation for code clone detection was that ridding software of such seemingly redundant code might achieve a performance gain. Thus, traditional code clone detection tools seek code replicas in a single codebase, or in a set of codebases with the same provenance. This has guided the design of several code clone detection tools.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_2" ], "mid": [ "1975394407", "2157532207", "" ], "abstract": [ "Software developers often duplicate source code to replicate functionality. This practice can hinder the maintenance of a software project: bugs may arise when two identical code segments are edited inconsistently. This paper presents DejaVu, a highly scalable system for detecting these general syntactic inconsistency bugs. DejaVu operates in two phases. Given a target code base, a parallel inconsistent clone analysis first enumerates all groups of source code fragments that are similar but not identical. Next, an extensible buggy change analysis framework refines these results, separating each group of inconsistent fragments into a fine-grained set of inconsistent changes and classifying each as benign or buggy. On a 75+ million line pre-production commercial code base, DejaVu executed in under five hours and produced a report of over 8,000 potential bugs. Our analysis of a sizable random sample suggests with high likelihood that at this report contains at least 2,000 true bugs and 1,000 code smells. These bugs draw from a diverse class of software defects and are often simple to correct: syntactic inconsistencies both indicate problems and suggest solutions.", "Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. 
Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.", "" ] }
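A toy version of the classical single-codebase clone detection discussed above: token fragments are normalized (identifiers and numeric literals abstracted away) and bucketed by hash, so renamed copies land in the same bucket. The file names and normalization rules are illustrative only, not the cited tools' actual pipelines:

```python
import hashlib

def normalize(tokens):
    """Abstract identifiers and numeric literals so renamed clones still match."""
    out = []
    for t in tokens:
        if t.isidentifier():
            out.append("ID")
        elif t.replace(".", "", 1).isdigit():
            out.append("LIT")
        else:
            out.append(t)
    return out

def clone_groups(fragments):
    """Bucket token-level fragments by the hash of their normalized form;
    any bucket with more than one member is a clone group."""
    buckets = {}
    for name, toks in fragments.items():
        key = hashlib.sha1(" ".join(normalize(toks)).encode()).hexdigest()
        buckets.setdefault(key, []).append(name)
    return [group for group in buckets.values() if len(group) > 1]

frags = {
    "a.php": ["x", "=", "x", "+", "1"],
    "b.php": ["y", "=", "y", "+", "1"],   # a renamed copy of a.php
    "c.php": ["x", "=", "x", "*", "2"],   # structurally different
}
```

The paper's observation is that the same-provenance assumption baked into such tools breaks down on the Web: the "clones" it hunts recur across unrelated GitHub codebases that copied the same vulnerable tutorial snippet.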
1704.02708
2605665212
In Valiant's model of evolution, a class of representations is evolvable iff a polynomial-time process of random mutations guided by selection converges with high probability to a representation as @math -close as desired from the optimal one, for any required @math . Several previous positive results exist that can be related to evolving a vector space, but each former result imposes restrictions either on (re)initialisations, distributions, performance functions and/or the mutator. In this paper, we show that all it takes to evolve a complete normed vector space is merely a set that generates the space. Furthermore, it takes only @math steps and it is essentially strictly monotonic, agnostic and handles target drifts that rival some proven in fairly restricted settings. In the context of the model, we bring to the fore new results not documented previously. Evolution appears to occur in a mean-divergence model reminiscent of the Markowitz mean-variance model for portfolio selection, and the risk-return efficient frontier of evolution shows an interesting pattern: when far from the optimum, the mutator always has access to mutations close to the efficient frontier. Toy experiments in supervised and unsupervised learning display promising directions for this scheme to be used as a (new) provable gradient-free stochastic optimisation algorithm.
The second contribution of @cite_1 is more direct, since it trades the complex mutator for a much simpler randomized hill climber. However, the analysis is now significantly more restricted, as evolution is proven only for the quadratic loss and the distribution is restricted to a ball on @math. Also, evolution still suffers downsides, as the mutator is computationally quite ineffective and biologically implausible: the neighborhood in which new mutants are sought is huge (polynomial in @math and other factors), and its stock of available mutations is resampled at each generation. Finally, neither of @cite_1's schemes is known to be agnostic or stable in any way; we note that stability is an important notion in biology, but it is not a feature of Valiant's original evolvability model.
{ "cite_N": [ "@cite_1" ], "mid": [ "2130700086" ], "abstract": [ "We consider the problem of predicting a random variable X from observations, denoted by a random variable Z. It is well known that the conditional expectation E[X|Z] is the optimal L^2 predictor (also known as \"the least-mean-square error\" predictor) of X, among all (Borel measurable) functions of Z. In this correspondence, we provide necessary and sufficient conditions for the general loss functions under which the conditional expectation is the unique optimal predictor. We show that E[X|Z] is the optimal predictor for all Bregman loss functions (BLFs), of which the L^2 loss function is a special case. Moreover, under mild conditions, we show that the BLFs are exhaustive, i.e., if for every random variable X, the infimum of E[F(X,y)] over all constants y is attained by the expectation E[X], then F is a BLF." ] }
1704.02708
2605665212
In Valiant's model of evolution, a class of representations is evolvable iff a polynomial-time process of random mutations guided by selection converges with high probability to a representation as @math -close as desired from the optimal one, for any required @math . Several previous positive results exist that can be related to evolving a vector space, but each former result imposes restrictions either on (re)initialisations, distributions, performance functions and/or the mutator. In this paper, we show that all it takes to evolve a complete normed vector space is merely a set that generates the space. Furthermore, it takes only @math steps and it is essentially strictly monotonic, agnostic and handles target drifts that rival some proven in fairly restricted settings. In the context of the model, we bring to the fore new results not documented previously. Evolution appears to occur in a mean-divergence model reminiscent of the Markowitz mean-variance model for portfolio selection, and the risk-return efficient frontier of evolution shows an interesting pattern: when far from the optimum, the mutator always has access to mutations close to the efficient frontier. Toy experiments in supervised and unsupervised learning display promising directions for this scheme to be used as a (new) provable gradient-free stochastic optimisation algorithm.
Our main result suffers none of these downsides: our mutator meets time and space optimality properties (Section ), we do not change the set of mutations (Section ), and we do not restart. Also, our evolvability scheme is agnostic (Section ), stable (Section ), and handles significant drift (Section ). Moreover, instead of fixed-degree polynomials, we consider any finite-valued function @math; thus, we can evolve functions with infinite Taylor expansion, something @cite_1 does not cover. It is also not clear whether a simple trick to extend @cite_1 (replacing variables by bounded functions) is possible without endangering the distribution-support assumption or the complexity parameters. Finally, our mutator yields an extremely simple and provable evolutionary scheme, implementable using a few lines of code, as sketched in Algorithm ( ).
{ "cite_N": [ "@cite_1" ], "mid": [ "2130700086" ], "abstract": [ "We consider the problem of predicting a random variable X from observations, denoted by a random variable Z. It is well known that the conditional expectation E[X|Z] is the optimal L^2 predictor (also known as \"the least-mean-square error\" predictor) of X, among all (Borel measurable) functions of Z. In this correspondence, we provide necessary and sufficient conditions for the general loss functions under which the conditional expectation is the unique optimal predictor. We show that E[X|Z] is the optimal predictor for all Bregman loss functions (BLFs), of which the L^2 loss function is a special case. Moreover, under mild conditions, we show that the BLFs are exhaustive, i.e., if for every random variable X, the infimum of E[F(X,y)] over all constants y is attained by the expectation E[X], then F is a BLF." ] }
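The gradient-free, mutate-then-select optimisation loop that this record's abstract alludes to can be illustrated with a minimal sketch. This is not the paper's actual mutator: the Gaussian mutations, the fixed step size, and the quadratic toy target below are all assumptions made purely for the demo.

```python
import random

def evolve(fitness, init, step=0.5, steps=2000, seed=0):
    """Random mutation guided by selection: keep a mutation only if it
    does not decrease fitness (neutral mutations are also accepted)."""
    rng = random.Random(seed)
    x = list(init)
    best = fitness(x)
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, step) for xi in x]  # random mutation
        f = fitness(cand)
        if f >= best:                                    # selection step
            x, best = cand, f
    return x, best

# Toy target: maximise -||x - (1, -2)||^2, whose optimum is at (1, -2).
target = (1.0, -2.0)
fit = lambda v: -sum((vi - ti) ** 2 for vi, ti in zip(v, target))
x_best, f_best = evolve(fit, [5.0, 5.0])
```

With enough accepted mutations the iterate drifts close to the optimum without ever evaluating a gradient, which is the spirit of the scheme.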
1704.02906
2607448608
This paper describes an intuitive generalization of Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high-probability modes. Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator-specific GAN objective function with a diversity-enforcing term that encourages different generators to generate diverse samples, using a user-defined similarity-based function. (2) We modify the discriminator objective function so that, along with identifying real and fake samples, the discriminator has to predict which generator produced a given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally, we show that our framework is able to produce high-quality diverse samples for challenging tasks such as image face generation and image-to-image translation. We also show that it is capable of learning a better feature representation in an unsupervised setting.
W-GAN @cite_22 is a recent technique which employs an integral probability metric based on the earth mover distance, rather than the JS-divergence that the original GAN uses. BEGAN @cite_4 builds upon W-GAN, using an autoencoder-based equilibrium-enforcing technique alongside the Wasserstein distance. DCGAN @cite_18 was an iconic technique which used a fully convolutional generator and discriminator for the first time; together with the introduction of batch normalization, it stabilized the training procedure and produced compelling generations. GoGAN @cite_16 introduced a maximum-margin formulation for training the discriminator, alongside the earth mover distance based on the Wasserstein-1 metric. @cite_9 introduced a technique and theoretical formulation arguing for the importance of multiple generators and discriminators in order to completely model the data distribution.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_9", "@cite_16" ], "mid": [ "2173520492", "2605195953", "", "2952745707", "2607491080" ], "abstract": [ "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "", "We show that training of generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator generator game for a special class of generators with natural training objectives when generator capacity and training set sizes are moderate. This existence of equilibrium inspires MIX+GAN protocol, which can be combined with any existing GAN training, and empirically shown to improve some of them.", "Traditional generative adversarial networks (GAN) and many of its variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is from the true data distribution. A recent advance called the WGAN based on Wasserstein distance can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode collapse issues that are common in the GAN training. In this work, we aim at improving on the WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carrying out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We call this method Gang of GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks. We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over baseline WGAN." ] }
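The two objective modifications described in this record's abstract — a diversity-enforcing term built on a user-defined similarity function, and a discriminator head that must also identify which generator produced a fake sample — can be sketched schematically. The mean-feature cosine similarity and the (K+1)-way cross-entropy head below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def diversity_penalty(samples_per_gen):
    """Mean pairwise cosine similarity between the mean features of the
    batches produced by different generators; minimising this term
    pushes the generators towards different regions of sample space."""
    feats = [s.mean(axis=0) for s in samples_per_gen]
    feats = [f / (np.linalg.norm(f) + 1e-8) for f in feats]
    k = len(feats)
    sims = [feats[i] @ feats[j] for i in range(k) for j in range(i + 1, k)]
    return float(np.mean(sims))

def discriminator_id_loss(logits, gen_ids, num_gens):
    """Cross-entropy for a (num_gens + 1)-way discriminator head:
    class 0 = real, class g + 1 = 'fake, produced by generator g'."""
    assert logits.shape[1] == num_gens + 1
    targets = gen_ids + 1                           # fake samples carry their generator's id
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(targets)), targets].mean())

rng = np.random.default_rng(0)
# Two toy "generators" emitting 8 four-dimensional samples each.
batches = [rng.normal(loc=m, size=(8, 4)) for m in (0.0, 3.0)]
pen = diversity_penalty(batches)
id_loss = discriminator_id_loss(rng.normal(size=(8, 3)), rng.integers(0, 2, 8), num_gens=2)
```

In an actual training loop the generator loss would add `pen` (weighted) to the adversarial term, and `id_loss` would replace the binary real/fake discriminator loss.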
1704.02906
2607448608
This paper describes an intuitive generalization of Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high-probability modes. Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator-specific GAN objective function with a diversity-enforcing term that encourages different generators to generate diverse samples, using a user-defined similarity-based function. (2) We modify the discriminator objective function so that, along with identifying real and fake samples, the discriminator has to predict which generator produced a given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally, we show that our framework is able to produce high-quality diverse samples for challenging tasks such as image face generation and image-to-image translation. We also show that it is capable of learning a better feature representation in an unsupervised setting.
In terms of employing multiple generators, our work is closest to @cite_9 @cite_8 @cite_15 . However, unlike these approaches, our method explicitly enforces the multiple generators to capture diverse modes.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_8" ], "mid": [ "2952745707", "2559978439", "2471149695" ], "abstract": [ "We show that training of generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator generator game for a special class of generators with natural training objectives when generator capacity and training set sizes are moderate. This existence of equilibrium inspires MIX+GAN protocol, which can be combined with any existing GAN training, and empirically shown to improve some of them.", "Communicating and sharing intelligence among agents is an important facet of achieving Artificial General Intelligence. As a first step towards this challenge, we introduce a novel framework for image generation: Message Passing Multi-Agent Generative Adversarial Networks (MPM GANs). While GANs have recently been shown to be very effective for image generation and other tasks, these networks have been limited to mostly single generator-discriminator networks. We show that we can obtain multi-agent GANs that communicate through message passing to achieve better image generation. The objectives of the individual agents in this framework are two fold: a co-operation objective and a competing objective. The co-operation objective ensures that the message sharing mechanism guides the other generator to generate better than itself while the competing objective encourages each generator to generate better than its counterpart. We analyze and visualize the messages that these GANs share among themselves in various scenarios. We quantitatively show that the message sharing formulation serves as a regularizer for the adversarial training. Qualitatively, we show that the different generators capture different traits of the underlying data distribution.", "We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation." ] }
1704.02665
2606199710
In this paper, we present a new feature selection method that is suitable for both unsupervised and supervised problems. We build upon the recently proposed Infinite Feature Selection (IFS) method where feature subsets of all sizes (including infinity) are considered. We extend IFS in two ways. First, we propose a supervised version of it. Second, we propose new ways of forming the feature adjacency matrix that perform better for unsupervised problems. We extensively evaluate our methods on many benchmark datasets, including large image-classification datasets (PASCAL VOC), and show that our methods outperform both the IFS and the widely used "minimum-redundancy maximum-relevancy (mRMR)" feature selection algorithm.
In many practical machine learning and classification tasks, we encounter very large feature spaces with thousands of irrelevant and/or redundant features. The presence of such features causes high computational complexity, poor generalization performance and decreased learning accuracy @cite_15 @cite_0 . The task of feature selection is to identify a small subset of the most important, i.e. representative and discriminative, features. Many feature selection algorithms have been proposed in the last three decades (e.g. @cite_0 @cite_30 @cite_13 @cite_27 ). Among them, filter methods have generated much interest, because they are simple, fast and not biased towards any particular learner. In these methods, each candidate feature subset is evaluated independently of the final learner, based on a diverse set of evaluation measures including mutual information @cite_32 @cite_26 , consistency @cite_17 , significance @cite_2 @cite_10 , etc.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_10", "@cite_32", "@cite_0", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2017337590", "2156504490", "2153338628", "2154053567", "2119479037", "2056168656", "1849729440", "2175099382", "2119387367", "2169038408" ], "abstract": [ "Abstract In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes.", "We present a unifying framework for information theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. This is in response to the question: \"what are the implicit statistical assumptions of feature selection criteria based on mutual information?\". To answer this, we adopt a different strategy than is usual in the feature selection literature--instead of trying to define a criterion, we derive one, directly from a clearly specified objective function: the conditional likelihood of the training labels. While many hand-designed heuristic criteria try to optimize a definition of feature 'relevancy' and 'redundancy', our approach leads to a probabilistic framework which naturally incorporates these concepts. As a result we can unify the numerous criteria published over the last two decades, and show them to be low-order approximations to the exact (but intractable) optimisation problem. The primary contribution is to show that common heuristics for information based feature selection (including Markov Blanket algorithms as a special case) are approximate iterative maximisers of the conditional likelihood. A large empirical study provides strong evidence to favour certain classes of criteria, in particular those that balance the relative size of the relevancy redundancy terms. Overall we conclude that the JMI criterion (Yang and Moody, 1999; , 2008) provides the best tradeoff in terms of accuracy, stability, and flexibility with small data samples.", "We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.", "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy.", "Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.", "With the advent of high dimensionality, adequate identification of relevant features of the data has become indispensable in real-world scenarios. In this context, the importance of feature selection is beyond doubt and different methods have been developed. However, with such a vast body of algorithms available, choosing the adequate feature selection method is not an easy-to-solve question and it is necessary to check their effectiveness on different situations. Nevertheless, the assessment of relevant features is difficult in real datasets and so an interesting option is to use artificial data. In this paper, several synthetic datasets are employed for this purpose, aiming at reviewing the performance of feature selection methods in the presence of a crescent number or irrelevant features, noise in the data, redundancy and interaction between attributes, as well as a small ratio between number of samples and number of features. Seven filters, two embedded methods, and two wrappers are applied over eleven synthetic datasets, tested by four classifiers, so as to be able to choose a robust method, paving the way for its application to real datasets.", "In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with a very large number of features.", "Feature Selection (FS) is an important pre-processing step in data mining and classification tasks. The aim of FS is to select a small subset of most important and discriminative features. All the traditional feature selection methods assume that the entire input feature set is available from the beginning. However, online streaming features (OSF) are an integral part of many real-world applications. In OSF, the number of training examples is fixed while the number of features grows with time as new features stream in. A critical challenge for online streaming feature selection (OSFS) is the unavailability of the entire feature set before learning starts. Several efforts have been made to address the OSFS problem, however they all need some prior knowledge about the entire feature space to select informative features. In this paper, the OSFS problem is considered from the rough sets (RS) perspective and a new OSFS algorithm, called OS-NRRSAR-SA, is proposed. The main motivation for this consideration is that RS-based data mining does not require any domain knowledge other than the given dataset. The proposed algorithm uses the classical significance analysis concepts in RS theory to control the unknown feature space in OSFS problems. This algorithm is evaluated extensively on several high-dimensional datasets in terms of compactness, classification accuracy, run-time, and robustness against noises. Experimental results demonstrate that the algorithm achieves better results than existing OSFS algorithms, in every way.", "Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed techniques. In this article, we make the interested reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques, and discussing their use, variety and potential in a number of both common as well as upcoming bioinformatics applications. Contact: yvan.saeys@psb.ugent.be Supplementary information: http: bioinformatics.psb.ugent.be supplementary_data yvsae fsreview", "Feature selection is an effective technique in dealing with dimensionality reduction. For classification, it is used to find an \"optimal\" subset of relevant features such that the overall accuracy of classification is increased while the data size is reduced and the comprehensibility is improved. Feature selection methods contain two important aspects: evaluation of a candidate feature subset and search through the feature space. Existing algorithms adopt various measures to evaluate the goodness of feature subsets. This work focuses on inconsistency measure according to which a feature subset is inconsistent if there exist at least two instances with same feature values but with different class labels. We compare inconsistency measure with other measures and study different search strategies such as exhaustive, complete, heuristic and random search, that can be applied to this measure. We conduct an empirical study to examine the pros and cons of these search methods, give some guidelines on choosing a search method, and compare the classifier error rates before and after feature selection." ] }
1704.02665
2606199710
In this paper, we present a new feature selection method that is suitable for both unsupervised and supervised problems. We build upon the recently proposed Infinite Feature Selection (IFS) method where feature subsets of all sizes (including infinity) are considered. We extend IFS in two ways. First, we propose a supervised version of it. Second, we propose new ways of forming the feature adjacency matrix that perform better for unsupervised problems. We extensively evaluate our methods on many benchmark datasets, including large image-classification datasets (PASCAL VOC), and show that our methods outperform both the IFS and the widely used "minimum-redundancy maximum-relevancy (mRMR)" feature selection algorithm.
Most filter methods rely on the concept of feature relevance @cite_26 @cite_2 @cite_14 . For a given learning task, a feature can be in one of three disjoint categories: strongly relevant, weakly relevant and irrelevant. Strongly relevant features contain information that is not present in any subset of the other features, and are therefore always necessary for the underlying task. Weakly relevant features contain information that is already present in a subset of the other (strongly or weakly relevant) features; depending on the conditions, they can be unnecessary (redundant) or necessary (non-redundant). Irrelevant features contain no useful information and are never necessary. An ideal feature selection algorithm should eliminate all irrelevant features and all redundant weakly relevant features. However, constructing such an algorithm is computationally infeasible, as ascertaining weak relevancy requires checking exponentially many combinations of features. Therefore, several heuristics have been proposed in the literature which consider limited combination sizes @cite_7 @cite_4 @cite_32 @cite_35 @cite_14 @cite_25 @cite_10 .
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_4", "@cite_7", "@cite_32", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "2149454242", "2156571267", "2156504490", "2149772057", "2043772506", "2154053567", "1849729440", "2153338628", "1580817707" ], "abstract": [ "We propose in this paper a very fast feature selection technique based on conditional mutual information. By picking features which maximize their mutual information with the class to predict conditional to any feature already picked, it ensures the selection of features which are both individually informative and two-by-two weakly dependant. We show that this feature selection method outperforms other classical algorithms, and that a naive Bayesian classifier built with features selected that way achieves error rates similar to those of state-of-the-art methods such as boosting or SVMs. The implementation we propose selects 50 features among 40,000, based on a training set of 500 examples in a tenth of a second on a standard 1Ghz PC.", "Feature selection is applied to reduce the number of features in many applications where data has hundreds or thousands of features. Existing feature selection methods mainly focus on finding relevant features. In this paper, we show that feature relevance alone is insufficient for efficient feature selection of high-dimensional data. We define feature redundancy and propose to perform explicit redundancy analysis in feature selection. A new framework is introduced that decouples relevance analysis and redundancy analysis. We develop a correlation-based method for relevance and redundancy analysis, and conduct an empirical study of its efficiency and effectiveness comparing with representative methods.", "We present a unifying framework for information theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. 
This is in response to the question: \"what are the implicit statistical assumptions of feature selection criteria based on mutual information?\". To answer this, we adopt a different strategy than is usual in the feature selection literature--instead of trying to define a criterion, we derive one, directly from a clearly specified objective function: the conditional likelihood of the training labels. While many hand-designed heuristic criteria try to optimize a definition of feature 'relevancy' and 'redundancy', our approach leads to a probabilistic framework which naturally incorporates these concepts. As a result we can unify the numerous criteria published over the last two decades, and show them to be low-order approximations to the exact (but intractable) optimisation problem. The primary contribution is to show that common heuristics for information based feature selection (including Markov Blanket algorithms as a special case) are approximate iterative maximisers of the conditional likelihood. A large empirical study provides strong evidence to favour certain classes of criteria, in particular those that balance the relative size of the relevancy redundancy terms. Overall we conclude that the JMI criterion (Yang and Moody, 1999; , 2008) provides the best tradeoff in terms of accuracy, stability, and flexibility with small data samples.", "This paper investigates the application of the mutual information criterion to evaluate a set of candidate features and to select an informative subset to be used as input data for a neural network classifier. Because the mutual information measures arbitrary dependencies between random variables, it is suitable for assessing the \"information content\" of features in complex classification tasks, where methods bases on linear relations (like the correlation) are prone to mistakes. The fact that the mutual information is independent of the coordinates chosen permits a robust estimation. 
Nonetheless, the use of the mutual information for tasks characterized by high input dimensionality requires suitable approximations because of the prohibitive demands on computation and samples. An algorithm is proposed that is based on a \"greedy\" selection of the features and that takes both the mutual information with respect to the output class and with respect to the already-selected features into account. Finally the results of a series of experiments are discussed. >", "The effect of selecting varying numbers and kinds of features for use in predicting category membership was investigated on the Reuters and MUC-3 text categorization data sets. Good categorization performance was achieved using a statistical classifier and a proportional assignment strategy. The optimal feature set size for word-based indexing was found to be surprisingly low (10 to 15 features) despite the large training sets. The extraction of new text features by syntactic analysis and feature clustering was investigated on the Reuters data set. Syntactic indexing phrases, clusters of these phrases, and clusters of words were all found to provide less effective representations than individual words.", "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. 
We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminant analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy.", "In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computationally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively handles datasets with a very large number of features.", "We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. 
In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.", "The paper presents an original filter approach for effective feature selection in classification tasks with a very large number of input variables. The approach is based on the use of a new information theoretic selection criterion: the double input symmetrical relevance (DISR). The rationale of the criterion is that a set of variables can return information on the output class that is higher than the sum of the information of each variable taken individually. This property will be made explicit by defining the measure of variable complementarity. A feature selection filter based on the DISR criterion is compared in theoretical and experimental terms to recently proposed information theoretic criteria. Experimental results on a set of eleven microarray classification tasks show that the proposed technique is competitive with existing filter selection methods." ] }
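The greedy mRMR selection described in the abstracts above can be sketched compactly. The following is a minimal illustration, not any of the cited implementations: it estimates mutual information empirically over discrete feature values and greedily adds the feature maximizing relevance I(f;y) minus mean redundancy with the already-selected set (the two-stage wrapper refinement from the mRMR paper is omitted).

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in nats) between two discrete vectors."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.sum(x == a) / n for a in set(x)}
    py = {b: np.sum(y == b) / n for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
               for (a, b), c in joint.items())

def mrmr_select(X, y, k):
    """Greedy mRMR: pick the most relevant feature first, then repeatedly
    add the feature maximizing I(f;y) minus its mean MI with the selection."""
    n_feat = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_feat)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

Continuous features would first need discretization (or a density-based MI estimator), since the empirical estimate here assumes categorical values.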
1704.02492
2608028431
Person re-identification is generally divided into two parts: first, how to represent a pedestrian by discriminative visual descriptors, and second, how to compare them by suitable distance metrics. Conventional methods isolate these two parts: the first is usually unsupervised and the second supervised. The Bag-of-Words (BoW) model is a widely used image representation descriptor in part one. Its codebook is simply generated by clustering visual features in Euclidean space. In this paper, we propose to use the metric learning techniques of part two in the codebook generation phase of BoW. In particular, the proposed codebook is clustered under a Mahalanobis distance which is learned in a supervised way. Extensive experiments prove that our proposed method is effective. With several low-level features extracted on superpixels and fused together, our method outperforms the state of the art on person re-identification benchmarks including VIPeR, PRID450S, and Market1501.
Generally speaking, person re-id includes two basic parts: how to represent a pedestrian and how to compare such representations, and most efforts on person re-id can be roughly divided into these two categories @cite_55 .
{ "cite_N": [ "@cite_55" ], "mid": [ "2531440880" ], "abstract": [ "Person re-identification (re-ID) has become increasingly popular in the community due to its application and research significance. It aims at spotting a person of interest in other cameras. In the early days, hand-crafted algorithms and small-scale evaluation were predominantly reported. Recent years have witnessed the emergence of large-scale datasets and deep learning systems which make use of large data volumes. Considering different tasks, we classify most current re-ID methods into two classes, i.e., image-based and video-based; in both tasks, hand-crafted and deep learning systems will be reviewed. Moreover, two new re-ID tasks which are much closer to real-world applications are described and discussed, i.e., end-to-end re-ID and fast re-ID in very large galleries. This paper: 1) introduces the history of person re-ID and its relationship with image classification and instance retrieval; 2) surveys a broad selection of the hand-crafted systems and the large-scale methods in both image- and video-based re-ID; 3) describes critical future directions in end-to-end re-ID and fast retrieval in large galleries; and 4) finally briefs some important yet under-developed issues." ] }
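The codebook idea summarized above (clustering BoW features under a learned Mahalanobis metric rather than in Euclidean space) reduces to ordinary k-means after a linear projection, because d_M(x, y) = ||L(x - y)||^2 whenever M = L^T L. Below is a minimal sketch assuming a positive-definite M has already been learned from labeled data; the supervised metric-learning step itself is not shown.

```python
import numpy as np

def mahalanobis_kmeans(feats, M, k, iters=20):
    """Cluster features under d_M(x, y) = (x - y)^T M (x - y), M positive
    definite, by projecting with L (where M = L^T L) and running plain
    Euclidean k-means on the projected points z = L x."""
    L = np.linalg.cholesky(M).T          # M = L^T L via the Cholesky factor
    Z = feats @ L.T                      # project into the learned space
    # deterministic farthest-point initialization
    centers = [Z[0]]
    for _ in range(1, k):
        d = np.min([((Z - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Z[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):               # standard Lloyd iterations
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(0)
    return centers, labels

def encode_bow(feats, M, centers):
    """Hard-assignment BoW histogram under the same learned metric."""
    L = np.linalg.cholesky(M).T
    Z = feats @ L.T
    d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

With M = I this degenerates to the conventional Euclidean codebook, which makes the role of the learned metric explicit: it only changes the geometry in which visual words are formed and assigned.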
1704.02797
2953059308
This paper investigates the performance of MIMO ad hoc networks that employ transmit diversity, as delivered by the Alamouti scheme, and/or spatial multiplexing, according to the Vertical Bell Labs Layered Space-Time system (V-BLAST). Both techniques are implemented in a discrete-event network simulator by focusing on their overall effect on the resulting signal-to-interference-plus-noise ratio (SINR) at the intended receiver. Unlike previous works that have studied fully-connected scenarios or have assumed simple abstractions to represent MIMO behavior, this paper evaluates MIMO ad hoc networks that are not fully connected by taking into account the effects of multiple antennas on the clear channel assessment (CCA) mechanism of CSMA-like medium access control (MAC) protocols. In addition to presenting a performance evaluation of ad hoc networks operating according to each individual MIMO scheme, this paper proposes simple modifications to the IEEE 802.11 DCF MAC to allow the joint operation of both MIMO techniques. Hence, each pair of nodes is allowed to select the best MIMO configuration for the impending data transfer. The joint operation is based on three operation modes that are selected based on the estimated SINR at the intended receiver and its comparison with a set of threshold values. The performance of ad hoc networks operating with the joint MIMO scheme is compared with their operation using each individual MIMO scheme and the standard SISO IEEE 802.11. Performance results are presented based on MAC-level throughput per node, delay, and fairness under saturated traffic conditions.
Regarding the application of MIMO systems to exploit diversity and/or multiplexing gains, Stamoulis and Al-Dhahir @cite_18 have investigated the impact of space-time block codes (STBC) on IEEE 802.11a WLANs operating in ad hoc mode. They have used packet traces in the ns-2 simulator to evaluate the benefits of STBC on the performance of upper-layer protocols, such as TCP. They have assumed fully-connected networks with the simplest @math Alamouti scheme. Later, Hu and Zhang @cite_22 have attempted to model MIMO ad hoc networks by focusing on IEEE 802.11 with STBC. Their modeling approach disregards the impact of network topology by assuming that events experienced by one station are statistically the same as those of other stations. Therefore, in practice, each node is treated as surrounded by the same average number of nodes, and a multihop network is simplified to many single-hop networks, where interactions occur only with immediate neighbors.
{ "cite_N": [ "@cite_18", "@cite_22" ], "mid": [ "2164024501", "1976640996" ], "abstract": [ "By employing more than one antenna at the transmitter and by properly coding data across the transmit antennas, physical layers (PHYs) with space-time block codes (STBCs) promise increased data rates with minimal decoding complexity at the receiver. This paper presents a comprehensive study of how the STBC gains at the PHY translate to significant network performance improvement in 802.11a wireless local area networks. We base our study on a detailed, across-all-layers, simulation of an 802.11a system. We have extended the network simulator with an implementation of the 802.11a PHY, which allows us to assess the impact of STBC not only at the PHY layer, but at the higher layers as well. An extensive set of simulations illustrates the merits of transmit diversity (in the form of STBC) and sheds light on how performance can be improved for transmission control protocol (TCP) traffic. Essentially, STBC presents to TCP a \"smoother\" wireless channel; this is corroborated by a brief theoretical analysis as well.", "In this paper, we explore the utility of recently discovered multiple-antenna techniques (namely MIMO techniques) for medium access control (MAC) design and routing in mobile ad hoc networks. Specifically, we focus on ad hoc networks where the spatial diversity technique is used to combat fading and achieve robustness in the presence of user mobility. We first examine the impact of spatial diversity on the MAC design, and devise a MIMO MAC protocol accordingly. We then develop analytical methods to characterize the corresponding saturation throughput for MIMO multi-hop networks. Building on the throughout analysis, we study the impact of MIMO MAC on routing. We characterize the optimal hop distance that minimizes the end-to-end delay in a large network. 
For completeness, we also study MAC design using directional antennas for the case where the channel has a strong line of sight (LOS) component. Our results show that the spatial diversity technique and the directional antenna technique can enhance the performance of mobile ad hoc networks significantly." ] }
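The joint scheme described above selects among three operation modes by comparing the estimated SINR at the intended receiver with a set of thresholds. A toy version of that selection logic is sketched below; the mode names and threshold values are illustrative placeholders, not the paper's calibrated settings.

```python
import math

def sinr_db(p_signal, p_interferers, p_noise):
    """Estimated SINR in dB from received powers given on a linear scale."""
    return 10.0 * math.log10(p_signal / (sum(p_interferers) + p_noise))

def select_mimo_mode(sinr, t_low=5.0, t_high=15.0):
    """Map the estimated SINR (dB) to one of three operation modes.
    Placeholder policy: robust Alamouti diversity on weak links,
    V-BLAST spatial multiplexing on strong ones."""
    if sinr < t_low:
        return "diversity"      # Alamouti STBC: maximize link robustness
    if sinr < t_high:
        return "hybrid"         # intermediate mode (placeholder name)
    return "multiplexing"       # V-BLAST: trade robustness for throughput
```

The design point this illustrates is that the sender needs only a scalar SINR estimate (e.g., fed back during the RTS/CTS exchange) to pick a configuration, rather than full channel state information.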
1704.02883
2606471276
It is practically impossible for users to memorize a large portfolio of strong and individual passwords for their online accounts. A solution is to generate passwords randomly and store them. Yet, storing passwords instead of memorizing them bears the risk of loss, e.g., in situations where the device on which the passwords are stored is damaged, lost, or stolen. This makes the creation of backups of the passwords indispensable. However, placing such backups at secure locations to protect them as well from loss and unauthorized access and keeping them up-to-date at the same time is an unsolved problem in practice. We present PASCO, a backup solution for passwords that solves this challenge. PASCO backups need not be updated, even when the user's password portfolio is changed. PASCO backups can be revoked without having physical access to them. This prevents password leakage, even when a user loses control over a backup. Additionally, we show how to extend PASCO to enable fully controllable emergency access. It allows a user to give someone else access to his passwords in urgent situations. We also present a security evaluation and an implementation of PASCO.
In 2007, Florêncio and Herley @cite_26 reported that users on average have 25 accounts. Due to the growth in online services, the number has strongly increased over the last ten years, which makes the memorization of secure passwords for all accounts practically impossible. Studies have shown that users typically cope with this challenge by selecting passwords that are easy to remember and reuse passwords across accounts @cite_51 @cite_8 @cite_52 @cite_6 @cite_48 @cite_9 . This makes the passwords vulnerable to various attacks such as brute-force @cite_22 @cite_1 @cite_23 , dictionary @cite_43 @cite_47 , and social engineering @cite_21 .
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_48", "@cite_9", "@cite_21", "@cite_1", "@cite_52", "@cite_6", "@cite_43", "@cite_23", "@cite_47", "@cite_51" ], "mid": [ "2171920515", "2054626033", "2104773223", "", "2413416220", "1546147126", "2538793708", "", "2394619600", "2048755632", "2135359429", "2097267243", "2073342447" ], "abstract": [ "We report the results of a large scale study of password use and password re-use habits. The study involved half a million users over a three month period. A client component on users' machines recorded a variety of password strength, usage and frequency metrics. This allows us to measure or estimate such quantities as the average number of passwords and average number of accounts each user has, how many passwords she types per day, how often passwords are shared among sites, and how often they are forgotten. We get extremely detailed data on password strength, the types and lengths of passwords chosen, and how they vary by site. The data is the first large scale study of its kind, and yields numerous other insights into the role the passwords play in users' online experience.", "Text-based passwords remain the dominant authentication method in computer systems, despite significant advancement in attackers' capabilities to perform password cracking. In response to this threat, password composition policies have grown increasingly complex. However, there is insufficient research defining metrics to characterize password strength and using them to evaluate password-composition policies. In this paper, we analyze 12,000 passwords collected under seven composition policies via an online study. We develop an efficient distributed method for calculating how effectively several heuristic password-guessing algorithms guess passwords. 
Leveraging this method, we investigate (a) the resistance of passwords created under different conditions to guessing, (b) the performance of guessing algorithms under different training sets, (c) the relationship between passwords explicitly created under a given composition policy and other passwords that happen to meet the same requirements, and (d) the relationship between guessability, as measured with password-cracking algorithms, and entropy estimates. Our findings advance understanding of both password-composition policies and metrics for quantifying password security.", "Given the widespread use of password authentication in online correspondence, subscription services, and shopping, there is growing concern about identity theft. When people reuse their passwords across multiple accounts, they increase their vulnerability; compromising one password can help an attacker take over several accounts. Our study of 49 undergraduates quantifies how many passwords they had and how often they reused these passwords. The majority of users had three or fewer passwords and passwords were reused twice. Furthermore, over time, password reuse rates increased because people accumulated more accounts but did not create more passwords. Users justified their habits. While they wanted to protect financial data and personal communication, reusing passwords made passwords easier to manage. Users visualized threats from human attackers, particularly viewing those close to them as the most motivated and able attackers; however, participants did not separate the human attackers from their potentially automated tools. They sometimes failed to realize that personalized passwords such as phone numbers can be cracked given a large enough dictionary and enough tries. We discuss how current systems support poor password practices. 
We also present potential changes in website authentication systems and password managers.", "", "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7-3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.", "Passwords are widely used for user authentication and, despite their weaknesses, will likely remain in use in the foreseeable future. Human-generated passwords typically have a rich structure, which makes them susceptible to guessing attacks. In this paper, we study the effectiveness of guessing attacks based on Markov models. Our contributions are two-fold. First, we propose a novel password cracker based on Markov models, which builds upon and extends ideas used by Narayanan and Shmatikov (CCS 2005). In extensive experiments we show that it can crack up to 69% of passwords at 10 billion guesses, more than all probabilistic password crackers we compared against. Second, we systematically analyze the idea that additional personal information about a user helps in speeding up password guessing. 
We find that, on average and by carefully choosing parameters, we can guess up to 5% more passwords, especially when the number of attempts is low. Furthermore, we show that the gain can go up to 30% for passwords that are actually based on personal attributes. These passwords are clearly weaker and should be avoided. Our cracker could be used by an organization to detect and reject them. To the best of our knowledge, we are the first to systematically study the relationship between chosen passwords and users' personal information. We test and validate our results over a wide collection of leaked password databases.", "While trawling online/offline password guessing has been intensively studied, only a few studies have examined targeted online guessing, where an attacker guesses a specific victim's password for a service, by exploiting the victim's personal information such as one sister password leaked from another of her accounts and some personally identifiable information (PII). A key challenge for targeted online guessing is to choose the most effective password candidates, while the number of guess attempts allowed by a server's lockout or throttling mechanisms is typically very small. We propose TarGuess, a framework that systematically characterizes typical targeted guessing scenarios with seven sound mathematical models, each of which is based on varied kinds of data available to an attacker. These models allow us to design novel and efficient guessing algorithms. Extensive experiments on 10 large real-world password datasets show the effectiveness of TarGuess. 
Particularly, TarGuess I-IV capture the four most representative scenarios and within 100 guesses: (1) TarGuess-I outperforms its foremost counterpart by 142% against security-savvy users and by 46% against normal users; (2) TarGuess-II outperforms its foremost counterpart by 169% on security-savvy users and by 72% against normal users; and (3) Both TarGuess-III and IV gain success rates over 73% against normal users and over 32% against security-savvy users. TarGuess-III and IV, for the first time, address the issue of cross-site online guessing when given the victim's one sister password and some PII.", "", "Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users' perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants' perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants' understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.", "We report on the largest corpus of user-chosen passwords ever studied, consisting of anonymized password histograms representing almost 70 million Yahoo! users, mitigating privacy concerns while enabling analysis of dozens of subpopulations based on demographic factors and site usage characteristics. 
This large data set motivates a thorough statistical treatment of estimating guessing difficulty by sampling from a secret distribution. In place of previously used metrics such as Shannon entropy and guessing entropy, which cannot be estimated with any realistically sized sample, we develop partial guessing metrics including a new variant of guesswork parameterized by an attacker's desired success rate. Our new metric is comparatively easy to approximate and directly relevant for security engineering. By comparing password distributions with a uniform distribution which would provide equivalent security against different forms of guessing attack, we estimate that passwords provide fewer than 10 bits of security against an online, trawling attack, and only about 20 bits of security against an optimal offline dictionary attack. We find surprisingly little variation in guessing difficulty; every identifiable group of users generated a comparably weak password distribution. Security motivations such as the registration of a payment card have no greater impact than demographic factors such as age and nationality. Even proactive efforts to nudge users towards better password choices with graphical feedback make little difference. More surprisingly, even seemingly distant language communities choose the same weak passwords and an attacker never gains more than a factor of 2 efficiency gain by switching from the globally optimal dictionary to population-specific lists.", "Choosing the most effective word-mangling rules to use when performing a dictionary-based password cracking attack can be a difficult task. In this paper we discuss a new method that generates password structures in highest probability order. We first automatically create a probabilistic context-free grammar based upon a training set of previously disclosed passwords. This grammar then allows us to generate word-mangling rules, and from them, password guesses to be used in password cracking. 
We will also show that this approach seems to provide a more effective way to crack passwords as compared to traditional methods by testing our tools and techniques on real password sets. In one series of experiments, training on a set of disclosed passwords, our approach was able to crack 28% to 129% more passwords than John the Ripper, a publicly available standard password cracking program.", "In this paper we attempt to determine the effectiveness of using entropy, as defined in NIST SP800-63, as a measurement of the security provided by various password creation policies. This is accomplished by modeling the success rate of current password cracking techniques against real user passwords. These data sets were collected from several different websites, the largest one containing over 32 million passwords. This focus on actual attack methodologies and real user passwords quite possibly makes this one of the largest studies on password security to date. In addition we examine what these results mean for standard password creation policies, such as minimum password length, and character set requirements.", "Today's Internet services rely heavily on text-based passwords for user authentication. The pervasiveness of these services coupled with the difficulty of remembering large numbers of secure passwords tempts users to reuse passwords at multiple sites. In this paper, we investigate for the first time how an attacker can leverage a known password from one site to more easily guess that user's password at other sites. We study several hundred thousand leaked passwords from eleven web sites and conduct a user survey on password reuse; we estimate that 43%-51% of users reuse the same password across multiple sites. We further identify a few simple tricks users often employ to transform a basic password between sites which can be used by an attacker to make password guessing vastly easier. 
We develop the first cross-site password-guessing algorithm, which is able to guess 30% of transformed passwords within 100 attempts compared to just 14% for a standard password-guessing algorithm without cross-site password knowledge." ] }
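Several abstracts above quantify password weakness with guessing models such as Markov models and PCFGs. The core of the Markov idea can be sketched as a toy order-1 character model with add-one smoothing; real crackers use higher orders, smoothing backoff, and enumerate candidates in decreasing-probability order, none of which is attempted here.

```python
from collections import defaultdict
import math

class MarkovPasswordModel:
    """Toy order-1 Markov model over characters with add-one smoothing,
    trained on a list of (e.g., leaked) passwords. Higher log_prob means
    a string is more 'password-like' and hence easier to guess."""
    START, END = "\x02", "\x03"   # sentinel boundary symbols

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, passwords):
        for pw in passwords:
            chars = [self.START] + list(pw) + [self.END]
            for a, b in zip(chars, chars[1:]):
                self.counts[a][b] += 1

    def log_prob(self, pw):
        chars = [self.START] + list(pw) + [self.END]
        vocab = 96   # assumed smoothing denominator (~printable ASCII + END)
        total = 0.0
        for a, b in zip(chars, chars[1:]):
            row = self.counts[a]
            total += math.log((row[b] + 1) / (sum(row.values()) + vocab))
        return total
```

Trained on a leaked list, such a model scores structured human-chosen strings far above random ones, which is precisely why reused and predictable passwords fall quickly to guessing attacks.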
1704.02883
2606471276
It is practically impossible for users to memorize a large portfolio of strong and individual passwords for their online accounts. A solution is to generate passwords randomly and store them. Yet, storing passwords instead of memorizing them bears the risk of loss, e.g., in situations where the device on which the passwords are stored is damaged, lost, or stolen. This makes the creation of backups of the passwords indispensable. However, placing such backups at secure locations to protect them as well from loss and unauthorized access and keeping them up-to-date at the same time is an unsolved problem in practice. We present PASCO, a backup solution for passwords that solves this challenge. PASCO backups need not be updated, even when the user's password portfolio is changed. PASCO backups can be revoked without having physical access to them. This prevents password leakage, even when a user loses control over a backup. Additionally, we show how to extend PASCO to enable fully controllable emergency access. It allows a user to give someone else access to his passwords in urgent situations. We also present a security evaluation and an implementation of PASCO.
Besides many approaches to simplify the creation and memorization of passwords @cite_38 @cite_53 @cite_33 @cite_16 @cite_39 @cite_13 @cite_32 @cite_28 @cite_19 , storing passwords on user devices is the most common approach to solve the memorability problem. Prominent examples are password managers @cite_14 @cite_37 @cite_24 . They store the user's passwords in a database, encrypted with a user-chosen master password. To synchronize the database between devices and prevent its loss, it is stored on a server. In emergency situations, a user can give someone else the master password, but then that person has access to all passwords. Moreover, a security breach at the server @cite_2 allows adversaries to steal the database and to perform offline brute-force attacks @cite_4 . This can be mitigated by databases using honey encryption @cite_0 @cite_34 . However, their design is challenging @cite_42 @cite_44 and a backup concept does not exist yet.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_4", "@cite_33", "@cite_28", "@cite_53", "@cite_42", "@cite_32", "@cite_34", "@cite_39", "@cite_24", "@cite_19", "@cite_0", "@cite_44", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "2027670258", "", "", "2051617746", "2134080857", "2155873597", "1896997982", "1980697618", "2613407938", "2093397575", "2030993695", "", "2146270836", "1501932514", "", "", "", "2077711629" ], "abstract": [ "We report on a user study that provides evidence that spaced repetition and a specific mnemonic technique enable users to successfully recall multiple strong passwords over time. Remote research participants were asked to memorize 4 Person-Action-Object (PAO) stories where they chose a famous person from a drop-down list and were given machine-generated random action-object pairs. Users were also shown a photo of a scene and asked to imagine the PAO story taking place in the scene (e.g., Bill Gates---swallowing---bike on a beach). Subsequently, they were asked to recall the action-object pairs when prompted with the associated scene-person pairs following a spaced repetition schedule over a period of 127+ days. While we evaluated several spaced repetition schedules, the best results were obtained when users initially returned after 12 hours and then in @math increasing intervals: 77% of the participants successfully recalled all 4 stories in 10 tests over a period of 158 days. Much of the forgetting happened in the first test period (12 hours): 89% of participants who remembered their stories during the first test period successfully remembered them in every subsequent round. These findings, coupled with recent results on naturally rehearsing password schemes, suggest that 4 PAO stories could be used to create usable and strong passwords for 14 sensitive accounts following this spaced repetition schedule, possibly with a few extra upfront rehearsals. 
In addition, we find that there is an interference effect across multiple PAO stories: the recall rate of 100% (resp. 90%) for participants who were asked to memorize 1 PAO story (resp. 2 PAO stories) is significantly better than the recall rate for participants who were asked to memorize 4 PAO stories. These findings yield concrete advice for improving constructions of password management schemes and future user studies.", "", "", "Many systems rely on passwords for authentication. Due to numerous accounts for different services, users have to choose and remember a significant number of passwords. Password-Manager applications address this issue by storing the user's passwords. They are especially useful on mobile devices, because of the ubiquitous access to the account passwords.", "Password meters tell users whether their passwords are \"weak\" or \"strong.\" We performed a laboratory experiment to examine whether these meters influenced users' password selections when they were forced to change their real passwords, and when they were not told that their passwords were the subject of a study. We observed that the presence of meters yielded significantly stronger passwords. We performed a followup field experiment to test a different scenario: creating a password for an unimportant account. In this scenario, we found that the meters made no observable difference: participants simply reused weak passwords that they used to protect similar low-risk accounts. We conclude that meters result in stronger passwords when users are forced to change existing passwords on \"important\" accounts and that individual meter design decisions likely have a marginal impact.", "Users often struggle to create passwords under strict requirements. To make this process easier, some providers present real-time feedback during password creation, indicating which requirements are not yet met. Other providers guide users through a multi-step password-creation process. 
Our 6,435-participant online study examines how feedback and guidance affect password security and usability. We find that real-time password-creation feedback can help users create strong passwords with fewer errors. We also find that although guiding participants through a three-step password-creation process can make creation easier, it may result in weaker passwords. Our results suggest that service providers should present password requirements with feedback to increase usability. However, the presentation of feedback and guidance must be carefully considered, since identical requirements can have different security and usability effects depending on presentation.", "Challenging the conventional wisdom that users cannot remember cryptographically-strong secrets, we test the hypothesis that users can learn randomly-assigned 56-bit codes (encoded as either 6 words or 12 characters) through spaced repetition. We asked remote research participants to perform a distractor task that required logging into a website 90 times, over up to two weeks, with a password of their choosing. After they entered their chosen password correctly we displayed a short code (4 letters or 2 words, 18.8 bits) that we required them to type. For subsequent logins we added an increasing delay prior to displaying the code, which participants could avoid by typing the code from memory. As participants learned, we added two more codes to comprise a 56.4-bit secret. Overall, 94% of participants eventually typed their entire secret from memory, learning it after a median of 36 logins. The learning component of our system added a median delay of just 6.9 s per login and a total of less than 12 minutes over an average of ten days. 88% were able to recall their codes exactly when asked at least three days later, with only 21% reporting having written their secret down. 
As one participant wrote with surprise, \"the words are branded into my brain.\"", "Recently, Juels and Rivest proposed honeywords (decoy passwords) to detect attacks against hashed password databases. For each user account, the legitimate password is stored with several honeywords in order to sense impersonation. If honeywords are selected properly, a cyber-attacker who steals a file of hashed passwords cannot be sure if it is the real password or a honeyword for any account. Moreover, entering with a honeyword to login will trigger an alarm notifying the administrator about a password file breach. At the expense of increasing the storage requirement by 20 times, the authors introduce a simple and effective solution to the detection of password file disclosure events. In this study, we scrutinize the honeyword system and present some remarks to highlight possible weak points. Also, we suggest an alternative approach that selects the honeywords from existing user passwords in the system in order to provide realistic honeywords—a perfectly flat honeyword generation method—and also to reduce storage cost of the honeyword scheme.", "", "We propose a simple method for improving the security of hashed passwords: the maintenance of additional honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.", "Computer users are asked to generate, keep secret, and recall an increasing number of passwords for uses including host accounts, email servers, e-commerce sites, and online financial services. 
Unfortunately, the password entropy that users can comfortably memorize seems insufficient to store unique, secure passwords for all these accounts, and it is likely to remain constant as the number of passwords (and the adversary's computational power) increases into the future. In this paper, we propose a technique that uses a strengthened cryptographic hash function to compute secure passwords for arbitrarily many accounts while requiring the user to memorize only a single short password. This mechanism functions entirely on the client; no server-side changes are needed. Unlike previous approaches, our design is both highly resistant to brute force attacks and nearly stateless, allowing users to retrieve their passwords from any location so long as they can execute our program and remember a short secret. This combination of security and convenience will, we believe, entice users to adopt our scheme. We discuss the construction of our algorithm in detail, compare its strengths and weaknesses to those of related approaches, and present Password Multiplier, an implementation in the form of an extension to the Mozilla Firefox web browser.", "", "To encourage strong passwords, system administrators employ password-composition policies, such as a traditional policy requiring that passwords have at least 8 characters from 4 character classes and pass a dictionary check. Recent research has suggested, however, that policies requiring longer passwords with fewer additional requirements can be more usable and in some cases more secure than this traditional policy. To explore long passwords in more detail, we conducted an online experiment with 8,143 participants. Using a cracking algorithm modified for longer passwords, we evaluate eight policies across a variety of metrics for strength and usability. Among the longer policies, we discover new evidence for a security/usability tradeoff, with none being strictly better than another on both dimensions. 
However, several policies are both more usable and more secure than the traditional policy we tested. Our analyses additionally reveal common patterns and strings found in cracked passwords. We discuss how system administrators can use these results to improve password-composition policies.", "Password vaults are increasingly popular applications that store multiple passwords encrypted under a single master password that the user memorizes. A password vault can greatly reduce the burden on a user of remembering passwords, but introduces a single point of failure. An attacker that obtains a user's encrypted vault can mount offline brute-force attacks and, if successful, compromise all of the passwords in the vault. In this paper, we investigate the construction of encrypted vaults that resist such offline cracking attacks and force attackers instead to mount online attacks. Our contributions are as follows. We present an attack and supporting analysis showing that a previous design for cracking-resistant vaults -- the only one of which we are aware -- actually degrades security relative to conventional password-based approaches. We then introduce a new type of secure encoding scheme that we call a natural language encoder (NLE). An NLE permits the construction of vaults which, when decrypted with the wrong master password, produce plausible-looking decoy passwords. We show how to build NLEs using existing tools from natural language processing, such as n-gram models and probabilistic context-free grammars, and evaluate their ability to generate plausible decoys. Finally, we present, implement, and evaluate a full, NLE-based cracking-resistant vault system called NoCrack.", "", "", "", "In this study, we propose a hierarchy of password importance, and we use an experiment to examine the degree of similarity between passwords for lower-level (e.g. news portal) and higher-level (e.g. banking) websites in this hierarchy. 
We asked subjects to construct passwords for websites at both levels. Leveraging the lower-level passwords along with a dictionary attack, we successfully cracked almost one-third of the subjects' higher-level passwords. In a survey, subjects reported frequently reusing higher-level passwords, with or without modifications, as well as using a similar process to construct both levels of passwords. We thus conclude that unsafely shared or leaked lower-level passwords can be used by attackers to crack higher-level passwords." ] }
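The client-side hashed-password scheme summarized in the abstracts above derives a distinct password for every account from one memorized master secret via a slow, strengthened hash. A minimal Python sketch of the idea, assuming PBKDF2-HMAC-SHA256 as the hash, an illustrative iteration count, and a base64 output encoding (the cited paper's exact construction and parameters differ):

```python
import base64
import hashlib


def site_password(master: str, site: str, length: int = 16) -> str:
    """Derive a per-site password from a single memorized master secret.

    The many PBKDF2 iterations make offline brute force of the master
    password costly; the scheme is stateless, so the same (master, site)
    pair always yields the same password on any client.
    Iteration count, encoding, and length are illustrative assumptions.
    """
    key = hashlib.pbkdf2_hmac(
        "sha256", master.encode(), site.encode(), 100_000
    )
    return base64.b64encode(key).decode()[:length]
```

Because the derivation is deterministic, nothing needs to be stored on a server; because it is keyed per site, a leak of one site's password reveals nothing about the others.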
1704.02902
2613183344
In this work we consider a two-user and a three-user slotted ALOHA network with multi-packet reception (MPR) capabilities. The nodes can adapt their transmission probabilities and their transmission parameters based on the status of the other nodes. Each user has external bursty arrivals that are stored in their infinite-capacity queues. For the two- and the three-user cases we obtain the stability region of the system. For the two-user case we provide the conditions under which the stability region is a convex set. We perform a detailed mathematical analysis in order to study the queueing delay by formulating two boundary value problems (a Dirichlet and a Riemann-Hilbert boundary value problem), the solution of which provides the generating function of the joint stationary probability distribution of the queue sizes at the user nodes. Furthermore, for the two-user symmetric case with MPR we obtain a lower and an upper bound for the average delay without explicitly computing the generating function for the stationary joint queue length distribution. As seen in the numerical results, the bounds appear to be tight. Explicit expressions for the average delay are obtained for the symmetric model with capture effect, which is a subclass of MPR models. We also provide, in closed form, the optimal transmission probability that minimizes the average delay in the symmetric capture case. Finally, we evaluate numerically the presented theoretical results.
Delay analysis of random access networks was studied in @cite_24 @cite_7 @cite_26 . More specifically, in @cite_24 @cite_22 a two-user network with MPR capabilities was considered and expressions for the average delay were obtained for the symmetric case. The papers @cite_27 @cite_26 considered the collision channel model. In @cite_11 the delay performance of slotted ALOHA in a Poisson network was studied. Delay analysis of random access networks based on fluid models can be found in @cite_1 and @cite_13 . The works @cite_29 and @cite_47 utilized techniques from statistical mechanics for throughput and delay analysis. The authors in @cite_8 proposed a service-martingale concept that enables the queueing analysis of a bursty source sharing a MAC channel.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_29", "@cite_1", "@cite_24", "@cite_27", "@cite_47", "@cite_13", "@cite_11" ], "mid": [ "", "2104191204", "", "1511241204", "2145044978", "1976407947", "", "", "2129499634", "2539643266", "2129959069" ], "abstract": [ "", "This paper considers cross-layer medium access control (MAC) protocol design in wireless networks. Taking a mutually interactive MAC-PHY perspective, we aim to design an MAC protocol that is in favor of the physical (PHY) layer information transmission, and the improved PHY, in turn, can improve the MAC performance. More specifically, we propose a novel MAC protocol, named hybrid ALOHA, which makes it possible for collision-free channel estimation and simultaneous multiuser transmission. The underlying argument is as follows: As long as good channel estimation can be achieved, advanced signal processing does allow effective signal separation given that the multiuser interference is limited to a certain degree. Comparing with traditional ALOHA, there are more than one pilot subslots in each hybrid ALOHA slot. Each user randomly selects a pilot subslot for training sequence transmission. Therefore, it is possible for different users to transmit their training sequences over nonoverlapping pilot subslots and achieving collision-free channel estimation. Relying mainly on the general multipacket reception (MPR) model, in this paper, quantitative analysis is conducted for the proposed hybrid ALOHA protocol in terms of throughput, stability, as well as delay behavior. It is observed that significant performance improvement can be achieved in comparison with the traditional ALOHA protocol based either on the collision model or the MPR model.", "", "This paper proposes a martingale extension of effective-capacity, a concept which has been instrumental in teletraffic theory to model the link-layer wireless channel and analyze QoS metrics. 
Together with a recently developed concept of an arrival-martingale, the proposed service-martingale concept enables the queueing analysis of a bursty source sharing a MAC channel. In particular, the paper derives the first rigorous and accurate stochastic delay bounds for a Markovian source sharing either an Aloha or CSMA CA channel, and further considers two extended scenarios accounting for 1) in-source scheduling and 2) spatial multiplexing MIMO. By leveraging the powerful martingale methodology, the obtained bounds are remarkably tight and improve state-of-the-art bounds by several orders of magnitude. Moreover, the obtained bounds indicate that MIMO spatial multiplexing is subject to the fundamental power-of-two phenomena.", "In ad hoc networks, performance objectives are often in contention with each other. Indeed, due to the transmission errors incurred over wireless channels, it is difficult to achieve a high rate of transmission in conjunction with reliable delivery of data and low latency. In order to obtain favorable throughput and delay performances, the system may choose to compromise on its reliability and have nodes forcibly dropping a small fraction of packets. The focus of this paper is on the characterization of tradeoffs between the achievable throughput, end-to-end delay and reliability in wireless networks with random access. We consider a multihop ad hoc network comprising several source-destination pairs communicating wirelessly via the slotted ALOHA channel access scheme. Employing ideas from statistical mechanics, we present an analytical framework for evaluating the throughput, end-to-end delay and reliability performances of the system. 
The main findings of this paper are (a) when the system is noise-limited, dropping a small fraction of packets in the network leads to a smaller end-to-end delay though the throughput suffers as well, and (b) when the system is interference-limited, however, there exist regimes where dropping a few packets in the network may actually reduce the end-to-end delay as well as increase the system throughput. We also present some empirical results which corroborate the results obtained analytically.", "We consider a cognitive radio network where multiple secondary users (SUs) contend for spectrum usage, using random access, over available primary user (PU) channels. Our focus is on SUs' queueing delay performance, for which a systematic understanding is lacking. We take a fluid queue approximation approach to study the steady-state delay performance of SUs, for cases with a single PU channel and multiple PU channels. Using stochastic fluid models, we represent the queue dynamics as Poisson driven stochastic differential equations, and characterize the moments of the SUs' queue lengths accordingly. Since in practical systems, a secondary user would have no knowledge of other users' activities, its contention probability has to be set based on local information. With this observation, we develop adaptive algorithms to find the optimal contention probability that minimizes the mean queue lengths. Moreover, we study the impact of multiple channels and multiple interfaces, on SUs' delay performance. As expected, the use of multiple channels and or multiple interfaces leads to significant delay reduction.", "", "", "Characterizing the performance of ad hoc networks is one of the most intricate open challenges; conventional ideas based on information-theoretic techniques and inequalities have not yet been able to successfully tackle this problem in its generality. 
Motivated thus, we promote the totally asymmetric simple exclusion process (TASEP), a particle flow model in statistical mechanics, as a useful analytical tool to study ad hoc networks with random access. Employing the TASEP framework, we first investigate the average end-to-end delay and throughput performance of a linear multihop flow of packets. Additionally, we analytically derive the distribution of delays incurred by packets at each node, as well as the joint distributions of the delays across adjacent hops along the flow. We then consider more complex wireless network models comprising intersecting flows, and propose the partial mean-field approximation (PMFA), a method that helps tightly approximate the throughput performance of the system. We finally demonstrate via a simple example that the PMFA procedure is quite general in that it may be used to accurately evaluate the performance of ad hoc networks with arbitrary topologies.", "", "We consider a Poisson network of sources, each with a destination at a given distance and a buffer of infinite capacity. Assuming independent Bernoulli arrivals, we characterize the stability region when one or two classes of users are present in the network. We then derive a fixed-point equation that determines the success probability of the typical source-destination link and evaluate the mean delay at each buffer." ] }
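As a toy illustration of why an optimal transmission probability exists in random access systems like those above, the classical collision-channel slotted ALOHA success probability for n saturated users can be computed and maximized numerically; analytically the maximizer is p = 1/n. This is a simplification of the MPR setting treated in the record above, not the paper's model:

```python
def aloha_success_prob(n: int, p: float) -> float:
    """Probability that exactly one of n saturated users transmits in a
    slot, on the classical collision channel where any simultaneous
    transmission is lost: n * p * (1 - p)^(n - 1)."""
    return n * p * (1.0 - p) ** (n - 1)


def optimal_p(n: int, grid: int = 10_000) -> float:
    """Brute-force grid search for the delay/throughput-optimal
    transmission probability; analytically the maximizer is p = 1/n."""
    best = max(range(1, grid), key=lambda k: aloha_success_prob(n, k / grid))
    return best / grid
```

For two users the search recovers p = 0.5, matching the closed-form 1/n result; MPR and capture channels shift this optimum, which is what the cited analysis characterizes.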
1704.02902
2613183344
In this work we consider a two-user and a three-user slotted ALOHA network with multi-packet reception (MPR) capabilities. The nodes can adapt their transmission probabilities and their transmission parameters based on the status of the other nodes. Each user has external bursty arrivals that are stored in their infinite-capacity queues. For the two- and the three-user cases we obtain the stability region of the system. For the two-user case we provide the conditions under which the stability region is a convex set. We perform a detailed mathematical analysis in order to study the queueing delay by formulating two boundary value problems (a Dirichlet and a Riemann-Hilbert boundary value problem), the solution of which provides the generating function of the joint stationary probability distribution of the queue sizes at the user nodes. Furthermore, for the two-user symmetric case with MPR we obtain a lower and an upper bound for the average delay without explicitly computing the generating function for the stationary joint queue length distribution. As seen in the numerical results, the bounds appear to be tight. Explicit expressions for the average delay are obtained for the symmetric model with capture effect, which is a subclass of MPR models. We also provide, in closed form, the optimal transmission probability that minimizes the average delay in the symmetric capture case. Finally, we evaluate numerically the presented theoretical results.
Below we present a recent set of papers that consider throughput and/or delay characterization of general random access networks. The work in @cite_14 studied the impact of a full-duplex relay on throughput and delay in a multi-user network, where the users were assumed to have saturated traffic. The delay of a random access scheme in the Internet of Things context was studied in @cite_45 . In @cite_43 , throughput under delay constraints was studied in a shared-access cognitive network. The delay characterization of larger networks was considered in @cite_25 @cite_5 . In @cite_31 the delay and the packet loss rate of a frame-asynchronous coded slotted ALOHA system for uncoordinated multiple access were studied.
{ "cite_N": [ "@cite_14", "@cite_43", "@cite_45", "@cite_5", "@cite_31", "@cite_25" ], "mid": [ "2088499270", "2476113717", "2527221766", "2338470723", "", "2001829245" ], "abstract": [ "The effect of full-duplex cooperative relaying in a random access multiuser network is investigated here. First, we model the self-interference incurred due to full-duplex operation, assuming multi-packet reception capabilities for both the relay and the destination node. Traffic at the source nodes is considered saturated and the cooperative relay, which does not have packets of its own, stores a source packet that it receives successfully in its queue when the transmission to the destination has failed. We obtain analytical expressions for key performance metrics at the relay, such as arrival and service rates, stability conditions, and average queue length, as functions of the transmission probabilities, the self interference coefficient, and the links' outage probabilities. Furthermore, we study the impact of the relay node and the self-interference coefficient on the per-user and aggregate throughput, and the average delay per packet. We show that perfect self-interference cancelation plays a crucial role when the SINR threshold is small, since it may result to worse performance in throughput and delay comparing with the half-duplex case. This is because perfect self-interference cancelation can cause an unstable queue at the relay under some conditions.", "In this paper, we analyze a shared access network with one primary device and randomly distributed smart objects with secondary priority. Assuming random traffic at the primary device and saturated queues at the smart objects with secondary priority, an access protocol is employed to adjust the random access probabilities of the smart objects depending on the congestion level of the primary. We characterize the maximum throughput of the secondary network with respect to delay constraints on the primary. 
Our results highlight the impact of system design parameters on the delay and throughput behavior of the shared access network with massive number of connected objects.", "An innovative iterative process is proposed to acquire the dynamic process of multichannel slotted ALOHA (S-ALOHA). It reveals the direct relation between the number of contending devices that perform their jth random access (RA) attempt at the ith RA slot and the newly arrived devices before the ith RA slot. These results allow engineers to analytically derive the probability density function of RA delay of multichannel S-ALOHA, as well as its cumulative density function and average value. Under stable RA attempts assumption, simplified form of the above analysis is given, with which we prove the number of preamble transmissions follows truncated geometric distribution. Taking the two traffic models proposed for machine type communications as examples, numerical results are presented to verify the effectiveness of the proposed iterative process and the accuracy of its simplified form, and illustrate the delay characteristics of simplified long term evolution RA channel.", "In a wireless powered communication network, where user equipments (UEs) harvest radio frequency energy from an access point (AP) and send data to the AP, there exists the near-far problem with respect to energy harvesting efficiency due to UEs’ random locations. In this paper, we introduce the concept of delay-aware energy balancing by minimizing the average transmission delay while taking into account the issue of unbalanced harvested energy distribution. In particular, we propose an adaptive harvest-then-cooperate protocol, where every UE first harvests the energy emitted by the AP and then sends data to the AP directly or via other UEs acting as relays in a time-division multiplexing manner. 
In this protocol, the AP selects the combination of transmission power and routing topology by matching load and energy distributions in the network while minimizing the average transmission delay. Furthermore, we develop a method generating scheduling schemes for this protocol to avoid data overflow in the UE relay. To determine the combination with minimum delay, we approximate the average delay as a Markov decision process and propose a low-complexity sample path-based algorithm to obtain a near-optimal solution. Simulation results demonstrate that the proposed protocol is able to balance the energy distribution while minimizing the transmission delay.", "", "We evaluate the end-to-end delay of a multihop transmission scheme that includes a source, a number of relays, and a destination, in the presence of interferers located according to a Poisson point process. The medium access control (MAC) protocol considered is a combination of TDMA and ALOHA, according to which nodes located a certain number of hops apart are allowed to transmit with a certain probability. Based on an independent transmissions assumption, which decouples the queue evolutions, our analysis provides explicit expressions for the mean end-to-end delay and throughput, as well as scaling laws when the interferer density grows to infinity. If the source always has packets to transmit, we find that full spatial reuse, i.e., ALOHA, is asymptotically delay-optimal, but requires more hops than a TDMA-ALOHA protocol. The results of our analysis have applications in delay-minimizing joint MAC routing algorithms for networks with randomly located nodes.We simulate a network where sources and relays form a Poisson point process, and each source assembles a route to its destination by selecting the relays closest to the optimal locations. We assess both theoretically and via simulation the sensitivity of the end-to-end delay with respect to imperfect relay placements and route crossings." ] }
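The average queueing delay analysed in the record above can also be estimated empirically. Below is an illustrative Monte Carlo sketch of the symmetric two-user case on a plain collision channel (no MPR, which is an assumed simplification of the paper's model), with Bernoulli arrivals and an assumed transmission probability; it illustrates the metric, not the boundary-value-problem analysis:

```python
import random


def simulate_two_user_aloha(lam=0.1, p=0.3, slots=50_000, seed=1):
    """Monte Carlo estimate of the average queueing delay (in slots) for
    a symmetric two-user slotted ALOHA system on a collision channel.

    Each user receives a packet with probability lam per slot; each
    backlogged user transmits with probability p; a slot succeeds only
    if exactly one user transmits. Parameters are illustrative.
    """
    rng = random.Random(seed)
    queues = [[], []]  # arrival slot of each buffered packet, per user
    total_delay, served = 0, 0
    for t in range(slots):
        for q in queues:  # Bernoulli arrivals
            if rng.random() < lam:
                q.append(t)
        # backlogged users that attempt a transmission this slot
        tx = [i for i, q in enumerate(queues) if q and rng.random() < p]
        if len(tx) == 1:  # success only without a collision
            total_delay += t - queues[tx[0]].pop(0) + 1
            served += 1
    return total_delay / served
```

With lam = 0.1 and p = 0.3 the per-user service rate exceeds the arrival rate, so the queues are stable and the estimate converges; sweeping p numerically reproduces the delay-minimizing transmission probability the paper derives in closed form for the capture case.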
1704.02278
2047696272
Large industrial systems that combine services and applications have become targets for cyber criminals and are challenging from the security, monitoring and auditing perspectives. Security log analysis is a key step for uncovering anomalies, detecting intrusions, and enabling incident response. The constant increase of link speeds, threats and users produces large volumes of log data that become increasingly difficult to analyse on a Central Processing Unit (CPU). This paper presents a massively parallel Graphics Processing Unit (GPU) Log Processing (GLoP) library, which can also be used for Deep Packet Inspection (DPI), using a prefix matching technique and harvesting the full power of off-the-shelf technologies. GLoP implements two different algorithms using different types of GPU memory and is compared against CPU counterpart implementations. The library can be used for processing nodes with single or multiple GPUs as well as GPU cloud farms. The results show a throughput of 20 Gbps and demonstrate that modern GPUs can be utilised to increase the operational speed of large-scale log processing scenarios, saving precious time before and after an intrusion has occurred.
Research on large-scale log processing has extensively studied data mining and big data scenarios using distributed analysis frameworks. One study proposed a lightweight framework based on the Amazon Cloud Environment (EC2 and S3), using multiple nodes to speed up log analysis and harvesting the results with a map-reduce implementation @cite_14 . Another demonstrated that, by using Hadoop MapReduce, it was possible to decrease the processing time of log files by 89%. A further work proposed a theoretical logging framework dedicated to cloud infrastructures and software as a service (SaaS) running on a third-party public cloud service @cite_1 .
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2036655376", "2055204629" ], "abstract": [ "Security log analysis is extremely useful for uncovering intrusions and anomalies. However, the sheer volume of log data demands new frameworks and techniques of computing and security. We present a lightweight distributed and parallel security log analysis framework that allows organizations to analyze a massive number of system, network, and transaction logs efficiently and scalably. Different from the general distributed frameworks, e.g., MapReduce, our framework is specifically designed for security log analysis. It features a minimum set of necessary properties, such as dynamic task scheduling for streaming logs. For prototyping, we implement our framework in Amazon cloud environments (EC2 and S3) with a basic analysis application. Our evaluation demonstrates the effectiveness of our design and shows the potential of our cloud-based distributed framework in large-scale log analysis scenarios.", "Logs are one of the most important pieces of analytical data in a cloud-based service infrastructure. At any point in time, service owners and operators need to understand the status of each infrastructure component for fault monitoring, to assess feature usage, and to monitor business processes. Application developers, as well as security personnel, need access to historic information for debugging and forensic investigations. This paper discusses a logging framework and guidelines that provide a proactive approach to logging to ensure that the data needed for forensic investigations has been generated and collected. The standardized framework eliminates the need for logging stakeholders to reinvent their own standards. These guidelines make sure that critical information associated with cloud infrastructure and software as a service (SaaS) use-cases are collected as part of a defense in depth strategy. 
In addition, they ensure that log consumers can effectively and easily analyze, process, and correlate the emitted log records. The theoretical foundations are emphasized in the second part of the paper that covers the implementation of the framework in an example SaaS offering running on a public cloud service. While the framework is targeted towards and requires the buy-in from application developers, the data collected is critical to enable comprehensive forensic investigations. In addition, it helps IT architects and technical evaluators of logging architectures build a business oriented logging framework." ] }
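The map-reduce style of distributed log analysis described in @cite_14 can be sketched in miniature: a map step counts security-relevant keywords per log shard, and a reduce step merges the per-shard counts. The keyword list and shard split below are illustrative assumptions; the cited frameworks run the map step across cloud nodes rather than in-process:

```python
from collections import Counter
from functools import reduce


def map_chunk(lines):
    """Map step: count suspicious keywords in one shard of the log."""
    keywords = ("FAILED", "DENIED", "ERROR")  # illustrative choice
    return Counter(k for line in lines for k in keywords if k in line)


def reduce_counts(counts):
    """Reduce step: merge the per-shard counters into one total."""
    return reduce(lambda a, b: a + b, counts, Counter())
```

Because the map step is stateless per shard, it parallelises trivially; the reduce step is associative, so partial merges can happen anywhere in the cluster.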
1704.02278
2047696272
Large industrial systems that combine services and applications have become targets for cyber criminals and are challenging from the security, monitoring and auditing perspectives. Security log analysis is a key step for uncovering anomalies, detecting intrusions, and enabling incident response. The constant increase of link speeds, threats and users produces large volumes of log data that become increasingly difficult to analyse on a Central Processing Unit (CPU). This paper presents a massively parallel Graphics Processing Unit (GPU) Log Processing (GLoP) library, which can also be used for Deep Packet Inspection (DPI), using a prefix matching technique and harvesting the full power of off-the-shelf technologies. GLoP implements two different algorithms using different types of GPU memory and is compared against CPU counterpart implementations. The library can be used for processing nodes with single or multiple GPUs as well as GPU cloud farms. The results show a throughput of 20 Gbps and demonstrate that modern GPUs can be utilised to increase the operational speed of large-scale log processing scenarios, saving precious time before and after an intrusion has occurred.
A fast-filter virus detection engine running on GPUs, based on eigenvalues, was described in @cite_15 with good performance. Our previous work @cite_11 has shown that a massively parallel pattern matching algorithm based on the Knuth-Morris-Pratt algorithm @cite_26 can achieve a 29-fold increase in processing speed over CPU counterparts. From these works, it is clear that the processing capabilities of off-the-shelf hardware have great potential, not only to increase the speed of a single stand-alone processing server but also of GPU cloud deployments @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_26", "@cite_11" ], "mid": [ "2143550112", "2038166047", "1985108724", "2406264764" ], "abstract": [ "MapReduce is an efficient distributed computing model for large-scale data processing. However, single-node performance is gradually to be the bottleneck in compute-intensive jobs. This paper presents an approach of MapReduce improvement with GPU acceleration, which is implemented by Hadoop and OpenCL. Different from other implementations, it targets at general and inexpensive hardware platform, and it is seamless-integrated with Apache Hadoop, a most widely used MapReduce framework. As a heterogeneous multi-machine and multicore architecture, it aims at both data- and compute-intensive applications. An almost 2 times performance improvement has been validated, without any farther optimization.", "For the virus signature matching time of traditional the eigenvalue is too long, which can’t meet the need of the information security, a method of fast virus signature matching on the GPU is compelled. The system uses the GPU as a fast filter to quickly identify possible virus signatures for thousands of data objects in parallel. The performance of their library suggests that the GPU is now a viable platform for cost-effective, high-performance network security processing. And it shows that the computing speed of the eigenvalue based on GPU is obviously higher than the eigenvalue based on CPU by the experiment.", "An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings. The constant of proportionality is low enough to make this algorithm of practical use, and the procedure can also be extended to deal with some more general pattern-matching problems. A theoretical application of the algorithm shows that the set of concatenations of even palindromes, i.e., the language @math , can be recognized in linear time. 
Other algorithms which run even faster on the average are also considered.", "Graphics Processing Units (GPUs) have become the focus of much interest with the scientific community lately due to their highly parallel computing capabilities, and cost effectiveness. They have evolved from simple graphic rendering devices to extremely complex parallel processors, used in a plethora of scientific areas. This paper outlines experimental results of a comparison between GPUs and general purpose CPUs for exact pattern matching. Specifically, a comparison is conducted for the Knuth-Morris-Pratt algorithm using different string sizes, alphabet sizes and introduces different techniques such as loop unrolling, and shared memory using the Compute Unified Device Architecture framework. Empirical results demonstrate a 29 fold increase in processing speed where GPUs are used instead of CPUs." ] }
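The Knuth-Morris-Pratt algorithm referenced above (@cite_26) precomputes a failure table so that the text pointer never moves backwards, giving O(text + pattern) matching. A minimal sequential Python reference, for comparison with the massively parallel GPU variants:

```python
def kmp_search(text: str, pat: str):
    """Knuth-Morris-Pratt: return all match positions of pat in text,
    in O(len(text) + len(pat)) time, never re-reading text characters."""
    if not pat:
        return []
    # fail[i] = length of the longest proper prefix of pat[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pat[k]:
            k = fail[k - 1]  # fall back in the pattern, not the text
        if c == pat[k]:
            k += 1
        if k == len(pat):
            hits.append(i - k + 1)
            k = fail[k - 1]  # allow overlapping matches
    return hits
```

The GPU implementations in the cited work split the text into chunks and run this scan per thread; the failure table is read-only, which is what makes the algorithm amenable to shared-memory parallelisation.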
1704.02278
2047696272
Large industrial systems that combine services and applications have become targets for cyber criminals and are challenging from the security, monitoring and auditing perspectives. Security log analysis is a key step for uncovering anomalies, detecting intrusions, and enabling incident response. The constant increase of link speeds, threats and users produces large volumes of log data that become increasingly difficult to analyse on a Central Processing Unit (CPU). This paper presents a massively parallel Graphics Processing Unit (GPU) Log Processing (GLoP) library that can also be used for Deep Packet Inspection (DPI), using a prefix matching technique and harvesting the full power of off-the-shelf technologies. GLoP implements two different algorithms using different GPU memory types and is compared against CPU counterpart implementations. The library can be used for processing nodes with single or multiple GPUs as well as GPU cloud farms. The results show a throughput of 20 Gbps and demonstrate that modern GPUs can be utilised to increase the operational speed of large-scale log processing scenarios, saving precious time before and after an intrusion has occurred.
String searching algorithms can be classified into single-pattern and multi-pattern matching. Single-pattern matching algorithms search the complete string for one pattern sequentially. The naive approach is to iteratively walk through the text string and, every time there is a mismatch or a complete match, rewind back to the position after the previous starting point. Optimised algorithms such as Knuth-Morris-Pratt (KMP) and Boyer-Moore (BM) avoid rewinding by introducing failure and backtracking tables respectively @cite_2 .
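The failure-table idea behind KMP can be sketched as follows; this is an illustrative Python sketch of the textbook algorithm, not code taken from any of the cited GPU implementations:

```python
def build_failure(pat):
    # fail[i] = length of the longest proper prefix of pat[:i+1]
    # that is also a suffix of it; this is what lets the search
    # continue without rewinding in the text.
    fail = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k > 0 and pat[i] != pat[k]:
            k = fail[k - 1]
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pat):
    # Return the start index of every occurrence of pat in text.
    # Each text character is examined a bounded number of times,
    # giving O(len(text) + len(pat)) overall.
    fail, k, hits = build_failure(pat), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pat[k]:
            k = fail[k - 1]
        if c == pat[k]:
            k += 1
        if k == len(pat):
            hits.append(i - k + 1)   # full match ending at i
            k = fail[k - 1]          # keep scanning for overlaps
    return hits
```

For example, `kmp_search("abababc", "abab")` reports the overlapping matches at positions 0 and 2 without ever stepping backwards in the text.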
{ "cite_N": [ "@cite_2" ], "mid": [ "2134826720" ], "abstract": [ "An algorithm is presented that searches for the location, “ i l” of the first occurrence of a character string, “ pat ,” in another string, “ string .” During the search operation, the characters of pat are matched starting with the last character of pat . The information gained by starting the match at the end of the pattern often allows the algorithm to proceed in large jumps through the text being searched. Thus the algorithm has the unusual property that, in most cases, not all of the first i characters of string are inspected. The number of characters actually inspected (on the average) decreases as a function of the length of pat . For a random English pattern of length 5, the algorithm will typically inspect i 4 characters of string before finding a match at i . Furthermore, the algorithm has been implemented so that (on the average) fewer than i + patlen machine instructions are executed. These conclusions are supported with empirical evidence and a theoretical analysis of the average behavior of the algorithm. The worst case behavior of the algorithm is linear in i + patlen , assuming the availability of array space for tables linear in patlen plus the size of the alphabet. 3" ] }
1704.02278
2047696272
Large industrial systems that combine services and applications have become targets for cyber criminals and are challenging from the security, monitoring and auditing perspectives. Security log analysis is a key step for uncovering anomalies, detecting intrusions, and enabling incident response. The constant increase of link speeds, threats and users produces large volumes of log data that become increasingly difficult to analyse on a Central Processing Unit (CPU). This paper presents a massively parallel Graphics Processing Unit (GPU) Log Processing (GLoP) library that can also be used for Deep Packet Inspection (DPI), using a prefix matching technique and harvesting the full power of off-the-shelf technologies. GLoP implements two different algorithms using different GPU memory types and is compared against CPU counterpart implementations. The library can be used for processing nodes with single or multiple GPUs as well as GPU cloud farms. The results show a throughput of 20 Gbps and demonstrate that modern GPUs can be utilised to increase the operational speed of large-scale log processing scenarios, saving precious time before and after an intrusion has occurred.
On the other hand, multi-pattern matching algorithms search simultaneously for multiple patterns in the text string. The most common multi-pattern algorithm is Aho-Corasick (AC) @cite_7 , @cite_22 , which has been implemented on a variety of hardware architectures such as FPGAs @cite_29 @cite_17 and GPUs @cite_19 .
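The single-pass behaviour of Aho-Corasick can be sketched in a few lines: build a trie of all patterns, add BFS-computed failure links, then scan the text once. This is a minimal illustrative sketch of the classic automaton, not code from the FPGA or GPU implementations cited above:

```python
from collections import deque

def build_aho_corasick(patterns):
    # Trie stored as parallel lists: goto transitions, failure
    # links, and the patterns matched at each node.
    goto, fail, out = [{}], [0], [[]]
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append([])
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].append(pat)
    # BFS sets each node's failure link to the longest proper
    # suffix of its path that is also a path in the trie.
    queue = deque(goto[0].values())
    while queue:
        u = queue.popleft()
        for ch, v in goto[u].items():
            queue.append(v)
            f = fail[u]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[v] = goto[f][ch] if ch in goto[f] and goto[f][ch] != v else 0
            out[v] += out[fail[v]]   # inherit shorter suffix matches
    return goto, fail, out

def ac_search(text, patterns):
    # Single left-to-right pass over text; reports
    # (start_index, pattern) for every occurrence of every pattern.
    goto, fail, out = build_aho_corasick(patterns)
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        for pat in out[node]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

On the textbook example, searching `"ushers"` for `["he", "she", "his", "hers"]` finds "she", "he" and "hers" in one pass; the number of state transitions is independent of the number of patterns, which is what makes the automaton attractive for parallel hardware.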
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_29", "@cite_19", "@cite_17" ], "mid": [ "2106062486", "2099964107", "1911613893", "2116079772", "2049650913" ], "abstract": [ "Network Intrusion Detection and Prevention Systems have emerged as one of the most effective ways of providing security to those connected to the network, and at the heart of almost every modern intrusion detection system is a string matching algorithm. String matching is one of the most critical elements because it allows for the system to make decisions based not just on the headers, but the actual content flowing through the network. Unfortunately, checking every byte of every packet to see if it matches one of a set of ten thousand strings becomes a computationally intensive task as network speeds grow into the tens, and eventually hundreds, of gigabits second. To keep up with these speeds a specialized device is required, one that can maintain tight bounds on worst case performance, that can be updated with new rules without interrupting operation, and one that is efficient enough that it could be included on chip with existing network chips or even into wireless devices. We have developed an approach that relies on a special purpose architecture that executes novel string matching algorithms specially optimized for implementation in our design. We show how the problem can be solved by converting the large database of strings into many tiny state machines, each of which searches for a portion of the rules and a portion of the bits of each rule. Through the careful co-design and optimization of our our architecture with a new string matching algorithm we show that it is possible to build a system that is 10 times more efficient than the currently best known approaches.", "This paper describes a simple, efficient algorithm to locate all occurrences of any of a finite number of keywords in a string of text. 
The algorithm consists of constructing a finite state pattern matching machine from the keywords and then using the pattern matching machine to process the text string in a single pass. Construction of the pattern matching machine takes time proportional to the sum of the lengths of the keywords. The number of state transitions made by the pattern matching machine in processing the text string is independent of the number of keywords. The algorithm has been used to improve the speed of a library bibliographic search program by a factor of 5 to 10.", "Intrusion Detection Systems such as Snort scan incoming packets for evidence of security threats. The most computation-intensive part of these systems is a text search against hundreds of patterns, and must be performed at wire-speed. FPGAs are particularly well suited for this task and several such systems have been proposed. In this paper we expand on previous work, in order to achieve and exceed a processing bandwidth of 11Gbps. We employ a scalable, low-latency architecture, and use extensive fine-grain pipelining to tackle the fan-out, match, and encode bottlenecks and achieve operating frequencies in excess of 340MHz for fast Virtex devices. To increase throughput, we use multiple comparators and allow for parallel matching of multiple search strings. We evaluate the area and latency cost of our approach and find that the match cost per search pattern character is between 4 and 5 logic cells.", "We develop GPU adaptations of the Aho-Corasick and multipattern Boyer-Moore string matching algorithms for the two cases GPU-to-GPU (input to the algorithms is initially in GPU memory and the output is left in GPU memory) and host-to-host (input and output are in the memory of the host CPU). For the GPU-to-GPU case, we consider several refinements to a base GPU implementation and measure the performance gain from each refinement. 
For the host-to-host case, we analyze two strategies to communicate between the host and the GPU and show that one is optimal with respect to runtime while the other requires less device memory. This analysis is done for GPUs with one I O channel to the host as well as those with 2. Experiments conducted on an NVIDIA Tesla GT200 GPU that has 240 cores running off of a Xeon 2.8 GHz quad-core host CPU show that, for the GPU-to-GPU case, our Aho-Corasick GPU adaptation achieves a speedup between 8.5 and 9.5 relative to a single-thread CPU implementation and between 2.4 and 3.2 relative to the best multithreaded implementation. For the host-to-host case, the GPU AC code achieves a speedup of 3.1 relative to a single-threaded CPU implementation. However, the GPU is unable to deliver any speedup relative to the best multithreaded code running on the quad-core host. In fact, the measured speedups for the latter case ranged between 0.74 and 0.83. Early versions of our multipattern Boyer-Moore adaptations ran 7 to 10 percent slower than corresponding versions of the AC adaptations and we did not refine the multipattern Boyer-Moore codes further.", "The Aho-Corasick (AC) algorithm is a very flexible and efficient but memory-hungry pattern matching algorithm that can scan the existence of a query string among multiple test strings looking at each character exactly once, making it one of the main options for software-base intrusion detection systems such as SNORT. We present the Split-AC algorithm, which is a reconfigurable variation of the AC algorithm that exploits domain-specific characteristics of intrusion detection to reduce considerably the FSM memory requirements. SplitAC achieves an overall reduction between 28-75 compared to the best proposed implementation." ] }
1704.02224
2605462541
We propose a novel 3D neural network architecture for 3D hand pose estimation from a single depth image. Different from previous works that mostly run on the 2D depth image domain and require intermediate or post processing to bring in supervision from 3D space, we convert the depth map to a 3D volumetric representation and feed it into a 3D convolutional neural network (CNN) to directly produce the pose in 3D, requiring no further processing. Our system does not require the ground truth reference point for initialization, and our network architecture naturally integrates both local features and global context in 3D space. To increase the coverage of the hand pose space of the training data, we render synthetic depth images by transferring hand poses from existing real image datasets. We evaluate our algorithm on two public benchmarks and achieve state-of-the-art performance. The synthetic hand pose dataset will be made available.
Hand pose estimation has been extensively studied in many previous works; comprehensive reviews of color-image- and depth-image-based hand pose estimation are given in Erol et al. @cite_8 and Supancic et al. @cite_26 . With a sufficient amount of training data, the hand pose can be learned directly, e.g. using random forests @cite_11 , or retrieved, e.g. via KNN @cite_27 . To handle heavy occlusion, 3D hand models have been used to bring in contextual regularization and refine the result @cite_17 @cite_14 @cite_12 . However, none of these works effectively took advantage of large-scale training data with state-of-the-art learning techniques.
{ "cite_N": [ "@cite_26", "@cite_11", "@cite_14", "@cite_8", "@cite_27", "@cite_12", "@cite_17" ], "mid": [ "1517258739", "2093414253", "2153169563", "2137940226", "2162254475", "2114663654", "1980265110" ], "abstract": [ "Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and have released software and evaluation code. We summarize important conclusions here: (1) Coarse pose estimation appears viable for scenes with isolated hands. However, high precision pose estimation [required for immersive virtual reality and cluttered scenes (where hands may be interacting with nearby objects and surfaces) remain a challenge. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criteria, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress.", "In this paper we present the Latent Regression Forest (LRF), a novel framework for real-time, 3D hand pose estimation from a single depth image. In contrast to prior forest-based methods, which take dense pixels as input, classify them independently and then estimate joint positions afterwards, our method can be considered as a structured coarse-to-fine search, starting from the centre of mass of a point cloud until locating all the skelet al joints. 
The searching process is guided by a learnt Latent Tree Model which reflects the hierarchical topology of the hand. Our main contributions can be summarised as follows: (i) Learning the topology of the hand in an unsupervised, data-driven manner. (ii) A new forest-based, discriminative framework for structured search in images, as well as an error regression step to avoid error accumulation. (iii) A new multi-view hand pose dataset containing 180K annotated images from 10 different subjects. Our experiments show that the LRF out-performs state-of-the-art methods in both accuracy and efficiency.", "Due to occlusions, the estimation of the full pose of a human hand interacting with an object is much more challenging than pose recovery of a hand observed in isolation. In this work we formulate an optimization problem whose solution is the 26-DOF hand pose together with the pose and model parameters of the manipulated object. Optimization seeks for the joint hand-object model that (a) best explains the incompleteness of observations resulting from occlusions due to hand-object interaction and (b) is physically plausible in the sense that the hand does not share the same physical space with the object. The proposed method is the first that solves efficiently the continuous, full-DOF, joint hand-object tracking problem based solely on markerless multicamera input. Additionally, it is the first to demonstrate how hand-object interaction can be exploited as a context that facilitates hand pose estimation, instead of being considered as a complicating factor. Extensive quantitative and qualitative experiments with simulated and real world image sequences as well as a comparative evaluation with a state-of-the-art method for pose estimation of isolated hands, support the above findings.", "Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). 
Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two types of research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review on the latter research direction, which is a very challenging problem in the context of HCI.", "A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. 
The performance of this clutter tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images.", "Articulated hand-tracking systems have been widely used in virtual reality but are rarely deployed in consumer applications due to their price and complexity. In this paper, we propose an easy-to-use and inexpensive system that facilitates 3-D articulated user-input using the hands. Our approach uses a single camera to track a hand wearing an ordinary cloth glove that is imprinted with a custom pattern. The pattern is designed to simplify the pose estimation problem, allowing us to employ a nearest-neighbor approach to track hands at interactive rates. We describe several proof-of-concept applications enabled by our system that we hope will provide a foundation for new interactions in modeling, animation control and augmented reality.", "This paper describes a new method for acquiring physically realistic hand manipulation data from multiple video streams. The key idea of our approach is to introduce a composite motion control to simultaneously model hand articulation, object movement, and subtle interaction between the hand and object. We formulate video-based hand manipulation capture in an optimization framework by maximizing the consistency between the simulated motion and the observed image data. We search an optimal motion control that drives the simulation to best match the observed image data. We demonstrate the effectiveness of our approach by capturing a wide range of high-fidelity dexterous manipulation data. We show the power of our recovered motion controllers by adapting the captured motion data to new objects with different properties. The system achieves superior performance against alternative methods such as marker-based motion capture and kinematic hand motion tracking." ] }
1704.02224
2605462541
We propose a novel 3D neural network architecture for 3D hand pose estimation from a single depth image. Different from previous works that mostly run on the 2D depth image domain and require intermediate or post processing to bring in supervision from 3D space, we convert the depth map to a 3D volumetric representation and feed it into a 3D convolutional neural network (CNN) to directly produce the pose in 3D, requiring no further processing. Our system does not require the ground truth reference point for initialization, and our network architecture naturally integrates both local features and global context in 3D space. To increase the coverage of the hand pose space of the training data, we render synthetic depth images by transferring hand poses from existing real image datasets. We evaluate our algorithm on two public benchmarks and achieve state-of-the-art performance. The synthetic hand pose dataset will be made available.
Recently, convolutional neural networks have been demonstrated to be effective at handling articulated pose estimation. The hand skeleton joint locations are estimated in the depth image domain as heat maps via classification @cite_4 @cite_13 , or directly by regression @cite_16 @cite_20 . However, to produce the final result in 3D space, the intermediate result learned on the 2D image domain has to be projected to 3D during the learning procedure or in a post process. DeepPrior @cite_16 used a CNN to regress the hand skeleton joints by exploiting pose priors. DeepModel @cite_6 proposed a deep learning approach with a new forward-kinematics-based layer, which helps to ensure the geometric validity of estimated poses. Oberweger et al. @cite_10 used a feedback loop between hand pose estimation and depth map synthesis to refine the hand pose iteratively. However, these regression models either require a careful initial alignment or are sensitive to the predefined bone lengths. In contrast, we directly convert the input to a 3D volumetric representation and perform all computation in 3D to prevent such errors.
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_6", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2075156252", "2210697964", "", "1702419847", "2473634362", "" ], "abstract": [ "We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.", "We propose an entirely data-driven approach to estimating the 3D pose of a hand given a depth image. We show that we can correct the mistakes made by a Convolutional Neural Network trained to predict an estimate of the 3D pose by using a feedback loop. The components of this feedback loop are also Deep Networks, optimized using training data. They remove the need for fitting a 3D model to the input data, which requires both a carefully designed fitting function and algorithm. We show that our approach outperforms state-of-the-art methods, and is efficient as our implementation runs at over 400 fps on a single GPU.", "", "We introduce and evaluate several architectures for Convolutional Neural Networks to predict the 3D joint locations of a hand given a depth map. We first show that a prior on the 3D pose can be easily introduced and significantly improves the accuracy and reliability of the predictions. We also show how to use context efficiently to deal with ambiguities between fingers. These two contributions allow us to significantly outperform the state-of-the-art on several challenging benchmarks, both in terms of accuracy and computation times.", "Articulated hand pose estimation plays an important role in human-computer interaction. 
Despite the recent progress, the accuracy of existing methods is still not satisfactory, partially due to the difficulty of embedded high-dimensional and non-linear regression problem. Different from the existing discriminative methods that regress for the hand pose with a single depth image, we propose to first project the query depth image onto three orthogonal planes and utilize these multi-view projections to regress for 2D heat-maps which estimate the joint positions on each plane. These multi-view heat-maps are then fused to produce final 3D hand pose estimation with learned pose priors. Experiments show that the proposed method largely outperforms state-of-the-art on a challenging dataset. Moreover, a cross-dataset experiment also demonstrates the good generalization ability of the proposed method.", "" ] }
1704.02224
2605462541
We propose a novel 3D neural network architecture for 3D hand pose estimation from a single depth image. Different from previous works that mostly run on the 2D depth image domain and require intermediate or post processing to bring in supervision from 3D space, we convert the depth map to a 3D volumetric representation and feed it into a 3D convolutional neural network (CNN) to directly produce the pose in 3D, requiring no further processing. Our system does not require the ground truth reference point for initialization, and our network architecture naturally integrates both local features and global context in 3D space. To increase the coverage of the hand pose space of the training data, we render synthetic depth images by transferring hand poses from existing real image datasets. We evaluate our algorithm on two public benchmarks and achieve state-of-the-art performance. The synthetic hand pose dataset will be made available.
3D deep learning has been used for 3D object detection @cite_19 and scene understanding @cite_2 . We extend the idea to 3D hand pose estimation. The work most related to our approach is @cite_26 , which proposed to directly estimate the hand pose in 3D space using nearest-neighbor search. We apply deep learning techniques to better leverage the 3D evidence.
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_2" ], "mid": [ "2949768986", "1517258739", "2297454107" ], "abstract": [ "We focus on the task of amodal 3D object detection in RGB-D images, which aims to produce a 3D bounding box of an object in metric form at its full extent. We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a 3D volumetric scene from a RGB-D image as input and outputs 3D object bounding boxes. In our approach, we propose the first 3D Region Proposal Network (RPN) to learn objectness from geometric shapes and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D. In particular, we handle objects of various sizes by training an amodal RPN at two different scales and an ORN to regress 3D bounding boxes. Experiments show that our algorithm outperforms the state-of-the-art by 13.8 in mAP and is 200x faster than the original Sliding Shapes. All source code and pre-trained models will be available at GitHub.", "Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and have released software and evaluation code. We summarize important conclusions here: (1) Coarse pose estimation appears viable for scenes with isolated hands. However, high precision pose estimation [required for immersive virtual reality and cluttered scenes (where hands may be interacting with nearby objects and surfaces) remain a challenge. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criteria, rigorously motivated by human experiments. 
(3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress.", "While deep neural networks have led to human-level performance on computer vision tasks, they have yet to demonstrate similar gains for holistic scene understanding. In particular, 3D context has been shown to be an extremely important cue for scene understanding - yet very little research has been done on integrating context information with deep models. This paper presents an approach to embed 3D context into the topology of a neural network trained to perform holistic scene understanding. Given a depth image depicting a 3D scene, our network aligns the observed scene with a predefined 3D scene template, and then reasons about the existence and location of each object within the scene template. In doing so, our model recognizes multiple objects in a single forward pass of a 3D convolutional neural network, capturing both global scene and local object information simultaneously. To create training data for this 3D network, we generate partly hallucinated depth images which are rendered by replacing real objects with a repository of CAD models of the same object category. Extensive experiments demonstrate the effectiveness of our algorithm compared to the state-of-the-arts. Source code and data are available at this http URL" ] }
1704.02470
2950985258
Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations - small sensor size, compact lenses and the lack of specific hardware - prevent them from achieving the quality of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured by three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera.
Image super-resolution aims at restoring the original image from its downscaled version. In @cite_13 a CNN architecture and an MSE loss are used for directly learning the low-to-high-resolution mapping. It is the first CNN-based solution to achieve top performance in single-image super-resolution, comparable with non-CNN methods @cite_12 . Subsequent works developed deeper and more complex CNN architectures (e.g., @cite_17 @cite_9 @cite_14 ). Currently, the best photo-realistic results on this task are achieved using a VGG-based loss function @cite_19 and adversarial networks @cite_4 , which turned out to be effective at recovering plausible high-frequency components.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_19", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2520164769", "2523714292", "2476548250", "2950689937", "54257720", "935139217", "2951997238" ], "abstract": [ "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. 
Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. 
In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "We propose a deep learning method for single image super-resolution (SR). 
Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "We address the problem of image upscaling in the form of single image super-resolution based on a dictionary of low- and high-resolution exemplars. Two recently proposed methods, Anchored Neighborhood Regression (ANR) and Simple Functions (SF), provide state-of-the-art quality performance. Moreover, ANR is among the fastest known super-resolution methods. ANR learns sparse dictionaries and regressors anchored to the dictionary atoms. SF relies on clusters and corresponding learned functions. We propose A+, an improved variant of ANR, which combines the best qualities of ANR and SF. A+ builds on the features and anchored regressors from ANR but instead of learning the regressors on the dictionary it uses the full training material, similar to SF. We validate our method on standard images and compare with state-of-the-art methods. We obtain improved quality (i.e. 0.2–0.7 dB PSNR better than ANR) and excellent time complexity, rendering A+ the most efficient dictionary-based super-resolution method to date.", "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers.
By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable." ] }
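Since several of the works above train with an MSE objective and report gains in PSNR (e.g., +0.15dB), it may help to recall that PSNR is a direct function of MSE. A toy pure-Python sketch on flat pixel lists; the 8-bit peak value is an assumption:

```python
import math

def mse(x, y):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB, the metric the cited
    super-resolution papers report; higher means closer to ground truth."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)
```

Minimizing MSE therefore maximizes PSNR, which is why MSE-trained networks score well on this metric while VGG-based and adversarial losses are needed to recover perceptually convincing high-frequency detail.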
1704.02470
2950985258
Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations - small sensor size, compact lenses and the lack of specific hardware, - impede them to achieve the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera.
This line of work similarly targets the removal of noise and artifacts from pictures. In @cite_27 , the authors proposed a weighted MSE together with a 3-layer CNN, while in @cite_21 it was shown that an 8-layer residual CNN performs better when using a standard mean squared error. Among other solutions are a bi-channel CNN @cite_16 , a 17-layer CNN @cite_11 , and a recurrent CNN @cite_0 that was reapplied several times to the produced results.
{ "cite_N": [ "@cite_21", "@cite_0", "@cite_27", "@cite_16", "@cite_11" ], "mid": [ "2345337169", "2525037006", "2402704303", "2201706299", "2508457857" ], "abstract": [ "This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.", "In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain accumulation. Our core ideas lie in our new rain image models and a novel deep learning architecture. We first modify the commonly used model, which is a linear combination of a rain streak layer and a background layer, by adding a binary map that locates rain streak regions. Second, we create a model consisting of a component representing rain accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which normally happen in heavy rain. Based on the first model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. 
In many cases though, rain streaks can be dense and large in their size, thus to obtain the clean background, we need spatial contextual information. For this, we utilize the dilated convolution. To handle rain accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose an iterative information feedback (IIF) network that removes rain streaks and clears up the rain accumulation iteratively and progressively. Overall, this multi-task learning and iterative information feedback benefits each other and constitutes a network that is end-to-end trainable. Our extensive evaluation on real images, particularly on heavy rain, shows the effectiveness of our novel models and architecture, outperforming the state-of-the-art methods significantly.", "We propose a depth image denoising and enhancement framework using a light convolutional network. The network contains three layers for high dimension projection, missing data completion and image reconstruction. We jointly use both depth and visual images as inputs. For the gray image, we design a pre-processing procedure to enhance the edges and remove unnecessary detail. For the depth image, we propose a data augmentation strategy to regenerate and increase essential training data. Further, we propose a weighted loss function for network training to adaptively improve the learning efficiency. We tested our algorithm on benchmark data and obtained very promising visual and quantitative results at real-time speed.", "Face hallucination method is proposed to generate high-resolution images from low-resolution ones for better visualization. However, conventional hallucination methods are often designed for controlled settings and cannot handle varying conditions of pose, resolution degree, and blur. 
In this paper, we present a new method of face hallucination, which can consistently improve the resolution of face images even with large appearance variations. Our method is based on a novel network architecture called Bi-channel Convolutional Neural Network (Bi-channel CNN). It extracts robust face representations from raw input by using deep convolutional network, then adaptively integrates two channels of information (the raw input image and face representations) to predict the high-resolution image. Experimental results show our system outperforms the prior state-of-the-art methods.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing." ] }
1704.02470
2950985258
Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations - small sensor size, compact lenses and the lack of specific hardware, - impede them to achieve the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera.
Here the goal is to recover colors that were removed from the original image. The baseline approach for this problem is to predict new values for each pixel based on its local description that consists of various hand-crafted features @cite_5 . Considerably better performance on this task was obtained using generative adversarial networks @cite_15 or a 16-layer CNN with a multinomial cross-entropy loss function @cite_10 .
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "", "2552465644", "2326925005" ], "abstract": [ "", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. 
Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks." ] }
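Posing colorization as per-pixel classification, as the last cited work does, reduces the loss at each pixel to a multinomial cross-entropy over quantized color bins, optionally class-rebalanced. A minimal sketch; the bin count and any weights are illustrative assumptions, not the paper's values:

```python
import math

def multinomial_cross_entropy(logits, target_bin, class_weights=None):
    """Cross-entropy of one pixel's logits over Q quantized color bins.

    class_weights optionally up-weights rare (vibrant) bins, mimicking the
    class-rebalancing idea; the weights themselves are an assumption here.
    """
    m = max(logits)                              # stabilized softmax
    exps = [math.exp(z - m) for z in logits]
    log_prob = math.log(exps[target_bin] / sum(exps))
    w = 1.0 if class_weights is None else class_weights[target_bin]
    return -w * log_prob
```

With uniform logits over two bins the loss is ln 2, and a class weight simply scales that pixel's contribution.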
1704.02259
2952859598
The Breadth-First Search (BFS) algorithm is an important building block for graph analysis of large datasets. The BFS parallelisation has been shown to be challenging because of its inherent characteristics, including irregular memory access patterns, data dependencies and workload imbalance, that limit its scalability. We investigate the optimisation and vectorisation of the hybrid BFS (a combination of top-down and bottom-up approaches for BFS) on the Xeon Phi, which has advanced vector processing capabilities. The results show that our new implementation improves by 33% for a one million vertices graph, compared to the state-of-the-art.
The key contribution of this work builds on the studies carried out by Gao in @cite_17 and @cite_8 . In the first study, they present the vectorisation of the BFS algorithm using vector intrinsic functions, which was outperformed by @cite_10 , which clarified the impact of data alignment, prefetching, and the vector unit usage rate. The second study is related to the vectorisation of the hybrid BFS algorithm. Similarly to the BFS implementation in the first study, Gao @cite_17 present the process of vectorising not only the top-down but also the bottom-up approach of the hybrid BFS algorithm. Again, little detail of their implementation is provided, so in Section we present our vectorisation of the bottom-up approach of the hybrid BFS algorithm, including a systematic analysis of the vector unit utilisation. The results of our hybrid BFS algorithm are better in terms of performance compared with those presented in @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_17" ], "mid": [ "2104281071", "2952131645", "" ], "abstract": [ "Data-intensive applications have drawn more and more attention in the last few years. The basic graph traversal algorithm, the breadth-first search (BFS), a typical data-intensive application, is widely used and the Graph 500 benchmark uses it to rank the performance of supercomputers. The Intel Many Integrated Core (MIC) architecture, which is designed for highly parallel computing, has not been fully evaluated for graph traversal. In this paper, we discuss how to use the MIC to accelerate the BFS. We present some optimizations for native BFS algorithms and develop a heterogeneous BFS algorithm. For the native BFS algorithm, we mainly discuss how to exploit many cores and wide-vector processing units. The performance of our optimized native BFS implementation is 5.3 times that of the highest published performance for graphics processing units (GPU). For the heterogeneous BFS algorithm, the performance of the general processing unit (CPU) and MIC cooperative computing can gain an increase in speed of approximately 1.4 times than that of a CPU for graphs with 2M vertices. This work is valuable for using a MIC to accelerate the BFS. It is also a general guidance for a MIC used for data-intensive applications.", "Breadth First Search (BFS) is a building block for graph algorithms and has recently been used for large scale analysis of information in a variety of applications including social networks, graph databases and web searching. Due to its importance, a number of different parallel programming models and architectures have been exploited to optimize the BFS. However, due to the irregular memory access patterns and the unstructured nature of the large graphs, its efficient parallelization is a challenge. 
The Xeon Phi is a massively parallel architecture available as an off-the-shelf accelerator, which includes a powerful 512 bit vector unit with optimized scatter and gather functions. Given its potential benefits, work related to graph traversing on this architecture is an active area of research. We present a set of experiments in which we explore architectural features of the Xeon Phi and how best to exploit them in a top-down BFS algorithm but the techniques can be applied to the current state-of-the-art hybrid, top-down plus bottom-up, algorithms. We focus on the exploitation of the vector unit by developing an improved highly vectorized OpenMP parallel algorithm, using vector intrinsics, and understanding the use of data alignment and prefetching. In addition, we investigate the impact of hyperthreading and thread affinity on performance, a topic that appears under researched in the literature. As a result, we achieve what we believe is the fastest published top-down BFS algorithm on the version of Xeon Phi used in our experiments. The vectorized BFS topdown source code presented in this paper can be available on request as free-to-use software.", "" ] }
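The hybrid BFS these works vectorise combines a top-down step (expand the frontier's neighbours) with a bottom-up step (each unvisited vertex checks whether a parent lies in the frontier). A minimal sequential Python sketch of the direction switch; the frontier-size threshold is a simplifying assumption, not the tuned heuristic of the cited papers:

```python
def hybrid_bfs(adj, source):
    """Direction-optimizing BFS returning a vertex -> level mapping.

    adj: dict mapping each vertex to a list of neighbours (undirected).
    The switching threshold below is illustrative; real implementations
    tune it and vectorise both traversal directions.
    """
    level = {source: 0}
    frontier = {source}
    depth = 0
    threshold = len(adj) // 4 + 1
    while frontier:
        depth += 1
        if len(frontier) < threshold:   # small frontier: top-down step
            nxt = {v for u in frontier for v in adj[u] if v not in level}
        else:                           # large frontier: bottom-up step
            nxt = {v for v in adj if v not in level
                   and any(u in frontier for u in adj[v])}
        for v in nxt:
            level[v] = depth
        frontier = nxt
    return level
```

The bottom-up step scans unvisited vertices instead of the frontier, which is cheaper when the frontier covers much of the graph; this is the trade-off the hybrid scheme exploits.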
1704.02191
2606677068
We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the (1 + λ) evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the (1 + λ) EA finds the optimum in an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n). This time is asymptotically smaller than the optimization time of the classic (1 + λ) EA. Previous work shows that this performance is best-possible among all λ-parallel mutation-based unbiased black-box algorithms. This result shows that the new way of adjusting the mutation rate can find optimal dynamic parameter values on the fly. Since our adjustment mechanism is simpler than the ones previously used for adjusting the mutation rate and does not have parameters itself we are optimistic that it will find other applications.
The first to conduct a rigorous runtime analysis of the (1 + λ) EA were Jansen, De Jong, and Wegener @cite_36 . They proved, among other results, that when optimizing OneMax a linear speed-up exists up to a population size of @math , that is, for @math , finding the optimal solution takes an expected number of @math generations, whereas for larger @math at least @math generations are necessary. This picture was completed in @cite_7 with a proof that the expected number of generations taken to find the optimum is @math . The implicit constants were determined in @cite_35 , giving the bound of @math , for any constant @math , as mentioned in the introduction.
{ "cite_N": [ "@cite_36", "@cite_35", "@cite_7" ], "mid": [ "1970101133", "2520275545", "2015766439" ], "abstract": [ "Evolutionary algorithms (EAs) generally come with a large number of parameters that have to be set before the algorithm can be used. Finding appropriate settings is a difficult task. The influence of these parameters on the efficiency of the search performed by an evolutionary algorithm can be very high. But there is still a lack of theoretically justified guidelines to help the practitioner find good values for these parameters. One such parameter is the offspring population size. Using a simplified but still realistic evolutionary algorithm, a thorough analysis of the effects of the offspring population size is presented. The result is a much better understanding of the role of offspring population size in an EA and suggests a simple way to dynamically adapt this parameter when necessary.", "The (1+λ) EA with mutation probability c/n, where c>0 is an arbitrary constant, is studied for the classical OneMax function. Its expected optimization time is analyzed exactly (up to lower order terms) as a function of c and λ. It turns out that 1/n is the only optimal mutation probability if λ = o(ln n ln ln n / ln ln ln n), which is the cut-off point for linear speed-up. However, if λ is above this cut-off point then the standard mutation probability 1/n is no longer the only optimal choice. Instead, the expected number of generations is (up to lower order terms) independent of c, irrespectively of it being less than 1 or greater. The theoretical results are obtained by a careful study of order statistics of the binomial distribution and variable drift theorems for upper and lower bounds. Experimental supplements shed light on the optimal mutation probability for small problem sizes.", "Abstract We analyze how the ( 1 + λ ) evolutionary algorithm (EA) optimizes linear pseudo-Boolean functions.
We prove that it finds the optimum of any linear function within an expected number of O((1/λ) n log n + n) iterations. We also show that this bound is sharp for some linear functions, e.g., the binary value function. Since previous works shows an asymptotically smaller runtime for the special case of OneMax , it follows that for the ( 1 + λ ) EA different linear functions may have run-times of different asymptotic order. The proof of our upper bound heavily relies on a number of classic and recent drift analysis methods. In particular, we show how to analyze a process displaying different types of drifts in different phases. Our work corrects a wrongfully claimed better asymptotic runtime in an earlier work [13] . We also use our methods to analyze the runtime of the ( 1 + λ ) EA on the OneMax test function and obtain a new upper bound of O(n log log λ / log λ) for the case that λ is larger than O(log n log log n / log log log n) ; this is the cut-off point where a linear speed-up ceases to exist. While our results are mostly spurred from a theory-driven interest, they also show that choosing the right size of the offspring population can be crucial. For both the binary value and the OneMax test function we observe that once a linear speed-up ceases to exist, in fact, the speed-up from a larger λ reduces to sub-logarithmic (still at the price of a linear increase of the cost of each generation)." ] }
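The two-rate self-adjustment described in the abstract above (half the offspring mutate at twice the current rate, half at half of it; the rate of the subpopulation containing the best offspring is adopted) can be sketched for OneMax as follows. The initial rate, the clamping of the rate to [2/n, 1/4], and the deterministic rate update are simplifying assumptions of this sketch:

```python
import random

def one_max(bits):
    return sum(bits)

def two_rate_ea(n, lam, seed=0):
    """(1 + lambda) EA with a two-rate self-adjusting mutation scheme
    on OneMax. Returns (optimal bit string, number of fitness evaluations).
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    rate = 2.0 / n
    evals = 0
    while one_max(parent) < n:
        best, best_fit, best_rate = None, -1, rate
        for i in range(lam):
            # first half of the offspring: doubled rate; second half: halved
            r = 2.0 * rate if i < lam // 2 else rate / 2.0
            r = min(max(r, 2.0 / n), 0.25)   # keep the rate in [2/n, 1/4]
            child = [1 - b if rng.random() < r else b for b in parent]
            evals += 1
            fit = one_max(child)
            if fit > best_fit:
                best, best_fit, best_rate = child, fit, r
        rate = best_rate                      # adopt the winning subpopulation's rate
        if best_fit >= one_max(parent):       # elitist plus-selection
            parent = best
    return parent, evals
```

Each generation costs λ evaluations; the point of the scheme is that the rate drifts toward good values on its own, with no extra parameters to tune.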
1704.02191
2606677068
We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the (1 + λ) evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the (1 + λ) EA finds the optimum in an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n). This time is asymptotically smaller than the optimization time of the classic (1 + λ) EA. Previous work shows that this performance is best-possible among all λ-parallel mutation-based unbiased black-box algorithms. This result shows that the new way of adjusting the mutation rate can find optimal dynamic parameter values on the fly. Since our adjustment mechanism is simpler than the ones previously used for adjusting the mutation rate and does not have parameters itself we are optimistic that it will find other applications.
Aside from the optimization behavior on OneMax, not too much is known for the (1 + λ) EA, or is at least not made explicit (it is easy to see that waiting times for an improvement which are larger than @math reduce by a factor of @math compared to one-individual offspring populations). Results made explicit are the @math expected runtime (number of generations) on OneMax @cite_36 , the worst-case @math expected runtime on linear functions @cite_7 , and the @math runtime estimate for minimum spanning trees valid for @math @cite_43 , where @math denotes the number of vertices of the input graph, @math the number of edges, and @math the maximum of the integral edge weights.
{ "cite_N": [ "@cite_36", "@cite_43", "@cite_7" ], "mid": [ "1970101133", "2153401898", "2015766439" ], "abstract": [ "Evolutionary algorithms (EAs) generally come with a large number of parameters that have to be set before the algorithm can be used. Finding appropriate settings is a difficult task. The influence of these parameters on the efficiency of the search performed by an evolutionary algorithm can be very high. But there is still a lack of theoretically justified guidelines to help the practitioner find good values for these parameters. One such parameter is the offspring population size. Using a simplified but still realistic evolutionary algorithm, a thorough analysis of the effects of the offspring population size is presented. The result is a much better understanding of the role of offspring population size in an EA and suggests a simple way to dynamically adapt this parameter when necessary.", "Randomized search heuristics, among them randomized local search and evolutionary algorithms, are applied to problems whose structure is not well understood, as well as to problems in combinatorial optimization. The analysis of these randomized search heuristics has been started for some well-known problems, and this approach is followed here for the minimum spanning tree problem. After motivating this line of research, it is shown that randomized search heuristics find minimum spanning trees in expected polynomial time without employing the global technique of greedy algorithms.", "Abstract We analyze how the ( 1 + λ ) evolutionary algorithm (EA) optimizes linear pseudo-Boolean functions. We prove that it finds the optimum of any linear function within an expected number of O((1/λ) n log n + n) iterations. We also show that this bound is sharp for some linear functions, e.g., the binary value function.
Since previous works shows an asymptotically smaller runtime for the special case of OneMax , it follows that for the ( 1 + λ ) EA different linear functions may have run-times of different asymptotic order. The proof of our upper bound heavily relies on a number of classic and recent drift analysis methods. In particular, we show how to analyze a process displaying different types of drifts in different phases. Our work corrects a wrongfully claimed better asymptotic runtime in an earlier work [13] . We also use our methods to analyze the runtime of the ( 1 + λ ) EA on the OneMax test function and obtain a new upper bound of O(n log log λ / log λ) for the case that λ is larger than O(log n log log n / log log log n) ; this is the cut-off point where a linear speed-up ceases to exist. While our results are mostly spurred from a theory-driven interest, they also show that choosing the right size of the offspring population can be crucial. For both the binary value and the OneMax test function we observe that once a linear speed-up ceases to exist, in fact, the speed-up from a larger λ reduces to sub-logarithmic (still at the price of a linear increase of the cost of each generation)." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
More recently, some methods apply semi-supervised learning and graph regularization to cross-modal common representation learning. For example, Joint Graph Regularized Heterogeneous Metric Learning (JGRHML) @cite_4 adopts metric learning and graph regularization to learn the projection matrices, constructing a joint graph regularization term on the data in the learned metric space. Joint Representation Learning (JRL) @cite_47 constructs a separate graph for each modality to learn a common space, exploiting semantic information with semi-supervised regularization and sparse regularization. @cite_32 adopts a multimodal graph regularization term on the projected data with an iterative algorithm, which aims to preserve inter-modality and intra-modality similarity relationships.
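These three methods share a common schematic form: a correlation or metric-learning loss on projected data plus a graph-Laplacian smoothness term. The following hedged sketch uses illustrative symbols ($P_1$, $P_2$, $F$, $L$, $\Omega$ are generic placeholders, not the exact notation of any of the cited papers):

```latex
\min_{P_1, P_2}\;
  \mathcal{L}\bigl(P_1^{\top} X_1,\, P_2^{\top} X_2\bigr)
  \;+\; \lambda \,\operatorname{tr}\bigl(F^{\top} L F\bigr)
  \;+\; \eta \,\Omega(P_1, P_2)
```

Here $P_1$ and $P_2$ project the two modalities into a common space, $F$ stacks the projected data, $L = D - W$ is a graph Laplacian built from intra-modality and inter-modality similarities (the joint graph in JGRHML, per-modality graphs in JRL, the multimodal graph in @cite_32 ), and $\Omega$ is a sparse or semi-supervised regularizer as in JRL.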
{ "cite_N": [ "@cite_47", "@cite_4", "@cite_32" ], "mid": [ "2013535308", "2295088417", "2211092169" ], "abstract": [ "Cross-media retrieval has become a key problem in both research and application, in which users can search results across all of the media types (text, image, audio, video, and 3-D) by submitting a query of any media type. How to measure the content similarity among different media is the key challenge. Existing cross-media retrieval methods usually focus on modeling the pairwise correlation or semantic information separately. In fact, these two kinds of information are complementary to each other and optimizing them simultaneously can further improve the accuracy. In this paper, we propose a novel feature learning algorithm for cross-media data, called joint representation learning (JRL), which is able to explore jointly the correlation and semantic information in a unified optimization framework. JRL integrates the sparse and semisupervised regularization for different media types into one unified optimization problem, while existing feature learning methods generally focus on a single media type. On one hand, JRL learns sparse projection matrix for different media simultaneously, so different media can align with each other, which is robust to the noise. On the other hand, both the labeled data and unlabeled data of different media types are explored. Unlabeled examples of different media types increase the diversity of training data and boost the performance of joint representation learning. Furthermore, JRL can not only reduce the dimension of the original features, but also incorporate the cross-media correlation into the final representation, which further improves the performance of both cross-media retrieval and single-media retrieval. 
Experiments on two datasets with up to five media types show the effectiveness of our proposed approach, as compared with the state-of-the-art methods.", "As the major component of big data, unstructured heterogeneous multimedia content such as text, image, audio, video and 3D increasing rapidly on the Internet. User demand a new type of cross-media retrieval where user can search results across various media by submitting query of any media. Since the query and the retrieved results can be of different media, how to learn a heterogeneous metric is the key challenge. Most existing metric learning algorithms only focus on a single media where all of the media objects share the same data representation. In this paper, we propose a joint graph regularized heterogeneous metric learning (JGRHML) algorithm, which integrates the structure of different media into a joint graph regularization. In JGRHML, different media are complementary to each other and optimizing them simultaneously can make the solution smoother for both media and further improve the accuracy of the final metric. Based on the heterogeneous metric, we further learn a high-level semantic metric through label propagation. JGRHML is effective to explore the semantic relationship hidden across different modalities. The experimental results on two datasets with up to five media types show the effectiveness of our proposed approach.", "Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods just focus on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. 
To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, the @math -norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data,which preserves the inter-modality and intra-modality similarity relationships.An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art subspace approaches." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
Deep learning has shown strong power in modeling non-linear correlation, and has achieved state-of-the-art performance in some single-modal applications, such as object detection @cite_27 @cite_26 and image and video classification @cite_0 @cite_20 . Inspired by this, researchers attempt to model the complex cross-modal correlation with DNN, and the existing methods can be divided into two learning stages: the first learning stage is to generate separate representation for each modality, and the second learning stage is to learn common representation, which is the main focus of most existing methods based on DNN @cite_29 @cite_25 . We briefly introduce some representative cross-modal retrieval methods based on DNN as follows:
{ "cite_N": [ "@cite_26", "@cite_29", "@cite_0", "@cite_27", "@cite_25", "@cite_20" ], "mid": [ "2950800384", "1964073652", "", "2949650786", "2557865186", "" ], "abstract": [ "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. 
A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We propose a Deep Belief Network architecture for learning a joint representation of multimodal data. 
The model denes a probability distribution over the space of multimodal inputs and allows sampling from the conditional distributions over each data modality. This makes it possible for the model to create a multimodal representation even when some data modalities are missing. Our experimental results on bi-modal data consisting of images and text show that the Multimodal DBN can learn a good generative model of the joint space of image and text inputs that is useful for lling in missing data so it can be used both for image annotation and image retrieval. We further demonstrate that using the representation discovered by the Multimodal DBN our model can significantly outperform SVMs and LDA on discriminative tasks.", "" ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
Multimodal Deep Belief Network (Multimodal DBN) @cite_25 is proposed to learn common representation for the data of different modalities. In the first learning stage, for separate representation, it adopts a two-layer DBN for each modality to model the distribution of the original features, where a Gaussian Restricted Boltzmann Machine (RBM) is adopted for image instances, while the Replicated Softmax model @cite_45 is used for text instances. An RBM, the basic component of a DBN, has several visible units $\mathbf{v}$ and hidden units $\mathbf{h}$, and its energy function and joint distribution are defined as follows: $E(\mathbf{v},\mathbf{h};\theta)=-\mathbf{a}^{\top}\mathbf{v}-\mathbf{b}^{\top}\mathbf{h}-\mathbf{v}^{\top}W\mathbf{h}$ and $P(\mathbf{v},\mathbf{h};\theta)=\frac{1}{Z(\theta)}\exp\bigl(-E(\mathbf{v},\mathbf{h};\theta)\bigr)$, where $\theta=\{\mathbf{a},\mathbf{b},W\}$ is the collection of three parameters ($\mathbf{a},\mathbf{b}$ are the bias parameters and $W$ is the weight parameter) and $Z(\theta)$ is the normalizing constant. Then, in the second learning stage, the Multimodal DBN applies a joint RBM on top of the two separate DBNs and combines them by modeling the joint distribution of data from different modalities to obtain common representation.
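As a numeric check of these definitions, the following sketch (illustrative sizes and random, untrained parameters, not the model of @cite_25 ) computes the binary-RBM energy and verifies that the joint distribution exp(-E)/Z sums to one over all states:

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 5, 3                                     # illustrative sizes
a, b = rng.normal(size=n_v), rng.normal(size=n_h)   # visible / hidden biases
W = rng.normal(size=(n_v, n_h))                     # weight matrix

def energy(v, h):
    # E(v, h; theta) = -a^T v - b^T h - v^T W h
    return -a @ v - b @ h - v @ W @ h

# Z sums exp(-E) over all 2^n_v * 2^n_h binary states (tractable at toy size).
states_v = [np.array(s) for s in np.ndindex(*(2,) * n_v)]
states_h = [np.array(s) for s in np.ndindex(*(2,) * n_h)]
Z = sum(np.exp(-energy(v, h)) for v in states_v for h in states_h)

def joint_prob(v, h):
    # P(v, h; theta) = exp(-E(v, h; theta)) / Z(theta)
    return np.exp(-energy(v, h)) / Z

total = sum(joint_prob(v, h) for v in states_v for h in states_h)
print(round(float(total), 6))  # prints 1.0: a valid probability distribution
```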
{ "cite_N": [ "@cite_45", "@cite_25" ], "mid": [ "2100002341", "2557865186" ], "abstract": [ "We introduce a two-layer undirected graphical model, called a \"Replicated Softmax\", that can be used to model and automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents. We present efficient learning and inference algorithms for this model, and show how a Monte-Carlo based method, Annealed Importance Sampling, can be used to produce an accurate estimate of the log-probability the model assigns to test data. This allows us to demonstrate that the proposed model is able to generalize much better compared to Latent Dirichlet Allocation in terms of both the log-probability of held-out documents and the retrieval accuracy.", "We propose a Deep Belief Network architecture for learning a joint representation of multimodal data. The model denes a probability distribution over the space of multimodal inputs and allows sampling from the conditional distributions over each data modality. This makes it possible for the model to create a multimodal representation even when some data modalities are missing. Our experimental results on bi-modal data consisting of images and text show that the Multimodal DBN can learn a good generative model of the joint space of image and text inputs that is useful for lling in missing data so it can be used both for image annotation and image retrieval. We further demonstrate that using the representation discovered by the Multimodal DBN our model can significantly outperform SVMs and LDA on discriminative tasks." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
Bimodal Autoencoder (Bimodal AE) @cite_19 is based on a deep autoencoder network, which is actually an extension of the RBM for modeling multiple modalities. It has two subnetworks to learn separate representation in the first learning stage, and then the two subnetworks are linked at the shared joint layer to generate common representation in the second learning stage. Bimodal AE reconstructs different modalities such as image and text jointly by minimizing the reconstruction error between the original features and the reconstructed representation, so it can learn high-order correlation between multiple modalities while preserving the reconstruction information at the same time.
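The shared-joint-layer idea can be sketched minimally as follows (a hedged toy example with random, untrained one-layer subnetworks; the sizes and fusion rule are illustrative, not the authors' exact architecture): each modality is encoded by its own subnetwork, fused into a shared code, and both inputs are reconstructed from that code.

```python
import numpy as np

rng = np.random.default_rng(1)
d_img, d_txt, d_shared = 8, 6, 4  # illustrative dimensionalities
# Untrained weights for the two modality subnetworks (tied for decoding).
W_img = rng.normal(size=(d_img, d_shared))
W_txt = rng.normal(size=(d_txt, d_shared))

def shared_code(img, txt):
    # Each modality is encoded separately, then fused at the shared joint layer.
    return np.tanh(img @ W_img + txt @ W_txt)

def reconstruction_error(img, txt):
    h = shared_code(img, txt)
    img_hat, txt_hat = np.tanh(h @ W_img.T), np.tanh(h @ W_txt.T)
    # Bimodal AE minimizes the joint reconstruction error of both modalities.
    return np.mean((img_hat - img) ** 2) + np.mean((txt_hat - txt) ** 2)

print(reconstruction_error(rng.normal(size=d_img), rng.normal(size=d_txt)))
```

Training would adjust the weights to drive this joint error down, forcing the shared code to carry information about both modalities.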
{ "cite_N": [ "@cite_19" ], "mid": [ "2184188583" ], "abstract": [ "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
Correspondence Autoencoder (Corr-AE) @cite_29 first adopts DBN to generate separate representation in the first learning stage. Then, in the second learning stage, it jointly models the correlation and reconstruction information with two subnetworks linked at the code layer, minimizing a combination of the representation learning error within each modality and the correlation learning error between different modalities. Corr-AE, which only reconstructs the input itself, has two similarly structured extensions: Corr-Cross-AE and Corr-Full-AE. Corr-Cross-AE attempts to reconstruct the input from different modalities, while Corr-Full-AE reconstructs both the input itself and the input of different modalities.
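The combined objective can be sketched as follows (a hedged toy example with random, untrained one-layer encoders; `alpha` plays the role of the balance parameter between the two error terms, and the distance between hidden codes stands in for the correlation learning error):

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, d_h = 8, 6, 4  # illustrative modality and code dimensionalities
Wx = rng.normal(size=(d_x, d_h))  # untrained encoder weights, modality x
Wy = rng.normal(size=(d_y, d_h))  # untrained encoder weights, modality y

def encode(v, W):
    return np.tanh(v @ W)          # hidden code

def decode(h, W):
    return np.tanh(h @ W.T)        # reconstruction (tied weights)

def corr_ae_loss(x, y, alpha=0.2):
    hx, hy = encode(x, Wx), encode(y, Wy)
    # Representation learning error: each autoencoder reconstructs its own input.
    rec = np.mean((decode(hx, Wx) - x) ** 2) + np.mean((decode(hy, Wy) - y) ** 2)
    # Correlation learning error: distance between the two hidden codes.
    corr = np.mean((hx - hy) ** 2)
    return (1 - alpha) * rec + alpha * corr

x, y = rng.normal(size=d_x), rng.normal(size=d_y)
print(corr_ae_loss(x, y))
```

Minimizing `corr` pushes the two codes toward common information, while `rec` keeps each code informative enough to reconstruct its own modality; the Corr-Cross-AE and Corr-Full-AE variants change only which inputs the decoders reconstruct.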
{ "cite_N": [ "@cite_29" ], "mid": [ "1964073652" ], "abstract": [ "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
Cross-Media Multiple Deep Network (CMDN, our previous conference paper @cite_41 ) jointly models the complementary intra-modality and inter-modality correlation between different modalities in the first learning stage. It should be noted that two independent networks are adopted in this stage of CMDN: specifically, a Stacked Autoencoder (SAE) @cite_30 is used to model intra-modality correlation, while a Multimodal DBN is used to capture inter-modality correlation. In the second learning stage, a hierarchical learning strategy is adopted to learn the cross-modal correlation with a two-level network, and common representation is finally learned by a stacked network based on Bimodal AE. In summary, the above DNN-based methods have three limitations, as follows.
{ "cite_N": [ "@cite_41", "@cite_30" ], "mid": [ "2574447816", "2025768430" ], "abstract": [ "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. 
This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
In , existing methods such as @cite_29 @cite_25 only model intra-modality correlation to generate separate representations, but ignore the rich complementary context provided by inter-modality correlation, which should be preserved to learn better separate representations. Although our previous work @cite_41 also considers intra-modality and inter-modality correlation in the first learning stage, it adopts two independent networks to model each of them respectively, and so cannot fully exploit the complex relationship between intra-modality and inter-modality correlation. In contrast, our proposed CCL approach models the two kinds of complementary information by jointly optimizing intra-modality reconstruction information and inter-modality pairwise similarity.
{ "cite_N": [ "@cite_41", "@cite_29", "@cite_25" ], "mid": [ "2574447816", "1964073652", "2557865186" ], "abstract": [ "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. 
A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "We propose a Deep Belief Network architecture for learning a joint representation of multimodal data. The model defines a probability distribution over the space of multimodal inputs and allows sampling from the conditional distributions over each data modality. This makes it possible for the model to create a multimodal representation even when some data modalities are missing. Our experimental results on bi-modal data consisting of images and text show that the Multimodal DBN can learn a good generative model of the joint space of image and text inputs that is useful for filling in missing data so it can be used both for image annotation and image retrieval. We further demonstrate that using the representation discovered by the Multimodal DBN our model can significantly outperform SVMs and LDA on discriminative tasks." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
In , existing methods learn the common representation by adopting shallow network architectures with single-loss regularization @cite_36 @cite_5 . However, intra-modality and inter-modality correlation have intrinsic relevance, and this relevance is ignored by single-loss regularization, which limits generalization performance. The multi-task learning (MTL) framework has been proposed to enhance generalization ability by constructing a series of learning tasks that are relevant to and can mutually boost each other. Recently, extensive research has applied multi-task learning to deep architectures. DeepID2 @cite_24 simultaneously learns face identification and verification as two tasks to achieve better face recognition accuracy. @cite_9 propose Faster R-CNN, which also consists of two learning tasks, object bound and objectness score prediction, and boosts object detection accuracy. Besides, a joint multi-task learning algorithm @cite_8 is proposed to predict attributes in images. However, most of these research efforts have focused on the single-modal scenario. Inspired by the above methods, we apply multi-task learning to common representation learning, aiming to balance intra-modality semantic category constraints and inter-modality pairwise similarity constraints to further improve the accuracy of cross-modal retrieval.
{ "cite_N": [ "@cite_8", "@cite_36", "@cite_9", "@cite_24", "@cite_5" ], "mid": [ "1907729166", "1523385540", "2613718673", "", "1949478088" ], "abstract": [ "This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model’s parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.", "We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. 
Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "", "This paper addresses the problem of matching images and captions in a joint latent space learnt with deep canonical correlation analysis (DCCA). The image and caption data are represented by the outputs of the vision and text based deep neural networks. 
The high dimensionality of the features presents a great challenge in terms of memory and speed complexity when used in DCCA framework. We address these problems by a GPU implementation and propose methods to deal with overfitting. This makes it possible to evaluate DCCA approach on popular caption-image matching benchmarks. We compare our approach to other recently proposed techniques and present state of the art results on three datasets." ] }
1704.02116
2950566557
Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on Deep Neural Network (DNN): The first learning stage is to generate separate representation for each modality, and the second learning stage is to get the cross-modal common representation. However, the existing methods have three limitations: (1) In the first learning stage, they only model intra-modality correlation, but ignore inter-modality correlation with rich complementary context. (2) In the second learning stage, they only adopt shallow networks with single-loss regularization, but ignore the intrinsic relevance of intra-modality and inter-modality correlation. (3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems, this paper proposes a cross-modal correlation learning (CCL) approach with multi-grained fusion by hierarchical network, and the contributions are as follows: (1) In the first learning stage, CCL exploits multi-level association with joint optimization to preserve the complementary context from intra-modality and inter-modality correlation simultaneously. (2) In the second learning stage, a multi-task learning strategy is designed to adaptively balance the intra-modality semantic category constraints and inter-modality pairwise similarity constraints. (3) CCL adopts multi-grained modeling, which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets, the experimental results show our CCL approach achieves the best performance.
Furthermore, the existing DNN-based methods @cite_29 @cite_41 @cite_19 consider only the original instances. Although patches have been exploited in some traditional methods such as @cite_6 , the accuracy of these methods is limited by the traditional framework, which cannot effectively model the highly non-linear correlation among patches. Our proposed CCL approach can fully exploit the coarse-grained instances as well as the rich complementary fine-grained patches by DNN, and fuses the multi-grained information to capture the intrinsic correlation between different modalities.
{ "cite_N": [ "@cite_41", "@cite_19", "@cite_29", "@cite_6" ], "mid": [ "2574447816", "2184188583", "1964073652", "" ], "abstract": [ "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. 
In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. 
We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.", "" ] }
1704.02205
2606997881
Estimating correspondence between two images and extracting the foreground object are two challenges in computer vision. With dual-lens smart phones, such as iPhone 7Plus and Huawei P9, coming into the market, two images of slightly different views provide us new information to unify the two topics. We propose a joint method to tackle them simultaneously via a joint fully connected conditional random field (CRF) framework. The regional correspondence is used to handle textureless regions in matching and make our CRF system computationally efficient. Our method is evaluated over 2,000 new image pairs, and produces promising results on challenging portrait images.
Motion information has also been applied to object segmentation, as discussed in @cite_45 @cite_30 @cite_20 . However, these methods need many frames to produce a reasonable result.
{ "cite_N": [ "@cite_30", "@cite_45", "@cite_20" ], "mid": [ "1973536668", "2099330788", "2157130620" ], "abstract": [ "We propose a unified variational formulation for joint motion estimation and segmentation with explicit occlusion handling. This is done by a multi-label representation of the flow field, where each label corresponds to a parametric representation of the motion. We use a convex formulation of the multi-label Potts model with label costs and show that the asymmetric map-uniqueness criterion can be integrated into our formulation by means of convex constraints. Explicit occlusion handling eliminates errors otherwise created by the regularization. As occlusions can occur only at object boundaries, a large number of objects may be required. By using a fast primal-dual algorithm we are able to handle several hundred motion segments. Results are shown on several classical motion segmentation and optical flow examples.", "Describing a video sequence in terms of a small number of coherently moving segments is useful for tasks ranging from video compression to event perception. A promising approach is to view the motion segmentation problem in a mixture estimation framework. However, existing formulations generally use only the motion, data and thus fail to make use of static cues when segmenting the sequence. Furthermore, the number of models is either specified in advance or estimated outside the mixture model framework. In this work we address both of these issues. We show how to add spatial constraints to the mixture formulations and present a variant of the EM algorithm that males use of both the form and the motion constraints. Moreover this algorithm estimates the number of segments given knowledge about the level of model failure expected in the sequence. 
The algorithm's performance is illustrated on synthetic and real image sequences.", "Layered models allow scene segmentation and motion estimation to be formulated together and to inform one another. Traditional layered motion methods, however, employ fairly weak models of scene structure, relying on locally connected Ising Potts models which have limited ability to capture long-range correlations in natural scenes. To address this, we formulate a fully-connected layered model that enables global reasoning about the complicated segmentations of real objects. Optimization with fully-connected graphical models is challenging, and our inference algorithm leverages recent work on efficient mean field updates for fully-connected conditional random fields. These methods can be implemented efficiently using high-dimensional Gaussian filtering. We combine these ideas with a layered flow model, and find that the long-range connections greatly improve segmentation into figure-ground layers when compared with locally connected MRF models. Experiments on several benchmark datasets show that the method can recover fine structures and large occlusion regions, with good flow accuracy and much lower computational cost than previous locally-connected layered models." ] }
1704.02205
2606997881
Estimating correspondence between two images and extracting the foreground object are two challenges in computer vision. With dual-lens smart phones, such as iPhone 7Plus and Huawei P9, coming into the market, two images of slightly different views provide us new information to unify the two topics. We propose a joint method to tackle them simultaneously via a joint fully connected conditional random field (CRF) framework. The regional correspondence is used to handle textureless regions in matching and make our CRF system computationally efficient. Our method is evaluated over 2,000 new image pairs, and produces promising results on challenging portrait images.
Recently, deep convolutional neural networks (CNNs) have achieved great success in semantic segmentation. CNNs are applied mainly in two ways. The first is to learn image features and apply pixel classification @cite_27 @cite_1 @cite_23 . The second is to train an end-to-end CNN model that maps input images to segmentation labels using fully convolutional networks (FCN) @cite_7 .
{ "cite_N": [ "@cite_27", "@cite_1", "@cite_7", "@cite_23" ], "mid": [ "2115150266", "1938976761", "1903029394", "2022508996" ], "abstract": [ "We address the problem of segmenting and recognizing objects in real world images, focusing on challenging articulated categories such as humans and other animals. For this purpose, we propose a novel design for region-based object detectors that integrates efficiently top-down information from scanning-windows part models and global appearance cues. Our detectors produce class-specific scores for bottom-up regions, and then aggregate the votes of multiple overlapping candidates through pixel classification. We evaluate our approach on the PASCAL segmentation challenge, and report competitive performance with respect to current leading techniques. On VOC2010, our method obtains the best results in 6 20 categories and the highest performance on articulated objects.", "We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6 average accuracy on the PASCAL VOC 2012 test set.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction." ] }
1704.02205
2606997881
Estimating correspondence between two images and extracting the foreground object are two challenges in computer vision. With dual-lens smart phones, such as iPhone 7Plus and Huawei P9, coming into the market, two images of slightly different views provide us new information to unify the two topics. We propose a joint method to tackle them simultaneously via a joint fully connected conditional random field (CRF) framework. The regional correspondence is used to handle textureless regions in matching and make our CRF system computationally efficient. Our method is evaluated over 2,000 new image pairs, and produces promising results on challenging portrait images.
To improve performance, DeepLab @cite_50 and CRFasRNN @cite_21 employ dense CRFs to refine predicted score maps. Liu et al. @cite_16 extended general CRFs to deep parsing networks, which achieve state-of-the-art accuracy on the VOC semantic segmentation task @cite_26 . Most CNNs are constructed hierarchically from convolution, pooling and rectification layers. They target challenging semantic segmentation with class labels. In terms of segmentation quality, interactive segmentation still performs better since users are involved.
{ "cite_N": [ "@cite_21", "@cite_26", "@cite_50", "@cite_16" ], "mid": [ "", "2031489346", "2964288706", "2111077768" ], "abstract": [ "", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. 
Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "This paper addresses semantic image segmentation by incorporating rich information into Markov Random Field (MRF), including high-order relations and mixture of label contexts. Unlike previous works that optimized MRFs using iterative algorithm, we solve MRF by proposing a Convolutional Neural Network (CNN), namely Deep Parsing Network (DPN), which enables deterministic end-toend computation in a single forward pass. Specifically, DPN extends a contemporary CNN architecture to model unary terms and additional layers are carefully devised to approximate the mean field algorithm (MF) for pairwise terms. It has several appealing properties. First, different from the recent works that combined CNN and MRF, where many iterations of MF were required for each training image during back-propagation, DPN is able to achieve high performance by approximating one iteration of MF. Second, DPN represents various types of pairwise terms, making many existing works as its special cases. Third, DPN makes MF easier to be parallelized and speeded up in Graphical Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC 2012 dataset, where a single DPN model yields a new state-of-the-art segmentation accuracy of 77.5 ." ] }
1704.02249
2606143959
Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm. We show, for the first time, how to train watershed computation jointly with boundary map prediction. The estimator for the merging priorities is cast as a neural network that is convolutional (over space) and recurrent (over iterations). The latter allows learning of complex shape priors. The method gives the best known seeded segmentation results on the CREMI segmentation challenge.
Various authors demonstrated that learned boundary probabilities (or, more generally, boundary strengths) are superior to designed ones. In the most common setting, these probabilities are defined on the pixel grid, i.e. on the nodes of a grid graph, and serve as input to a node-based watershed algorithm. Training minimizes a suitable loss (e.g. squared or cross-entropy loss) between the predicted probabilities and manually generated ground truth boundary maps in an unstructured manner, i.e. over all pixels independently. This approach works especially well with powerful models like CNNs. In the important application of connectomics (see section ), this was first demonstrated by @cite_18 . A much deeper network @cite_30 was the winning entry of the ISBI 2012 Neuro-Segmentation Challenge @cite_1 . Results could be improved further by progress in CNN architectures and more sophisticated data augmentation, using e.g. U-Nets @cite_4 , FusionNets @cite_14 or networks based on inception modules @cite_21 . Clustering of the resulting watershed superpixels by means of the GALA algorithm @cite_15 @cite_35 (using altitudes from @cite_1 resp. @cite_4 ) or the lifted multicut @cite_21 (using altitudes from their own CNN) led to additional performance gains.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_14", "@cite_4", "@cite_21", "@cite_1", "@cite_15" ], "mid": [ "2167510172", "2555969222", "2129981175", "2582996697", "2952232639", "2584383026", "1898703532", "2080858319" ], "abstract": [ "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or non-membrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "Reconstructing a synaptic wiring diagram, or connectome, from electron microscopy (EM) images of brain tissue currently requires many hours of manual annotation or proofreading (Kasthuri and Lichtman, 2010; Lichtman and Sanes, 2008; Seung, 2009). The desire to reconstruct ever larger and more complex networks has pushed the collection of ever larger EM datasets. 
A cubic millimeter of raw imaging data would take up 1 PB of storage and present an annotation project that would be impractical without relying heavily on automatic segmentation methods. The RhoanaNet image processing pipeline was developed to automatically segment large volumes of EM data and ease the burden of manual proofreading and annotation. Based on (, 2015), we updated every stage of the software pipeline to provide better throughput performance and higher quality segmentation results. We used state of the art deep learning techniques to generate improved membrane probability maps, and Gala (Nunez-, 2014) was used to agglomerate 2D segments into 3D objects. We applied the RhoanaNet pipeline to four densely annotated EM datasets, two from mouse cortex, one from cerebellum and one from mouse lateral geniculate nucleus (LGN). All training and test data is made available for benchmark comparisons. The best segmentation results obtained gave @math scores of 0.9054 and 0.9182 for the cortex datasets, 0.9438 for LGN, and 0.9150 for Cerebellum. The RhoanaNet pipeline is open source software. All source code, training data, test data, and annotations for all four benchmark datasets are available at this http URL.
On this dataset, Markov random field (MRF), conditional random field (CRF), and anisotropic diffusion algorithms perform about the same as simple thresholding, but superior performance is obtained with a convolutional network containing over 34,000 adjustable parameters. When restored by this convolutional network, the images are clean enough to be used for segmentation, whereas the other approaches fail in this respect. We do not believe that convolutional networks are fundamentally superior to MRFs as a representation for image processing algorithms. On the contrary, the two approaches are closely related. But in practice, it is possible to train complex convolutional networks, while even simple MRF models are hindered by problems with Bayesian learning and inference procedures. Our results suggest that high model complexity is the single most important factor for good performance, and this is possible with convolutional networks.", "Electron microscopic connectomics is an ambitious research direction with the goal of studying comprehensive brain connectivity maps by using high-throughput, nano-scale microscopy. One of the main challenges in connectomics research is developing scalable image analysis algorithms that require minimal user intervention. Recently, deep learning has drawn much attention in computer vision because of its exceptional performance in image classification tasks. For this reason, its application to connectomic analyses holds great promise, as well. In this paper, we introduce a novel deep neural network architecture, FusionNet, for the automatic segmentation of neuronal structures in connectomics data. FusionNet leverages the latest advances in machine learning, such as semantic segmentation and residual neural networks, with the novel introduction of summation-based skip connections to allow a much deeper network architecture for a more accurate segmentation. 
We demonstrate the performance of the proposed method by comparing it with state-of-the-art electron microscopy (EM) segmentation methods from the ISBI EM segmentation challenge. We also show the segmentation results on two different tasks including cell membrane and cell body segmentation and a statistical analysis of cell morphology.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Reference EPFL-ARTICLE-226946doi:10.1038 nmeth.4151View record in Web of Science Record created on 2017-03-27, modified on 2017-07-13", "To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with ground truth from human experts. The winning team had no prior experience with EM images, and employed a convolutional network. 
This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.", "We aim to improve segmentation through the use of machine learning tools during region agglomeration. We propose an active learning approach for performing hierarchical agglomerative segmentation from superpixels. Our method combines multiple features at all scales of the agglomerative process, works for data with an arbitrary number of dimensions, and scales to very large datasets. We advocate the use of variation of information to measure segmentation accuracy, particularly in 3D electron microscopy (EM) images of neural tissue, and using this metric demonstrate an improvement over competing algorithms in EM and natural images." ] }
1704.02249
2606143959
Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm. We show, for the first time, how to train watershed computation jointly with boundary map prediction. The estimator for the merging priorities is cast as a neural network that is convolutional (over space) and recurrent (over iterations). The latter allows learning of complex shape priors. The method gives the best known seeded segmentation results on the CREMI segmentation challenge.
When ground truth is provided in terms of region labels rather than boundary maps, a suitable boundary map must be created first. Simple morphological operations were found sufficient in @cite_4 , while @cite_21 preferred smooth probabilities derived from a distance transform starting at the true boundaries. Outside connectomics, @cite_22 achieved superior results by defining the ground truth altitude map in terms of the vector distance transform, which allows optimizing the prediction's gradient direction and height separately.
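Deriving a smooth boundary altitude map from region labels via a distance transform, as in the variant above, can be sketched as follows. The boundary-marking rule and the exponential squashing are illustrative assumptions, not the exact recipe of any cited paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Ground truth given as region labels; derive a smooth boundary altitude map.
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1  # two regions split at column 4

# Mark a pixel as boundary if its right or lower neighbour has another label.
boundary = np.zeros_like(labels, dtype=bool)
boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]

# Distance to the nearest boundary pixel, squashed into a smooth (0, 1]
# altitude that peaks on the boundary and decays away from it.
dist = distance_transform_edt(~boundary)
altitude = np.exp(-dist / 2.0)
```

A network regressing `altitude` instead of a hard 0/1 boundary map receives a smoother, better-conditioned training target.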
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_22" ], "mid": [ "2584383026", "2952232639", "2557889580" ], "abstract": [ "Reference EPFL-ARTICLE-226946doi:10.1038 nmeth.4151View record in Web of Science Record created on 2017-03-27, modified on 2017-07-13", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Most contemporary approaches to instance segmentation use complex pipelines involving conditional random fields, recurrent neural networks, object proposals, or template matching schemes. In this paper, we present a simple yet powerful end-to-end convolutional neural network to tackle this task. Our approach combines intuitions from the classical watershed transform and modern deep learning to produce an energy map of the image where object instances are unambiguously represented as energy basins. We then perform a cut at a single energy level to directly yield connected components corresponding to object instances. 
Our model achieves more than double the performance over the state-of-the-art on the challenging Cityscapes Instance Level Segmentation task." ] }
1704.02249
2606143959
Learned boundary maps are known to outperform hand-crafted ones as a basis for the watershed algorithm. We show, for the first time, how to train watershed computation jointly with boundary map prediction. The estimator for the merging priorities is cast as a neural network that is convolutional (over space) and recurrent (over iterations). The latter allows learning of complex shape priors. The method gives the best known seeded segmentation results on the CREMI segmentation challenge.
Alternatively, one can employ the edge-based watershed algorithm and learn boundary probabilities for the grid graph's edges. The corresponding ground truth simply indicates whether the endpoints of each edge are supposed to be in different segments. From a theoretical perspective, the distinction between node- and edge-based watersheds is not very significant because both can be transformed into each other @cite_25 . However, the algorithmic details differ considerably. Edge-based altitude learning was first proposed in @cite_26 , who used hand-crafted features and logistic regression. Subsequently, @cite_32 employed a CNN to learn features and boundary probabilities simultaneously. Watershed superpixel generation and clustering on the basis of these altitudes was investigated in @cite_2 .
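The edge-based ground truth described above — an edge is a boundary edge iff its two endpoint pixels carry different region labels — reduces to two array comparisons on a 4-connected grid graph. A minimal sketch on a toy label image:

```python
import numpy as np

# Region-label ground truth on a grid graph; an edge is a "boundary edge"
# iff its two endpoint pixels carry different labels.
labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [2, 2, 1]])

# Horizontal edges connect (r, c) to (r, c+1); vertical edges (r, c) to (r+1, c).
horizontal_gt = labels[:, :-1] != labels[:, 1:]   # shape (3, 2)
vertical_gt   = labels[:-1, :] != labels[1:, :]   # shape (2, 3)
```

These two boolean arrays are exactly the regression targets for an edge-based altitude learner.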
{ "cite_N": [ "@cite_26", "@cite_32", "@cite_25", "@cite_2" ], "mid": [ "2137276306", "2169805405", "2006227270", "1507090658" ], "abstract": [ "The paper studies the problem of combining region and boundary cues for natural image segmentation. We employ a large database of manually segmented images in order to learn an optimal affinity function between pairs of pixels. These pairwise affinities can then be used to cluster the pixels into visually coherent groups. Region cues are computed as the similarity in brightness, color, and texture between image patches. Boundary cues are incorporated by looking for the presence of an \"intervening contour\", a large gradient along a straight line connecting two pixels. We first use the dataset of human segmentations to individually optimize parameters of the patch and gradient features for brightness, color, and texture cues. We then quantitatively measure the power of different feature combinations by computing the precision and recall of classifiers trained using those features. The mutual information between the output of the classifiers and the same-segment indicator function provides an alternative evaluation technique that yields identical conclusions. As expected, the best classifier makes use of brightness, color, and texture features, in both patch and gradient forms. We find that for brightness, the gradient cue outperforms the patch similarity. In contrast, using color patch similarity yields better results than using color gradients. Texture is the most powerful of the three channels, with both patches and gradients carrying significant independent information. Interestingly, the proximity of the two pixels does not add any information beyond that provided by the similarity cues. 
We also find that the convexity assumptions made by the intervening contour approach are supported by the ecological statistics of the dataset.", "Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.", "The watershed is an efficient and versatile segmentation tool, as it partitions the images into disjoint catchment basins. We study the watershed on node or edge weighted graphs. We do not aim at constructing a partition of the nodes but consider the catchment zones, i.e., the attraction zones of a drop of water. Often, such zones largely overlap. In a first part, we show how to derive from a node or edge weighted graph a flooding graph with the same trajectories of a drop of water, whether one considers its node weights alone or its edge weights alone. 
In a second part we show how to reduce the number of possible trajectories of a drop of water in order to generate watershed partitions.", "We present a method for hierarchical image segmentation that defines a disaffinity graph on the image, over-segments it into watershed basins, defines a new graph on the basins, and then merges basins with a modified, size-dependent version of single linkage clustering. The quasilinear runtime of the method makes it suitable for segmenting large images. We illustrate the method on the challenging problem of segmenting 3D electron microscopic brain images." ] }
1704.02071
2607372796
We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding. But corresponding neural networks for regression either stack many layers or apply large kernels to achieve it, which is computationally very costly. Our pyramid structure can greatly enlarge the field while not sacrificing computation efficiency. Extra benefit includes adaptive network depth and progressive upsampling for quasi-realtime testing on VGA-size input. Our method profits a broad set of applications, such as depth RGB image restoration, completion, noise artifact removal, edge refinement, image filtering, image enhancement and colorization.
Another important topic is image deconvolution, which needs image priors, such as gradient distribution, to constrain the solution. Xu et al. @cite_51 demonstrated that simple CNNs can approximate the inverse kernel well and achieved decent results. As shown in Figure , large kernels for convolution require heavy computation.
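In the noiseless, spatially invariant case, the inverse kernel that a CNN is trained to approximate has a regularized closed form in the Fourier domain (a Wiener-style filter). The blur kernel and regularization weight below are illustrative assumptions, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))

# Blur with a small horizontal kernel via circular convolution in Fourier space.
kernel = np.zeros((32, 32))
kernel[0, [31, 0, 1]] = [0.2, 0.6, 0.2]
K = np.fft.fft2(kernel)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * K))

# Regularized inverse kernel (Wiener-style): K* / (|K|^2 + eps). Its
# spatial-domain counterpart is what a CNN can be trained to approximate.
eps = 1e-3
K_inv = np.conj(K) / (np.abs(K) ** 2 + eps)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * K_inv))
```

The spatial support of this inverse kernel is large even for a small blur, which is why approximating it with stacked convolutions demands a large receptive field.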
{ "cite_N": [ "@cite_51" ], "mid": [ "2124964692" ], "abstract": [ "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods." ] }
1704.02071
2607372796
We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding. But corresponding neural networks for regression either stack many layers or apply large kernels to achieve it, which is computationally very costly. Our pyramid structure can greatly enlarge the field while not sacrificing computation efficiency. Extra benefit includes adaptive network depth and progressive upsampling for quasi-realtime testing on VGA-size input. Our method profits a broad set of applications, such as depth RGB image restoration, completion, noise artifact removal, edge refinement, image filtering, image enhancement and colorization.
Besides filtering, CNNs were also used for compression artifact removal @cite_44 , dirt/rain removal @cite_1 , and denoising @cite_28 @cite_45 . Simply put, these frameworks employ CNNs to regress image structure. We show that our deep convolutional pyramid can better address global-optimization-like decomposition problems because of our large receptive fields with information fusion.
{ "cite_N": [ "@cite_44", "@cite_28", "@cite_1", "@cite_45" ], "mid": [ "2142683286", "2146337213", "2154815154", "2471801048" ], "abstract": [ "Lossy compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restores sharpened images that are accompanied with ringing effects. Inspired by the deep convolutional networks (DCN) on super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar \"easy to hard\" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low level vision problems. Our method shows superior performance than the state-of-the-arts both on the benchmark datasets and the real-world use cases (i.e. Twitter).", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. 
Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.", "Image restoration, including image denoising, super resolution, inpainting, and so on, is a well-studied problem in computer vision and image processing, as well as a test bed for low-level image modeling algorithms. In this work, we propose a very deep fully convolutional auto-encoder network for image restoration, which is a encoding-decoding framework with symmetric convolutional-deconvolutional layers. In other words, the network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers capture the abstraction of image contents while eliminating corruptions. Deconvolutional layers have the capability to upsample the feature maps and recover the image details. 
To deal with the problem that deeper networks tend to be more difficult to train, we propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains better results." ] }
1704.02071
2607372796
We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding. But corresponding neural networks for regression either stack many layers or apply large kernels to achieve it, which is computationally very costly. Our pyramid structure can greatly enlarge the field while not sacrificing computation efficiency. Extra benefit includes adaptive network depth and progressive upsampling for quasi-realtime testing on VGA-size input. Our method profits a broad set of applications, such as depth RGB image restoration, completion, noise artifact removal, edge refinement, image filtering, image enhancement and colorization.
Considering sparse coding and CNNs, an auto-encoder framework for blind image inpainting was proposed in @cite_28 . Köhler et al. @cite_13 directly trained a CNN taking a specified mask and the original image as input. Ren et al. @cite_23 extended the convolution layer to Shepard convolution. These methods also produce limited receptive fields. Recently, a semantic inpainting framework @cite_29 incorporated perceptual and contextual losses. It achieves good results even with large holes on images with similar content.
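Shepard convolution amounts to normalized convolution: filter the masked image and the mask with the same kernel and divide, so that only known pixels contribute to each output value. A minimal sketch with a uniform kernel; the constant test image is an assumption chosen to make the result easy to verify:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
image = np.full((16, 16), 3.0)        # constant image, easy to verify
mask = rng.random((16, 16)) > 0.3     # True where the pixel is known

# Normalized (Shepard-style) filtering: filter numerator and mask alike,
# then divide, so unknown pixels carry zero weight.
filled_num = uniform_filter(image * mask, size=5)
filled_den = uniform_filter(mask.astype(float), size=5)
restored = np.where(mask, image, filled_num / np.maximum(filled_den, 1e-8))
```

Since the known pixels all equal 3.0, every interpolated value is also 3.0, illustrating that unknown pixels never pollute the output.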
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_13", "@cite_23" ], "mid": [ "2146337213", "2479644247", "", "2184016288" ], "abstract": [ "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. 
The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80 corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.", "", "Deep learning has recently been introduced to the field of low-level computer vision and image processing. Promising results have been obtained in a number of tasks including super-resolution, inpainting, deconvolution, filtering, etc. However, previously adopted neural network approaches such as convolutional neural networks and sparse auto-encoders are inherently with translation invariant operators. We found this property prevents the deep learning approaches from outperforming the state-of-the-art if the task itself requires translation variant interpolation (TVI). In this paper, we draw on Shepard interpolation and design Shepard Convolutional Neural Networks (ShCNN) which efficiently realizes end-to-end trainable TVI operators in the network. We show that by adding only a few feature maps in the new Shepard layers, the network is able to achieve stronger results than a much deeper architecture. Superior performance on both image in-painting and super-resolution is obtained where our system outperforms previous ones while keeping the running time competitive." ] }
1704.02071
2607372796
We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding. But corresponding neural networks for regression either stack many layers or apply large kernels to achieve it, which is computationally very costly. Our pyramid structure can greatly enlarge the field while not sacrificing computation efficiency. Extra benefit includes adaptive network depth and progressive upsampling for quasi-realtime testing on VGA-size input. Our method profits a broad set of applications, such as depth RGB image restoration, completion, noise artifact removal, edge refinement, image filtering, image enhancement and colorization.
Many CNNs are proposed for high-level recognition tasks. Famous ones include AlexNet @cite_15 , VGG @cite_17 , GoogLeNet @cite_32 and ResNet @cite_46 . Based on these frameworks, many networks have been proposed to solve semantic image segmentation as pixel classification. They include FCN @cite_31 , DeepLab @cite_10 , SegNet @cite_11 , U-Net @cite_35 and those of @cite_38 @cite_4 @cite_25 @cite_42 . Compared with these image-to-label or image-to-label-map frameworks, our CNP aims for image processing directly by image-to-image regression.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_11", "@cite_38", "@cite_4", "@cite_15", "@cite_42", "@cite_32", "@cite_46", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2952232639", "2952632681", "360623563", "2286929393", "", "", "2950668883", "2950179405", "", "2952865063", "2508741746", "1686810756" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. 
We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels.
We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "", "", "Semantic image segmentation is an essential component of modern autonomous driving systems, as an accurate understanding of the surrounding scene is crucial to navigation and action planning. Current state-of-the-art approaches in semantic image segmentation rely on pre-trained networks that were initially developed for classifying images as a whole. While these networks exhibit outstanding recognition performance (i.e., what is visible?), they lack localization accuracy (i.e., where precisely is something located?). Therefore, additional processing steps have to be performed in order to obtain pixel-accurate segmentation masks at the full image resolution. 
To alleviate this problem we propose a novel ResNet-like architecture that exhibits strong localization and recognition performance. We combine multi-scale context with pixel-level accuracy by using two processing streams within our network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. The two streams are coupled at the full image resolution using residuals. Without additional processing steps and without pre-training, our approach achieves an intersection-over-union score of 71.8% on the Cityscapes dataset.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks.
Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state of the art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "CNN architectures have terrific recognition performance but rely on spatial pooling which makes it difficult to adapt them to tasks that require dense, pixel-accurate labeling. This paper makes two contributions: (1) We demonstrate that while the apparent spatial resolution of convolutional feature maps is low, the high-dimensional feature representation contains significant sub-pixel localization information.
(2) We describe a multi-resolution reconstruction architecture based on a Laplacian pyramid that uses skip connections from higher resolution feature maps and multiplicative gating to successively refine segment boundaries reconstructed from lower-resolution maps. This approach yields state-of-the-art semantic segmentation results on the PASCAL VOC and Cityscapes segmentation benchmarks without resorting to more complex random-field inference or instance detection driven architectures.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
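Several of the segmentation abstracts above (the context-module and DeepLab entries) rest on dilated, or atrous, convolution. The following 1-D numpy sketch shows how a fixed number of kernel weights covers a growing receptive field as the dilation rate increases; it is an illustrative toy, not code from any cited system.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """1-D dilated ('atrous') convolution with 'valid' padding.

    A kernel of size k with dilation d covers a receptive field of
    d * (k - 1) + 1 input samples while still using only k weights.
    """
    k = len(kernel)
    span = dilation * (k - 1) + 1                  # receptive field size
    out_len = len(x) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        taps = x[i : i + span : dilation]          # sample every d-th input
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(10, dtype=float)
box = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, box, dilation=1))  # sums over 3 adjacent samples
print(dilated_conv1d(x, box, dilation=2))  # same 3 weights, field of 5 samples
```

Stacking layers with exponentially growing dilation rates is what gives the cited context module its exponentially expanding receptive field at constant parameter count.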
1704.02312
2606676075
Sentence simplification reduces semantic complexity to benefit people with language impairments. Previous simplification studies on the sentence level and word level have achieved promising results but also meet great challenges. For sentence-level studies, sentences after simplification are fluent but sometimes are not really simplified. For word-level studies, words are simplified but also have potential grammar errors due to different usages of words before and after simplification. In this paper, we propose a two-step simplification framework by combining both the word-level and the sentence-level simplifications, making use of their corresponding advantages. Based on the two-step framework, we implement a novel constrained neural generation model to simplify sentences given simplified words. The final results on Wikipedia and Simple Wikipedia aligned datasets indicate that our method yields better performance than various baselines.
As for word-level simplification, there are impressive results as well. extract over 30,000 paraphrase rules for lexical simplification by identifying aligned words in English Wikipedia and Simple English Wikipedia. Glavaš employ GloVe @cite_14 to generate synonyms for the complex words. Instead of using the parallel datasets, their approach only requires a single corpus. propose a new word embeddings model to deal with the limitation that the traditional models do not accommodate ambiguous lexical semantics. Pavlick release about 4,500,000 simple paraphrase rules by extracting normal paraphrases rules from a bilingual corpus and reranking the simplicity scores of these rules by a supervised model. Thanks to their efforts, there is a large number of effective methods for identifying complex words, finding corresponding simple synonyms and selecting qualified substitutions. However, sometimes simplifying complicated words directly with simple synonyms violates grammar rules and usages.
{ "cite_N": [ "@cite_14" ], "mid": [ "2250539671" ], "abstract": [ "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition." ] }
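The GloVe abstract above describes a weighted log-bilinear regression trained only on the nonzero entries of a word-word co-occurrence matrix. Here is a toy numpy sketch of that objective on a 3-word vocabulary; the hyperparameters and the tiny co-occurrence matrix are made up for illustration and do not come from the paper or its released code.

```python
import numpy as np

def train_glove(cooc, dim=8, steps=2000, lr=0.05, x_max=100.0, alpha=0.75, seed=0):
    """Toy GloVe: fit w_i . w~_j + b_i + b~_j ~= log X_ij on nonzero X_ij,
    using the paper's weighting f(x) = min(1, (x / x_max) ** alpha)."""
    rng = np.random.default_rng(seed)
    n = cooc.shape[0]
    W, Wc = rng.normal(0, 0.1, (n, dim)), rng.normal(0, 0.1, (n, dim))
    b, bc = np.zeros(n), np.zeros(n)
    idx = np.argwhere(cooc > 0)                    # train only on nonzero entries
    for _ in range(steps):
        for i, j in idx:
            x = cooc[i, j]
            f = min(1.0, (x / x_max) ** alpha)     # down-weight rare pairs
            err = W[i] @ Wc[j] + b[i] + bc[j] - np.log(x)
            g = f * err                            # gradient of weighted squared error
            W[i], Wc[j] = W[i] - lr * g * Wc[j], Wc[j] - lr * g * W[i]
            b[i] -= lr * g
            bc[j] -= lr * g
    return W, Wc, b, bc

# Symmetric toy co-occurrence counts for a 3-word vocabulary.
cooc = np.array([[0.0, 50.0, 2.0], [50.0, 0.0, 40.0], [2.0, 40.0, 0.0]])
W, Wc, b, bc = train_glove(cooc)
pred = W[0] @ Wc[1] + b[0] + bc[1]
print(pred, np.log(50.0))  # the fitted score approaches log X_01
```

Skipping the zero entries is the key efficiency point the abstract makes: the loss touches only the observed co-occurrences, not the full sparse matrix.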
1704.02264
2606021713
We consider MultiCriteria Decision Analysis models which are defined over discrete attributes, taking a finite number of values. We do not assume that the model is monotonically increasing with respect to the attributes values. Our aim is to define an importance index for such general models, considering that they are equivalent to @math -ary games (multichoice games). We show that classical solutions like the Shapley value are not suitable for such models, essentially because of the efficiency axiom which does not make sense in this context. We propose an importance index which is a kind of average variation of the model along the attributes. We give an axiomatic characterization of it.
Grabisch and Lange @cite_13 did not use unanimity games, but took an axiomatic approach to define a Shapley value in a more general context for games over lattices. They define the Shapley value for multichoice games as follows: @math
{ "cite_N": [ "@cite_13" ], "mid": [ "2114828263" ], "abstract": [ "Multichoice games have been introduced by Hsiao and Raghavan as a generalization of classical cooperative games. An important notion in cooperative game theory is the core of the game, as it contains the rational imputations for players. We propose two definitions for the core of a multichoice game, the first one is called the precore and is a direct generalization of the classical definition. We show that the precore coincides with the definition proposed by Faigle, and that it contains unbounded imputations, which makes its application questionable. A second definition is proposed, imposing normalization at each level, causing the core to be a convex closed set. We study its properties, introducing balancedness and marginal worth vectors, and defining the Weber set and the pre-Weber set. We show that the classical properties of inclusion of the (pre)core into the (pre)-Weber set as well as their equality remain valid. A last section makes a comparison with the core defined by van den" ] }
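The record above concerns importance indices that generalize the Shapley value to multichoice games. For the classical case being generalized, here is a short permutation-based computation of the Shapley value of a TU cooperative game, illustrated on the standard glove game; this is textbook material, not the paper's @math-ary construction.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley value of a TU cooperative game, computed by averaging
    each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v(with_p) - v(coalition)     # marginal contribution
            coalition = with_p
    return {p: phi[p] / len(orders) for p in phi}

# Glove game: a pair is worth 1 and needs the left glove (player 1)
# plus one right glove (player 2 or 3).
def v(S):
    return 1.0 if 1 in S and (2 in S or 3 in S) else 0.0

print(shapley_values([1, 2, 3], v))  # player 1, holding the scarce glove, gets 2/3
```

The efficiency axiom (the values sum to v of the grand coalition) holds here by construction, which is exactly the axiom the abstract argues no longer makes sense for non-monotone multichoice models.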
1704.02431
2952424733
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a novel cross-modality learning framework and it is based on two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping, modeling the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives as input an RGB image and outputs the detection results. In this way, features which are both discriminative and robust to bad illumination conditions are learned. Importantly, at test time, only the second pipeline is considered and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and it is competitive with previous methods on the popular Caltech dataset.
Due to its relevance in many fields, such as robotics and video surveillance, the problem of pedestrian detection has received considerable interest in the research community. Over the years, a large variety of features and algorithms have been proposed for improving detection systems, both with respect to speed @cite_39 @cite_36 @cite_35 @cite_27 and accuracy @cite_38 @cite_41 @cite_16 @cite_1 @cite_43 @cite_46 .
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_36", "@cite_41", "@cite_1", "@cite_39", "@cite_43", "@cite_27", "@cite_46", "@cite_16" ], "mid": [ "2265127172", "", "2136724559", "2098064689", "2950561226", "2133986780", "", "1882819926", "2953327122", "2081021369" ], "abstract": [ "We present a new real-time approach to object detection that exploits the efficiency of cascade classifiers with the accuracy of deep neural networks. Deep networks have been shown to excel at classification tasks, and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and very accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second. The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is competitive with the very best reported results. It is the first work we are aware of that achieves very high accuracy while running in real-time.", "", "We present a new pedestrian detector that improves both in speed and quality over state-of-the-art. By efficiently handling different scales and transferring computation from test time to training time, detection speed is improved. When processing monocular images, our system provides high quality detections at 50 fps. We also propose a new method for exploiting geometric context extracted from stereo images. On a single CPU+GPU desktop machine, we reach 135 fps, when processing street scenes, from rectified input to detections output.", "We propose a simple yet effective approach to the problem of pedestrian detection which outperforms the current state-of-the-art. Our new features are built on the basis of low-level visual features and spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus the robustness of the detection process. We then directly optimise the partial area under the ROC curve (pAUC) measure, which concentrates detection performance in the range of most practical importance. The combination of these factors leads to a pedestrian detector which outperforms all competitors on all of the standard benchmark datasets. We advance state-of-the-art results by lowering the average miss rate from 13% to 11% on the INRIA benchmark, 41% to 37% on the ETH benchmark, 51% to 42% on the TUD-Brussels benchmark and 36% to 29% on the Caltech-USA benchmark.", "This paper starts from the observation that multiple top performing pedestrian detectors can be modelled by using an intermediate layer filtering low-level features in combination with a boosted decision forest. Based on this observation we propose a unifying framework and experimentally explore different filter families. We report extensive results enabling a systematic analysis. Using filtered channel features we obtain top performance on the challenging Caltech and KITTI datasets, while using only HOG+LUV as low-level features. When adding optical flow features we further improve detection quality and report the best known results on the Caltech dataset, reaching 93% recall at 1 FPPI.", "This paper describes a pedestrian detection system that integrates image intensity information with motion information. We use a detection style algorithm that scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to take advantage of both motion and appearance information to detect a walking person. Past approaches have built detectors based on motion information or detectors based on appearance information, but ours is the first to combine both sources of information in a single detector.
The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20 × 15 pixels), and has a very low false positive rate. Our approach builds on the detection work of Viola and Jones. Novel contributions of this paper include: (i) development of a representation of image motion which is extremely efficient, and (ii) implementation of a state-of-the-art pedestrian detection system which operates on low resolution images under difficult conditions (such as rain and snow).", "", "In this work, we consider the problem of pedestrian detection in natural scenes. Intuitively, instances of pedestrians with different spatial scales may exhibit dramatically different features. Thus, large variance in instance scales, which results in undesirable large intra-category variance in features, may severely hurt the performance of modern object instance detection methods. We argue that this issue can be substantially alleviated by the divide-and-conquer philosophy. Taking pedestrian detection as an example, we illustrate how we can leverage this philosophy to develop a Scale-Aware Fast R-CNN (SAF R-CNN) framework. The model introduces multiple built-in sub-networks which detect pedestrians with scales from disjoint ranges. Outputs from all the sub-networks are then adaptively combined to generate the final detection results that are shown to be robust to large variance in instance scales, via a gate function defined over the sizes of object proposals. Extensive evaluations on several challenging pedestrian detection datasets well demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our method achieves state-of-the-art performance on Caltech, INRIA, and ETH, and obtains competitive results on KITTI.", "Deep learning methods have achieved great success in pedestrian detection, owing to its ability to learn features from raw pixels.
However, they mainly capture middle-level representations, such as pose of pedestrian, but confuse positive with hard negative samples, which have large ambiguity, e.g. the shape and appearance of 'tree trunk' or 'wire pole' are similar to pedestrian in certain viewpoint. This ambiguity can be distinguished by high-level representation. To this end, this work jointly optimizes pedestrian detection with semantic tasks, including pedestrian attributes (e.g. 'carrying backpack') and scene attributes (e.g. 'road', 'tree', and 'horizontal'). Rather than expensively annotating scene attributes, we transfer attributes information from existing scene segmentation datasets to the pedestrian dataset, by proposing a novel deep model to learn high-level features from multiple tasks and multiple data sources. Since distinct tasks have distinct convergence rates and data from different datasets have different distributions, a multi-task objective function is carefully designed to coordinate tasks and reduce discrepancies among datasets. The importance coefficients of tasks and network parameters in this objective function can be iteratively estimated. Extensive evaluations show that the proposed approach outperforms the state-of-the-art on the challenging Caltech and ETH datasets, where it reduces the miss rates of previous deep models by 17 and 5.5 percent, respectively.", "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components.
Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions." ] }
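The pedestrian-detection abstracts above all score and prune overlapping bounding boxes at some stage of their pipelines. As shared background rather than any cited paper's method, here is a minimal sketch of intersection-over-union and greedy non-maximum suppression; the boxes and threshold are illustrative.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes and
    drop any box overlapping an already-kept one by more than `thresh` IoU."""
    order = np.argsort(scores)[::-1]               # highest score first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(int(i))
    return keep

boxes = [(0, 0, 10, 20), (1, 1, 11, 21), (30, 30, 40, 50)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first heavily and is dropped
```

Detection benchmarks such as Caltech, which the abstracts report miss rates on, also use an IoU criterion of this form to match detections to ground-truth pedestrians.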
1704.02431
2952424733
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a novel cross-modality learning framework and it is based on two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping, modeling the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives as input an RGB image and outputs the detection results. In this way, features which are both discriminative and robust to bad illumination conditions are learned. Importantly, at test time, only the second pipeline is considered and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and it is competitive with previous methods on the popular Caltech dataset.
Recently, notable performance gains have been achieved with the adoption of powerful deep networks @cite_3 @cite_35 , thanks to their ability to learn discriminative features directly from raw pixels. In @cite_26 , a CNN pre-trained with an unsupervised method based on convolutional sparse coding was presented. The occlusion problem was addressed in @cite_42 , where a deep belief net was employed to learn the visibility masks for different body parts. This work was extended in @cite_10 to model relations among multiple targets. More recently, in @cite_15 DeepParts, a robust framework for handling severe occlusions, was presented. Differently from previous deep learning models addressing the occlusion problem, DeepParts does not rely on a single detector but it is based on multiple part detectors. Tian et al. @cite_46 learned discriminative representations for pedestrian detection by considering semantic attributes of people and scenes. Cai et al. @cite_19 introduced Complexity-Aware Cascade Training (CompACT), successfully integrating many heterogeneous features, both hand crafted and derived from CNNs. Zhang et al. @cite_2 presented an approach based on the Region Proposal Network (RPN) @cite_4 and boosted forests.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_4", "@cite_46", "@cite_42", "@cite_3", "@cite_19", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "2265127172", "2949966521", "2953106684", "2953327122", "1986905809", "2151454023", "2950167387", "2497039038", "2200528286", "2152945944" ], "abstract": [ "We present a new real-time approach to object detection that exploits the efficiency of cascade classifiers with the accuracy of deep neural networks. Deep networks have been shown to excel at classification tasks, and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and very accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second. The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is competitive with the very best reported results. It is the first work we are aware of that achieves very high accuracy while running in real-time.", "Pedestrian detection is a problem of considerable practical interest. Adding to the list of successful applications of deep learning methods to vision, we report state-of-the-art and competitive results on all major pedestrian datasets with a convolutional network model. The model uses a few new twists, such as multi-stage features, connections that skip layers to integrate global shape information with local distinctive motif information, and an unsupervised method based on convolutional sparse coding to pre-train the filters at each stage.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations.
Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Deep learning methods have achieved great success in pedestrian detection, owing to its ability to learn features from raw pixels. However, they mainly capture middle-level representations, such as pose of pedestrian, but confuse positive with hard negative samples, which have large ambiguity, e.g. the shape and appearance of 'tree trunk' or 'wire pole' are similar to pedestrian in certain viewpoint. This ambiguity can be distinguished by high-level representation. To this end, this work jointly optimizes pedestrian detection with semantic tasks, including pedestrian attributes (e.g. 'carrying backpack') and scene attributes (e.g. 'road', 'tree', and 'horizontal').
Rather than expensively annotating scene attributes, we transfer attributes information from existing scene segmentation datasets to the pedestrian dataset, by proposing a novel deep model to learn high-level features from multiple tasks and multiple data sources. Since distinct tasks have distinct convergence rates and data from different datasets have different distributions, a multi-task objective function is carefully designed to coordinate tasks and reduce discrepancies among datasets. The importance coefficients of tasks and network parameters in this objective function can be iteratively estimated. Extensive evaluations show that the proposed approach outperforms the state-of-the-art on the challenging Caltech and ETH datasets, where it reduces the miss rates of previous deep models by 17 and 5.5 percent, respectively.", "Part-based models have demonstrated their merit in object detection. However, there is a key issue to be solved on how to integrate the inaccurate scores of part detectors when there are occlusions or large deformations. To handle the imperfectness of part detectors, this paper presents a probabilistic pedestrian detection framework. In this framework, a deformable part-based model is used to obtain the scores of part detectors and the visibilities of parts are modeled as hidden variables. Unlike previous occlusion handling approaches that assume independence among visibility probabilities of parts or manually define rules for the visibility relationship, a discriminative deep model is used in this paper for learning the visibility relationship among overlapping parts at multiple layers. Experimental results on three public datasets (Caltech, ETH and Daimler) and a new CUHK occlusion dataset1 specially designed for the evaluation of occlusion handling approaches show the effectiveness of the proposed approach.", "Detecting pedestrians in cluttered scenes is a challenging problem in computer vision. 
The difficulty is added when several pedestrians overlap in images and occlude each other. We observe, however, that the occlusion visibility statuses of overlapping pedestrians provide useful mutual relationship for visibility estimation - the visibility estimation of one pedestrian facilitates the visibility estimation of another. In this paper, we propose a mutual visibility deep model that jointly estimates the visibility statuses of overlapping pedestrians. The visibility relationship among pedestrians is learned from the deep model for recognizing co-existing pedestrians. Experimental results show that the mutual visibility deep model effectively improves the pedestrian detection results. Compared with existing image-based pedestrian detection approaches, our approach has the lowest average miss rate on the Caltech-Train dataset, the Caltech-Test dataset and the ETH dataset. Including mutual visibility leads to 4%-8% improvements on multiple benchmark datasets.", "The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced, by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted as complexity aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. In result, the feature pool can be expanded to features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN).
This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs in their stages. This enables state of the art performance on the Caltech and KITTI datasets, at fairly fast speeds.", "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.", "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). 
Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11.89%, outperforming the second best method by 10%.", "In this paper, we address the challenging problem of detecting pedestrians who appear in groups and have interaction. A new approach is proposed for single-pedestrian detection aided by multi-pedestrian detection. A mixture model of multi-pedestrian detectors is designed to capture the unique visual cues which are formed by nearby multiple pedestrians but cannot be captured by single-pedestrian detectors. A probabilistic framework is proposed to model the relationship between the configurations estimated by single-and multi-pedestrian detectors, and to refine the single-pedestrian detection result with multi-pedestrian detection. It can integrate with any single-pedestrian detector without significantly increasing the computation load. 15 state-of-the-art single-pedestrian detection approaches are investigated on three widely used public datasets: Caltech, TUD-Brussels and ETH. Experimental results show that our framework significantly improves all these approaches.
The average improvement is 9% on the Caltech-Test dataset, 11% on the TUD-Brussels dataset and 17% on the ETH dataset in terms of average miss rate. The lowest average miss rate is reduced from 48% to 43% on the Caltech-Test dataset, from 55% to 50% on the TUD-Brussels dataset and from 51% to 41% on the ETH dataset." ] }
1704.02431
2952424733
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a novel cross-modality learning framework and it is based on two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping, modeling the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives as input an RGB image and outputs the detection results. In this way, features which are both discriminative and robust to bad illumination conditions are learned. Importantly, at test time, only the second pipeline is considered and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and it is competitive with previous methods on the popular Caltech dataset.
Other works focused on improving the computational times of CNN-based pedestrian detectors. For instance, Angelova et al. @cite_35 proposed the DeepCascade method, a cascade of deep neural networks, and demonstrated a considerable gain in terms of detection speed. An in-depth analysis of different deep network architectural choices for pedestrian detection was provided in @cite_28 . To our knowledge, none of these previous works considers multi-modal data or tackles the problem of pedestrian detection under adverse illumination conditions.
{ "cite_N": [ "@cite_28", "@cite_35" ], "mid": [ "2949493420", "2265127172" ], "abstract": [ "In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets historically underperform compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem into the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pre-training on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI dataset. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setup. Using additional data at training time our strongest convnet model is competitive even to detectors that use additional data (optical flow) at test time.", "We present a new real-time approach to object detection that exploits the efficiency of cascade classifiers with the accuracy of deep neural networks. Deep networks have been shown to excel at classification tasks, and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and very accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second. The resulting approach achieves a 26.2 average miss rate on the Caltech Pedestrian detection benchmark, which is competitive with the very best reported results. It is the first work we are aware of that achieves very high accuracy while running in real-time." ] }
1704.02431
2952424733
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a novel cross-modality learning framework and it is based on two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping, modeling the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives as input an RGB image and outputs the detection results. In this way, features which are both discriminative and robust to bad illumination conditions are learned. Importantly, at test time, only the second pipeline is considered and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and it is competitive with previous methods on the popular Caltech dataset.
Previous works have considered transferring information from other domains for constructing scene-specific pedestrian detectors. Wang et al. @cite_5 proposed an unsupervised approach where target samples are collected by exploiting contextual cues, such as motions and scene geometry. Then, a pedestrian detector is built by re-weighting labeled source samples, assigning more importance to samples more similar to the target data. This approach was later extended in @cite_20 to learn deep feature representations. Similarly, in @cite_7 a sample selection scheme to reduce the discrepancy between source and target distributions was presented. Our approach is substantially different, as we do not restrict our attention to adapting a generic model to a specific scene and we tackle the problem of transferring knowledge among different modalities.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_20" ], "mid": [ "2105557521", "1996303439", "" ], "abstract": [ "The performance of a generic pedestrian detector may drop significantly when it is applied to a specific scene due to mismatch between the source dataset used to train the detector and samples in the target scene. In this paper, we investigate how to automatically train a scene-specific pedestrian detector starting with a generic detector in video surveillance without further manually labeling any samples under a novel transfer learning framework. It tackles the problem from three aspects. (1) With a graphical representation and through exploring the indegrees from target samples to source samples, the source samples are properly re-weighted. The indegrees detect the boundary between the distributions of the source dataset and the target dataset. The re-weighted source dataset better matches the target scene. (2) It takes the context information from motions, scene structures and scene geometry as the confidence scores of samples from the target scene to guide transfer learning. (3) The confidence scores propagate among samples on a graph according to the underlying visual structures of samples. All these considerations are formulated under a single objective function called Confidence-Encoded SVM. At the test stage, only the appearance-based detector is used without the context cues. The effectiveness of the proposed framework is demonstrated through experiments on two video surveillance datasets. Compared with a generic pedestrian detector, it significantly improves the detection rate by 48 and 36 at one false positive per image on the two datasets respectively.", "Most of the existing methods for pedestrian detection work well, only when the following assumption is satisfied: the features extracted from the training dataset and the testing dataset have very similar distributions in the feature space. 
However, in practice, this assumption does not hold because of the scene complexity and variation. In this paper, a new method is proposed for detecting pedestrians in various scenes based on the transfer learning technique. Our proposed method employs the following two strategies for improving the pedestrian detection performance. First, a new sample screening method based on manifold learning is proposed. The basic idea is to choose samples from the training set, which may be similar to the samples from the unseen scene, and then merge the selected samples into the unseen set. Second, a new classification model based on transfer learning is proposed. The advantage of the classification model is that only a small number of samples need to be used from the unseen scenes. Most of the training samples are still obtained from the training scene, which take up to 90 of the entire training samples. Compared to the traditional pedestrian detection methods, the proposed algorithm can adapt to different scenes for detecting pedestrians. Experiments on two pedestrian detection benchmark datasets, DC and NICTA, showed that the method can obtain better performance as compared to other previous methods.", "" ] }
1704.02431
2952424733
This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a novel cross-modality learning framework and it is based on two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping, modeling the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives as input an RGB image and outputs the detection results. In this way, features which are both discriminative and robust to bad illumination conditions are learned. Importantly, at test time, only the second pipeline is considered and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and it is competitive with previous methods on the popular Caltech dataset.
In the last few years deep networks have been successfully applied to learning feature representations from multi-modal data @cite_18 @cite_21 @cite_32 . However, the problem of both learning and transferring cross-modal features has rarely been investigated. Notable exceptions are the works in @cite_37 @cite_31 @cite_23 @cite_44 @cite_13 . Among these, the most similar to ours are @cite_37 @cite_31 @cite_13 . In @cite_37 @cite_31 the idea of hallucinating data from other modalities was also exploited. However, our CNN-based approach is substantially different, since the work in @cite_31 considered Deep Boltzmann Machines, while in @cite_37 the mapping between different modalities was learned with Gaussian Processes. In @cite_13 the problem of object detection from RGB data was addressed and depth images were used as additional information available only at training time. Similarly to @cite_13 , our detection network simultaneously uses cross-modal features learned from a source domain and representations specific to the target scenario. However, in @cite_13 labeled data were available in the original domain. Conversely, our framework learns cross-modal features in an unsupervised setting and does not require any annotation in the thermal domain. In this way, it is possible to exploit huge multispectral datasets.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_21", "@cite_32", "@cite_44", "@cite_23", "@cite_31", "@cite_13" ], "mid": [ "1663639025", "2953276893", "2540481276", "2194550927", "2951874610", "2950276680", "154472438", "2463402750" ], "abstract": [ "In this paper we investigate the problem of exploiting multiple sources of information for object recognition tasks when additional modalities that are not present in the labeled training set are available for inference. This scenario is common to many robotics sensing applications and is in contrast with the assumption made by existing approaches that require at least some labeled examples for each modality. To leverage the previously unseen features, we make use of the unlabeled data to learn a mapping from the existing modalities to the new ones. This allows us to predict the missing data for the labeled examples and exploit all modalities using multiple kernel learning. We demonstrate the effectiveness of our approach on several multi-modal tasks including object recognition from multi-resolution imagery, grayscale and color images, as well as images and text. Our approach outperforms multiple kernel learning on the original modalities, as well as nearest-neighbor and bootstrapping schemes.", "We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. 
Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.", "Abstract Anomalous event detection is of utmost importance in intelligent video surveillance. Currently, most approaches for the automatic analysis of complex video scenes typically rely on hand-crafted appearance and motion features. However, adopting user defined representations is clearly suboptimal, as it is desirable to learn descriptors specific to the scene of interest. To cope with this need, in this paper we propose Appearance and Motion DeepNet (AMDN), a novel approach based on deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation ( early fusion ). Then, based on the learned features, multiple one-class SVM models are used to predict the anomaly scores of each input. Finally, a novel late fusion strategy is proposed to combine the computed scores and detect abnormal events. The proposed ADMN is extensively evaluated on publicly available video surveillance datasets including UCSD pedestian, Subway, and Train, showing competitive performance with respect to state of the art approaches.", "We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. 
While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state of the art approaches.", "In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers. Code, data and pre-trained models are available at this https URL", "This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. 
In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.", "Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.", "We present a modality hallucination architecture for training an RGB object detection model which incorporates depth side information at training time. 
Our convolutional hallucination network learns a new and complementary RGB image representation which is taught to mimic convolutional mid-level features from a depth network. At test time images are processed jointly through the RGB and hallucination networks to produce improved detection performance. Thus, our method transfers information commonly extracted from depth training data to a network which can extract that information from the RGB counterpart. We present results on the standard NYUDv2 dataset and report improvement on the RGB detection task." ] }
1704.01927
2951371116
We consider the problem of topology recognition in wireless (radio) networks modeled as undirected graphs. Topology recognition is a fundamental task in which every node of the network has to output a map of the underlying graph i.e., an isomorphic copy of it, and situate itself in this map. In wireless networks, nodes communicate in synchronous rounds. In each round a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. Nodes have labels which are (not necessarily different) binary strings. The length of a labeling scheme is the largest length of a label. We concentrate on wireless networks modeled by trees, and we investigate two problems. What is the shortest labeling scheme that permits topology recognition in all wireless tree networks of diameter @math and maximum degree @math ? What is the fastest topology recognition algorithm working for all wireless tree networks of diameter @math and maximum degree @math , using such a short labeling scheme? We are interested in deterministic topology recognition algorithms. For the first problem, we show that the minimum length of a labeling scheme allowing topology recognition in all trees of maximum degree @math is @math . For such short schemes, used by an algorithm working for the class of trees of diameter @math and maximum degree @math , we show almost matching bounds on the time of topology recognition: an upper bound @math , and a lower bound @math , for any constant @math .
Algorithmic problems in radio networks modeled as graphs have been studied for tasks such as broadcasting @cite_3 @cite_5 , gossiping @cite_3 @cite_14 , and leader election @cite_11 . In some cases @cite_3 @cite_14 the topology of the network was unknown; in others @cite_5 , nodes were assumed to have a labeled map of the network and could situate themselves in it.
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_3", "@cite_11" ], "mid": [ "2048454711", "2127694687", "2134516390", "2003002196" ], "abstract": [ "This paper concerns the communication primitives of broadcasting (one-to-all communication) and gossiping (all-to-all communication) in radio networks with known topology, i.e., where for each primitive the schedule of transmissions is precomputed based on full knowledge about the size and the topology of the network.The first part of the paper examines the two communication primitives in general graphs. In particular, it proposes a new (efficiently computable) deterministic schedule that uses O(D+Δ log n) time units to complete the gossiping task in any radio network with size n, diameter D and max-degree Δ. Our new schedule improves and simplifies the currently best known gossiping schedule, requiring time O(D+√[i+2]DΔ logi+1 n), for any network with the diameter D=Ω(logi+4n), where i is an arbitrary integer constant i ≥ 0, see [17]. For the broadcast task we deliver two new results: a deterministic efficient algorithm for computing a radio schedule of length D+O(log3 n), and a randomized algorithm for computing a radio schedule of length D+O(log2 n). These results improve on the best currently known D+O(log4 n) time schedule due to Elkin and Kortsarz [12].The second part of the paper focuses on radio communication in planar graphs, devising a new broadcasting schedule using fewer than 3D time slots. This result improves, for small values of D, on currently best known D+O(log3n) time schedule proposed by Elkin and Kortsarz in [12]. Our new algorithm should be also seen as the separation result between the planar and the general graphs with a small diameter due to the polylogarithmic inapproximability result in general graphs due to Elkin and Kortsarz, see [11].", "We study deterministic gossiping in ad hoc radio networks with large node labels. 
The labels (identifiers) of the nodes come from a domain of size N which may be much larger than the size n of the network (the number of nodes). Most of the work on deterministic communication has been done for the model with small labels which assumes N = O(n). A notable exception is Peleg's paper, where the problem of deterministic communication in ad hoc radio networks with large labels is raised and a deterministic broadcasting algorithm is proposed, which runs in O(n2log n) time for N polynomially large in n. The O(nlog2n)-time deterministic broadcasting algorithm for networks with small labels given by implies deterministic O(n log N log n)-time broadcasting and O(n2log2N log n)-time gossiping in networks with large labels. We propose two new deterministic gossiping algorithms for ad hoc radio networks with large labels, which are the first such algorithms with subquadratic time for polynomially large N. More specifically, we propose: a deterministic O(n3 2log2N log n)-time gossiping algorithm for directed networks; and a deterministic O(n log2N log2n)-time gossiping algorithm for undirected networks.", "We establish an O(nlog2n) upper bound on the time for deterministic distributed broadcasting in multi-hop radio networks with unknown topology. This nearly matches the known lower bound of Ω(n log n). The fastest previously known algorithm for this problem works in time O(n3 2). Using our broadcasting algorithm, we develop an O(n3 2log2n) algorithm for gossiping in the same network model.", "Abstract We address the fundamental distributed problem of leader election in ad hoc radio networks modeled as undirected graphs. A signal from a transmitting node reaches all neighbors but a message is received successfully by a node, if and only if exactly one of its neighbors transmits in this round. If two neighbors of a node transmit simultaneously in a given round, we say that a collision occurred at this node. 
Collision detection is the ability of nodes to distinguish a collision from silence. We show that collision detection speeds up leader election in arbitrary radio networks. Our main result is a deterministic leader election algorithm working in time O ( n ) in all n -node networks, if collision detection is available, while it is known that deterministic leader election requires time Ω ( n log n ) , even for complete networks, if there is no collision detection." ] }
1704.01806
2574014992
Significant efforts have been made to understand and document knowledge related to scientific measurements. Many of those efforts resulted in one or more high-quality ontologies that describe some aspects of scientific measurements, but not in a comprehensive and coherently integrated manner. For instance, we note that many of these high-quality ontologies are not properly aligned, and more challenging, that they have different and often conflicting concepts and approaches for encoding knowledge about empirical measurements. As a result of this lack of an integrated view, it is often challenging for scientists to determine whether any two scientific measurements were taken in semantically compatible manners, thus making it difficult to decide whether measurements should be analyzed in combination or not. In this paper, we present the Human-Aware Sensor Network Ontology that is a comprehensive alignment and integration of a sensing infrastructure ontology and a provenance ontology. HASNetO has been under development for more than one year, and has been reviewed, shared and used by multiple scientific communities. The ontology has been in use to support the data management of a number of large-scale ecological monitoring activities (observations) and empirical experiments.
The concept of Observation data is treated in the literature @cite_3 @cite_15 @cite_9 as data obtained while sensing some property of an entity from the real world. The result of an observation is a value for that property @cite_5 . Content annotation is crucial when dealing with observation data, for instance to document data quality and, more specifically, to differentiate measurements coming from distinct data collections (i.e., distinct calibrations, settings, etc.). It enables some level of interoperability and discoverability, making the data easier to use. To leverage this potential, several approaches exist both to model the infrastructure that generates the data and to describe data content and context.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_3", "@cite_5" ], "mid": [ "1598795547", "1570393125", "1494718765", "281617609" ], "abstract": [ "Geographic information is based on observations or measurements. The Open Geospatial Consortium (OGC) has developed an implementation specification for observations and measurements (O&M). It specifies precisely how to encode information. Yet, the O&M conceptual model does not specify precisely which real-world entities are denoted by the specified information objects. We provide formal semantics for the central O&M terms by aligning them to the foundational ontology DOLCE. The alignment to a foundational ontology restricts the possible interpretations of the central elements in the O&M model and establishes explicit relations between categories of real world entities and classes of information objects. These relations are essential for assessing semantic interoperability between geospatial information sources.", "The understanding of complex environmental phenomena, such as deforestation and epidemics, requires observations at multiple scales. This scale dependency is not handled well by today's rather technical sensor definitions. Geosensor networks are normally defined as distributed ad-hoc wireless networks of computing platforms serving to monitor phenomena in geographic space. Such definitions also do not admit animals as sensors. Consequently, they exclude human sensors, which are the key to volunteered geographic information, and they fail to support connections between phenomena observed at multiple scales. We propose definitions of sensors as information sources at multiple aggregation levels, relating physical stimuli to observations. An algebraic formalization shows their behavior as well as their aggregations and generalizations. 
It is intended as a basis for defining consistent application programming interfaces to sense the environment at multiple scales of observations and with different types of sensors.", "Days of Yore Naturalism Reification Checkpoints and Empirical Content Logic and Mathematics Denotation and Truth Semantic Agreement Things of the Mind Appendix: Predicate Functors References Index", "Being a part of the Information Age, users are challenged with a tremendously growing amount of Web data which generates a need for more sophisticated information retrieval systems. The Semantic Web provides necessary procedures to augment the highly unstructured Web with suitable metadata in order to leverage search quality and user experience. In this article, we will outline an approach for creating a web-scale, precise and efficient information system capable of understanding keyword, entity and natural language queries. By using Semantic Web methods and Linked Data the doctoral work will present how the underlying knowledge is created and elaborated searches can be performed on top." ] }
1704.01733
2950740990
The notion of entropy is shared between statistics and thermodynamics, and is fundamental to both disciplines. This makes statistical problems particularly suitable for reaction network implementations. In this paper we show how to perform a statistical operation known as Information Projection or E projection with stochastic mass-action kinetics. Our scheme encodes desired conditional distributions as the equilibrium distributions of reaction systems. To our knowledge this is a first scheme to exploit the inherent stochasticity of reaction networks for information processing. We apply this to the problem of an artificial cell trying to infer its environment from partial observations.
The link between statistics/machine learning and reaction networks has been explored before by Napp and Adams @cite_8 . They propose a deterministic mass-action reaction network scheme to compute single-variable marginals from a joint distribution given as a factor graph, drawing on ``message-passing'' schemes. Our work is in the same spirit of finding more connections between machine learning and reaction networks, but the nature of the problem we are trying to solve is different. We are trying to estimate a full distribution from partial observations. In doing so, we exploit the inherent stochasticity of reaction networks to represent correlations and perform Bayesian inference.
{ "cite_N": [ "@cite_8" ], "mid": [ "2166918926" ], "abstract": [ "Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. We show algebraically that the steady state concentration of these species correspond to the marginal distributions of the random variables in the graph and validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs, and approximate solutions for loopy graphs." ] }
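As an illustration of the computation that such CRN ``message-passing'' schemes carry out, the following sketch computes a single-variable marginal from a joint distribution given as a product of factors. The two-variable binary factor graph and its factor values are invented for illustration; the sum-product step is done directly in NumPy rather than by a reaction network:

```python
import numpy as np

# Toy factor graph over two binary variables x and y:
# p(x, y) is proportional to f1(x) * f2(x, y). The CRN scheme of Napp and
# Adams computes single-variable marginals of such joints; here we compute
# the same marginal directly by summing out x (sum-product).
f1 = np.array([0.6, 0.4])                # unary factor on x
f2 = np.array([[0.9, 0.1],               # pairwise factor f2(x, y)
               [0.2, 0.8]])

joint = f1[:, None] * f2                 # unnormalized joint p(x, y)
joint /= joint.sum()                     # normalize

p_y = joint.sum(axis=0)                  # marginal over y (x summed out)
print(p_y)                               # [0.62, 0.38]
```

With these factor values the marginal of y is (0.62, 0.38), obtained exactly as the column sums of the normalized joint.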
1704.01733
2950740990
The notion of entropy is shared between statistics and thermodynamics, and is fundamental to both disciplines. This makes statistical problems particularly suitable for reaction network implementations. In this paper we show how to perform a statistical operation known as Information Projection or E projection with stochastic mass-action kinetics. Our scheme encodes desired conditional distributions as the equilibrium distributions of reaction systems. To our knowledge this is a first scheme to exploit the inherent stochasticity of reaction networks for information processing. We apply this to the problem of an artificial cell trying to infer its environment from partial observations.
One previous work that engages with stochasticity in reaction networks is that of @cite_20 . They give a reaction scheme that takes an arbitrary finite probability distribution and encodes it in the stationary distribution of a reaction system. In comparison, we take samples from a marginal distribution and encode the full distribution in terms of the stationary distribution. Thus our scheme allows us to perform conditioning and inference.
{ "cite_N": [ "@cite_20" ], "mid": [ "2271750763" ], "abstract": [ "We explore the range of probabilistic behaviours that can be engineered with Chemical Reaction Networks (CRNs). We show that at steady state CRNs are able to \"program\" any distribution with finite support in @math , with @math . Moreover, any distribution with countable infinite support can be approximated with arbitrarily small error under the @math norm. We also give optimized schemes for special distributions, including the uniform distribution. Finally, we formulate a calculus to compute on distributions that is complete for finite support distributions, and can be compiled to a restricted class of CRNs that at steady state realize those distributions." ] }
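For intuition on how a distribution can be encoded in a stationary distribution, here is a minimal Gillespie-style simulation of a single molecule switching between two states A and B. This toy system (not the scheme of @cite_20; the rates are arbitrary) has stationary probability P(A) = b/(a+b), which the time-averaged occupancy estimates:

```python
import random

def gillespie_two_state(a, b, t_max, seed=0):
    """Simulate one molecule switching A -> B (rate a) and B -> A (rate b),
    returning the fraction of time spent in state A (time-weighted)."""
    rng = random.Random(seed)
    state, t, time_in_a = 'A', 0.0, 0.0
    while t < t_max:
        rate = a if state == 'A' else b
        dt = rng.expovariate(rate)       # exponential waiting time to next switch
        dt = min(dt, t_max - t)          # clip the final interval at t_max
        if state == 'A':
            time_in_a += dt
        t += dt
        state = 'B' if state == 'A' else 'A'
    return time_in_a / t_max

# Stationary distribution: P(A) = b / (a + b) = 0.75 for a = 1, b = 3
frac_a = gillespie_two_state(a=1.0, b=3.0, t_max=20000.0)
print(frac_a)
```

With a long enough horizon the time-averaged occupancy concentrates tightly around 0.75, which is the "encoded" Bernoulli probability.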
1704.01733
2950740990
The notion of entropy is shared between statistics and thermodynamics, and is fundamental to both disciplines. This makes statistical problems particularly suitable for reaction network implementations. In this paper we show how to perform a statistical operation known as Information Projection or E projection with stochastic mass-action kinetics. Our scheme encodes desired conditional distributions as the equilibrium distributions of reaction systems. To our knowledge this is a first scheme to exploit the inherent stochasticity of reaction networks for information processing. We apply this to the problem of an artificial cell trying to infer its environment from partial observations.
In Gopalkrishnan @cite_2 , one of the present authors proposed a molecular scheme for Maximum Likelihood Estimation in log-linear models. The reaction networks employed in that work are essentially identical to those employed here, modulo some minor technical differences. In that paper, the reaction networks were used to obtain M-projections (or reverse I-projections), and thereby to solve for Maximum Likelihood Estimators. In this paper, we obtain E-projections and sample from conditional distributions. The results in that paper hold purely at the level of deterministic mass-action kinetics, whereas the results in this paper hold at the level of stochastic behavior.
{ "cite_N": [ "@cite_2" ], "mid": [ "1184690218" ], "abstract": [ "We propose a novel molecular computing scheme for statistical inference. We focus on the much-studied statistical inference problem of computing maximum likelihood estimators for log-linear models. Our scheme takes log-linear models to reaction systems, and the observed data to initial conditions, so that the corresponding equilibrium of each reaction system encodes the corresponding maximum likelihood estimator. The main idea is to exploit the coincidence between thermodynamic entropy and statistical entropy. We map a Maximum Entropy characterization of the maximum likelihood estimator onto a Maximum Entropy characterization of the equilibrium concentrations for the reaction system. This allows for an efficient encoding of the problem, and reveals that reaction networks are superbly suited to statistical inference tasks. Such a scheme may also provide a template to understanding how in vivo biochemical signaling pathways integrate extensive information about their environment and history." ] }
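The following sketch shows deterministic mass-action kinetics relaxing to equilibrium for the toy reversible reaction X ⇌ Y. It only illustrates the kind of equilibrium encoding such schemes build on (at equilibrium, detailed balance gives y/x = kf/kb); the rate constants and the forward-Euler integration are illustrative choices, not the scheme of @cite_2:

```python
# Deterministic mass-action kinetics for the reversible reaction X <-> Y,
# integrated with forward Euler. At the fixed point the net flux vanishes,
# so y / x = kf / kb; with kf = 2, kb = 1 and total mass 1 the equilibrium
# is x = 1/3, y = 2/3.
kf, kb = 2.0, 1.0          # forward and backward rate constants
x, y = 1.0, 0.0            # initial concentrations (x + y is conserved)
dt = 1e-3
for _ in range(100000):
    flux = kf * x - kb * y # net rate of X -> Y under mass-action kinetics
    x -= flux * dt
    y += flux * dt
print(x, y)                # approaches x = 1/3, y = 2/3
```

The total concentration x + y is conserved at every step, and the trajectory relaxes exponentially (rate kf + kb) to the equilibrium that encodes the ratio kf/kb.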
1704.01719
2952976870
Person re-identification (ReID) is an important task in wide area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss become a common framework for person ReID. However, the triplet loss pays main attentions on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to the model output with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has a better generalization ability and can achieve a higher performance on the testing set. In particular, a quadruplet deep network using a margin-based online hard negative mining is proposed based on the quadruplet loss for the person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets which clearly demonstrates the effectiveness of our proposed method.
Most existing methods in person ReID focus on either feature extraction @cite_35 @cite_0 @cite_40 @cite_2 @cite_8 or similarity measurement @cite_18 @cite_44 @cite_43 @cite_13 . Commonly used person image descriptors include color histograms @cite_10 @cite_18 @cite_16 , local binary patterns @cite_10 , Gabor features @cite_18 , etc., which show certain robustness to variations in pose, illumination, and viewpoint. For similarity measurement, many metric learning approaches have been proposed to learn a suitable metric, such as locally adaptive decision functions @cite_11 , local Fisher discriminant analysis @cite_31 , and cross-view quadratic discriminant analysis @cite_20 . However, manually crafted features and metrics are usually not optimal for coping with large intra-class variations.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_8", "@cite_10", "@cite_0", "@cite_44", "@cite_40", "@cite_43", "@cite_2", "@cite_31", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "2047632871", "2604912015", "2068042582", "2203864774", "1709635438", "", "", "", "", "", "2496888427", "1949591461", "" ], "abstract": [ "", "In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information theoretical regularization. This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.", "", "In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. 
We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.", "We propose a novel Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) framework for person re-identification. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information to improve re-identification accuracy. Both low level features and semantic data-driven attributes are utilized. Since attributes are generally correlated, we introduce a low rank attribute embedding into the MTL formulation to embed original binary attributes to a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered to better describe people. The learning objective function consists of a quadratic loss regarding class labels and an attribute embedding error, which is solved by an alternating optimization procedure. Experiments on three person re-identification datasets have demonstrated that MTL-LORAE outperforms existing approaches by a large margin and produces state-of-the-art results.", "This paper addresses the problem of handling spatial misalignments due to camera-view changes or human-pose variations in person re-identification. We first introduce a boosting-based approach to learn a correspondence structure which indicates the patch-wise matching probabilities between images from a target camera pair. 
The learned correspondence structure can not only capture the spatial correspondence pattern between cameras but also handle the viewpoint or human-pose variation in individual images. We further introduce a global-based matching process. It integrates a global matching constraint over the learned correspondence structure to exclude cross-view misalignments during the image patch matching process, hence achieving a more reliable matching score between images. Experimental results on various datasets demonstrate the effectiveness of our approach.", "", "", "", "", "", "A variety of encoding methods for bag of word (BoW) model have been proposed to encode the local features in image classification. However, most of them are unsupervised and just employ k-means to form the visual vocabulary, thus reducing the discriminative power of the features. In this paper, we propose a metric embedded discriminative vocabulary learning for high-level person representation with application to person re-identification. A new and effective term is introduced which aims at making the same persons closer while different ones farther in the metric space. With the learned vocabulary, we utilize a linear coding method to encode the image-level features (or holistic image features) for extracting high-level person representation. Different from traditional unsupervised approaches, our method can explore the relationship (same or not) among the persons. Since there is an analytic solution to the linear coding, it is easy to obtain the final high-level features. The experimental results on person reidentification demonstrate the effectiveness of our proposed algorithm.", "Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. 
An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively.", "" ] }
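The metric learning approaches cited in this section (KISSME-style and XQDA-style methods among them) ultimately score an image pair with a Mahalanobis-form distance d(x, y) = (x - y)^T M (x - y) under a learned matrix M. A minimal sketch of that scoring step follows; the toy feature vectors and the choice M = I are invented for illustration (a learned M would come from training data):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared distance (x - y)^T M (x - y) under a learned metric M."""
    d = x - y
    return float(d @ M @ d)

# Toy features: two images of the same person and one of a different person.
anchor   = np.array([1.0, 2.0])
positive = np.array([1.1, 2.1])
negative = np.array([3.0, 0.5])

M = np.eye(2)  # with M = identity this reduces to squared Euclidean distance
d_pos = mahalanobis_sq(anchor, positive, M)
d_neg = mahalanobis_sq(anchor, negative, M)
print(d_pos, d_neg)   # a good metric should give d_pos < d_neg
```

Metric learning amounts to choosing M (often positive semidefinite, possibly on a learned subspace as in XQDA) so that same-person pairs score small and different-person pairs score large.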
1704.01372
2606993052
Large amount of image denoising literature focuses on single channel images and often experimentally validates the proposed methods on tens of images at most. In this paper, we investigate the interaction between denoising and classification on large scale dataset. Inspired by classification models, we propose a novel deep learning architecture for color (multichannel) image denoising and report on thousands of images from ImageNet dataset as well as commonly used imagery. We study the importance of (sufficient) training data, how semantic class information can be traded for improved denoising results. As a result, our method greatly improves PSNR performance by 0.34 - 0.51 dB on average over state-of-the art methods on large scale dataset. We conclude that it is beneficial to incorporate in classification models. On the other hand, we also study how noise affect classification performance. In the end, we come to a number of interesting conclusions, some being counter-intuitive.
In the realm of image denoising, the self-similarities found in a natural image are widely exploited by state-of-the-art methods such as the block matching and 3D collaborative filtering method (BM3D) of Dabov et al. @cite_22 and its color version CBM3D @cite_30 . The main idea is to group image patches which are similar in shape and texture. (C)BM3D collaboratively filters the patch groups by shrinkage in a 3D transform domain to produce a sparse representation of the true signal in each group. Later, Rajwade et al. @cite_19 applied the same idea and grouped similar patches from a noisy image into a 3D stack to then compute the higher-order singular value decomposition (HOSVD) coefficients of this stack. Finally, they inverted the HOSVD transform to obtain the clean image. HOSVD has a high time complexity, which makes the method very slow @cite_19 .
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_22" ], "mid": [ "2047710600", "2154011501", "2056370875" ], "abstract": [ "We propose an effective color image denoising method that exploits filtering in highly sparse local 3D transform domain in each channel of a luminance-chrominance color space. For each image block in each channel, a 3D array is formed by stacking together blocks similar to it, a process that we call \"grouping\". The high similarity between grouped blocks in each 3D array enables a highly sparse representation of the true signal in a 3D transform domain and thus a subsequent shrinkage of the transform spectra results in effective noise attenuation. The peculiarity of the proposed method is the application of a \"grouping constraint\" on the chrominances by reusing exactly the same grouping as for the luminance. The results demonstrate the effectiveness of the proposed grouping constraint and show that the developed denoising algorithm achieves state-of-the-art performance in terms of both peak signal-to-noise ratio and visual quality.", "In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. 
On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.", "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality." ] }
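The ``grouping'' step shared by (C)BM3D and the HOSVD method can be sketched as a brute-force block-matching search that stacks the patches most similar to a reference patch into a 3D array. This toy version is illustrative only: the patch size, similarity measure, and random test image are arbitrary choices, not the papers' exact settings:

```python
import numpy as np

def group_similar_patches(img, ref_xy, patch=4, k=8):
    """Collect the k patches most similar (in squared L2 distance) to the
    reference patch at ref_xy, stacking them into a 3D array, i.e. the
    'grouping' step of BM3D-style methods."""
    ry, rx = ref_xy
    ref = img[ry:ry + patch, rx:rx + patch]
    candidates = []
    h, w = img.shape
    for y in range(h - patch + 1):          # exhaustive search over positions
        for x in range(w - patch + 1):
            p = img[y:y + patch, x:x + patch]
            candidates.append((np.sum((p - ref) ** 2), y, x))
    candidates.sort(key=lambda c: c[0])     # most similar patches first
    return np.stack([img[y:y + patch, x:x + patch]
                     for _, y, x in candidates[:k]])

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
stack = group_similar_patches(img, ref_xy=(4, 4))
print(stack.shape)   # (8, 4, 4): 8 similar patches of size 4x4
```

In the full pipelines, each such stack is then transformed (3D transform in BM3D, HOSVD in Rajwade et al.), shrunk, inverted, and aggregated back into the image.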
1704.01372
2606993052
Large amount of image denoising literature focuses on single channel images and often experimentally validates the proposed methods on tens of images at most. In this paper, we investigate the interaction between denoising and classification on large scale dataset. Inspired by classification models, we propose a novel deep learning architecture for color (multichannel) image denoising and report on thousands of images from ImageNet dataset as well as commonly used imagery. We study the importance of (sufficient) training data, how semantic class information can be traded for improved denoising results. As a result, our method greatly improves PSNR performance by 0.34 - 0.51 dB on average over state-of-the art methods on large scale dataset. We conclude that it is beneficial to incorporate in classification models. On the other hand, we also study how noise affect classification performance. In the end, we come to a number of interesting conclusions, some being counter-intuitive.
Nowadays most visual data are tensors (e.g., color images and video) rather than matrices (e.g., grayscale images). Although traditional CNN models with 2D spatial filters were long considered sufficient and achieved good results, in certain scenarios higher-dimensional filters become necessary to extract important features from tensors. Ji et al. @cite_26 introduced a CNN with 3-dimensional filters (3DCNN) and demonstrated superior performance to the traditional 2D CNN on two action recognition benchmarks. In their 3DCNN model, the output @math of a feature map at position @math in the @math -th CNN layer is computed as a 3D convolution over adjacent frames, where the temporal and spatial sizes of the kernel are @math and @math , respectively.
{ "cite_N": [ "@cite_26" ], "mid": [ "1983364832" ], "abstract": [ "We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods." ] }
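The 3D convolution underlying the 3DCNN output computation described above can be sketched as a naive single-channel 'valid' convolution (as in CNN practice, implemented as cross-correlation; bias, nonlinearity, and multi-channel handling omitted):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive single-channel 3D convolution ('valid' padding): each output
    value is the sum, over the kernel's temporal and spatial extents, of
    kernel weight times input value -- the core operation of a 3D CNN
    layer (bias and nonlinearity omitted)."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

video = np.ones((4, 5, 5))        # 4 frames of a 5x5 "video"
kernel = np.ones((2, 3, 3))       # temporal size 2, spatial size 3x3
out = conv3d_valid(video, kernel)
print(out.shape)                  # (3, 3, 3); every value is 2*3*3 = 18
```

Because the kernel also spans the temporal axis, each output value mixes information from adjacent frames, which is what lets 3DCNNs capture motion that a purely spatial 2D filter misses.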
1704.01235
2950478799
Research on automated image enhancement has gained momentum in recent years, partially due to the need for easy-to-use tools for enhancing pictures captured by ubiquitous cameras on mobile devices. Many of the existing leading methods employ machine-learning-based techniques, by which some enhancement parameters for a given image are found by relating the image to the training images with known enhancement parameters. While knowing the structure of the parameter space can facilitate search for the optimal solution, none of the existing methods has explicitly modeled and learned that structure. This paper presents an end-to-end, novel joint regression and ranking approach to model the interaction between desired enhancement parameters and images to be processed, employing a Gaussian process (GP). GP allows searching for ideal parameters using only the image features. The model naturally leads to a ranking technique for comparing images in the induced feature space. Comparative evaluation using the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on an additional data-set were used to demonstrate the effectiveness of the proposed approach.
Automated image enhancement has recently been an active research area, and various solutions have been proposed for this task. We review those works which aim to improve the visual appeal of an image using automated techniques. A novel tone operator was proposed to solve the tone reproduction problem @cite_20 . A database named MIT-Adobe FiveK, consisting of corresponding low- and high-quality images, was published in @cite_25 ; its authors also proposed an algorithm for the problem of global tonal adjustment, which manipulates only the luminance channel. In @cite_2 , an approach was presented focusing on correcting images containing faces: the authors built a system that aligns faces between a ``good'' and a ``bad'' photo and then uses the good faces to correct the bad ones.
{ "cite_N": [ "@cite_25", "@cite_20", "@cite_2" ], "mid": [ "2025328853", "2240434622", "2160777052" ], "abstract": [ "Adjusting photographs to obtain compelling renditions requires skill and time. Even contrast and brightness adjustments are challenging because they require taking into account the image content. Photographers are also known for having different retouching preferences. As the result of this complexity, rule-based, one-size-fits-all automatic techniques often fail. This problem can greatly benefit from supervised machine learning but the lack of training data has impeded work in this area. Our first contribution is the creation of a high-quality reference dataset. We collected 5,000 photos, manually annotated them, and hired 5 trained photographers to retouch each picture. The result is a collection of 5 sets of 5,000 example input-output pairs that enable supervised learning. We first use this dataset to predict a user's adjustment from a large training set. We then show that our dataset and features enable the accurate adjustment personalization using a carefully chosen set of training photos. Finally, we introduce difference learning: this method models and predicts difference between users. It frees the user from using predetermined photos for training. We show that difference learning enables accurate prediction using only a handful of examples.", "A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who map digital images to a low dynamic range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and produces good results for a wide variety of images.", "We describe a framework for improving the quality of personal photos by using a person's favorite photographs as examples. We observe that the majority of a person's photographs include the faces of a photographer's family and friends and often the errors in these photographs are the most disconcerting. We focus on correcting these types of images and use common faces across images to automatically perform both global and face-specific corrections. Our system achieves this by using face detection to align faces between “good” and “bad” photos such that properties of the good examples can be used to correct a bad photo. These “personal” photos provide strong guidance for a number of operations and, as a result, enable a number of high-quality image processing operations. We illustrate the power and generality of our approach by presenting a novel deblurring algorithm, and we show corrections that perform sharpening, superresolution, in-painting of over- and underexposured regions, and white-balancing." ] }
1704.01235
2950478799
Research on automated image enhancement has gained momentum in recent years, partially due to the need for easy-to-use tools for enhancing pictures captured by ubiquitous cameras on mobile devices. Many of the existing leading methods employ machine-learning-based techniques, by which some enhancement parameters for a given image are found by relating the image to the training images with known enhancement parameters. While knowing the structure of the parameter space can facilitate search for the optimal solution, none of the existing methods has explicitly modeled and learned that structure. This paper presents an end-to-end, novel joint regression and ranking approach to model the interaction between desired enhancement parameters and images to be processed, employing a Gaussian process (GP). GP allows searching for ideal parameters using only the image features. The model naturally leads to a ranking technique for comparing images in the induced feature space. Comparative evaluation using the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on an additional data-set were used to demonstrate the effectiveness of the proposed approach.
Content-aware enhancement approaches, which aim to improve a specific image region, have also been developed; examples include @cite_23 @cite_4 . A drawback of these is their reliance on obtaining segmented regions to be enhanced, which itself may prove difficult. Pixel-level enhancement has been performed using local scene descriptors. First, images similar to the input are retrieved from the training set. Then, for each pixel in the input, a set of pixels is retrieved from the training set and used to improve the input pixel. Finally, Gaussian random fields maintain spatial smoothness in the enhanced image. This approach does not take the global information of an image into account, and hence the local adjustments may not look right when viewed globally. A deep-learning-based approach was presented in @cite_19 . In @cite_3 , users were required to enhance a small number of images to augment the existing training data.
{ "cite_N": [ "@cite_19", "@cite_3", "@cite_4", "@cite_23" ], "mid": [ "996591170", "", "2142013027", "1970369748" ], "abstract": [ "Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Because of these characteristics, existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep machine learning has shown unique abilities to address hard problems that resisted machine algorithms for long. This motivated us to explore the use of deep learning in the context of photo editing. In this paper, we explain how to formulate the automatic photo adjustment problem in a way suitable for this approach. We also introduce an image descriptor that accounts for the local semantics of an image. Our experiments demonstrate that our deep learning formulation applied using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on the image semantics. We show on several examples that this yields results that are qualitatively and quantitatively better than previous work.", "", "Automatic photo enhancement is one of the long-standing goals in image processing and computational photography. While a variety of methods have been proposed for manipulating tone and colour, most automatic methods used in practice, operate on the entire image without attempting to take the content of the image into account. In this paper, we present a new framework for automatic photo enhancement that attempts to take local and global image semantics into account. Specifically, our content-aware scheme attempts to detect and enhance the appearance of human faces, blue skies with or without clouds and underexposed salient regions. A user study was conducted that demonstrates the effectiveness of the proposed approach compared to existing auto-enhancement tools. © 2012 Wiley Periodicals, Inc.", "We present a framework for generating content-adaptive macros that can transfer complex photo manipulations to new target images. We demonstrate applications of our framework to face, landscape, and global manipulations. To create a content-adaptive macro, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to learn the dependencies between image features and the parameters of each selection, brush stroke, and image processing operation in the macro. Although our approach is limited to learning manipulations where there is a direct dependency between image features and operation parameters, we show that our framework is able to learn a large class of the most commonly used manipulations using as few as 20 training demonstrations. Our framework also provides interactive controls to help macro authors and users generate training demonstrations and correct errors due to incorrect labeling or poor parameter estimation. We ask viewers to compare images generated using our content-adaptive macros with and without corrections to manually generated ground-truth images and find that they consistently rate both our automatic and corrected results as close in appearance to the ground truth. We also evaluate the utility of our proposed macro generation workflow via a small informal lab study with professional photographers. The study suggests that our workflow is effective and practical in the context of real-world photo editing." ] }
1704.01235
2950478799
Research on automated image enhancement has gained momentum in recent years, partially due to the need for easy-to-use tools for enhancing pictures captured by ubiquitous cameras on mobile devices. Many of the existing leading methods employ machine-learning-based techniques, by which some enhancement parameters for a given image are found by relating the image to the training images with known enhancement parameters. While knowing the structure of the parameter space can facilitate search for the optimal solution, none of the existing methods has explicitly modeled and learned that structure. This paper presents an end-to-end, novel joint regression and ranking approach to model the interaction between desired enhancement parameters and images to be processed, employing a Gaussian process (GP). GP allows searching for ideal parameters using only the image features. The model naturally leads to a ranking technique for comparing images in the induced feature space. Comparative evaluation using the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on an additional data-set were used to demonstrate the effectiveness of the proposed approach.
Two closely related recent works train a ranking model from low- and high-quality image pairs @cite_14 @cite_0 . In a recent state-of-the-art method @cite_14 , a dataset of @math corresponding image pairs was reported, in which even the intermediate enhancement steps are recorded. A ranking model trained with this information can quantify the (enhancement) quality of an image. In @cite_0 , non-corresponding low- and high-quality image pairs were used to train a ranking model. Both approaches use @math NN search at test time to first create a pool of candidate images. After extracting features and ranking all of them, the best image is presented to the user.
{ "cite_N": [ "@cite_0", "@cite_14" ], "mid": [ "2950482945", "2113636985" ], "abstract": [ "Personalized and content-adaptive image enhancement can find many applications in the age of social media and mobile computing. This paper presents a relative-learning-based approach, which, unlike previous methods, does not require matching original and enhanced images for training. This allows the use of massive online photo collections to train a ranking model for improved enhancement. We first propose a multi-level ranking model, which is learned from only relatively-labeled inputs that are automatically crawled. Then we design a novel parameter sampling scheme under this model to generate the desired enhancement parameters for a new image. For evaluation, we first verify the effectiveness and the generalization abilities of our approach, using images that have been enhanced labeled by experts. Then we carry out subjective tests, which show that users prefer images enhanced by our approach over other existing methods.", "We present a machine-learned ranking approach for automatically enhancing the color of a photograph. Unlike previous techniques that train on pairs of images before and after adjustment by a human user, our method takes into account the intermediate steps taken in the enhancement process, which provide detailed information on the person's color preferences. To make use of this data, we formulate the color enhancement task as a learning-to-rank problem in which ordered pairs of images are used for training, and then various color enhancements of a novel input image can be evaluated from their corresponding rank values. From the parallels between the decision tree structures we use for ranking and the decisions made by a human during the editing process, we posit that breaking a full enhancement sequence into individual steps can facilitate training. Our experiments show that this approach compares well to existing methods for automatic color enhancement." ] }
1704.01235
2950478799
Research on automated image enhancement has gained momentum in recent years, partially due to the need for easy-to-use tools for enhancing pictures captured by ubiquitous cameras on mobile devices. Many of the existing leading methods employ machine-learning-based techniques, by which some enhancement parameters for a given image are found by relating the image to the training images with known enhancement parameters. While knowing the structure of the parameter space can facilitate search for the optimal solution, none of the existing methods has explicitly modeled and learned that structure. This paper presents an end-to-end, novel joint regression and ranking approach to model the interaction between desired enhancement parameters and images to be processed, employing a Gaussian process (GP). GP allows searching for ideal parameters using only the image features. The model naturally leads to a ranking technique for comparing images in the induced feature space. Comparative evaluation using the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on an additional data-set were used to demonstrate the effectiveness of the proposed approach.
We now briefly review Gaussian process (GP) based methods relevant in this context. GPs have been used effectively to obtain good performance in applications where complex relationships must be learned from a small amount of data (on the order of several hundred examples) @cite_1 . In @cite_5 , a GP was used for view-invariant facial expression recognition: a GP latent variable model learned a discriminative feature space with an LDA prior, so that examples from similar classes are projected nearby. In @cite_13 , GP regression was used to map non-frontal facial points to the frontal view; facial expression recognition methods can then be applied to these projected frontal-view points. Coupled GPs have been used to capture dependencies between the mappings learned from non-frontal to frontal poses, which improves facial expression recognition performance @cite_8 .
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_1", "@cite_8" ], "mid": [ "2126234799", "2104563967", "", "2138206939" ], "abstract": [ "We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a state-of-the-art facial expression recognition method. To learn the mapping functions we investigate four regression models: Linear Regression (LR), Support Vector Regression (SVR), Relevance Vector Regression (RVR) and Gaussian Process Regression (GPR). Our extensive experiments on the CMU Multi-PIE facial expression database show that the proposed scheme outperforms view-specific classifiers by utilizing considerably less training data.", "Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.", "", "We propose a method for head-pose invariant facial expression recognition that is based on a set of characteristic facial points. To achieve head-pose invariance, we propose the Coupled Scaled Gaussian Process Regression (CSGPR) model for head-pose normalization. In this model, we first learn independently the mappings between the facial points in each pair of (discrete) nonfrontal poses and the frontal pose, and then perform their coupling in order to capture dependences between them. During inference, the outputs of the coupled functions from different poses are combined using a gating function, devised based on the head-pose estimation for the query points. The proposed model outperforms state-of-the-art regression-based approaches to head-pose normalization, 2D and 3D Point Distribution Models (PDMs), and Active Appearance Models (AAMs), especially in cases of unknown poses and imbalanced training data. To the best of our knowledge, the proposed method is the first one that is able to deal with expressive faces in the range from @math to @math pan rotation and @math to @math tilt rotation, and with continuous changes in head pose, despite the fact that training was conducted on a small set of discrete poses. We evaluate the proposed method on synthetic and real images depicting acted and spontaneously displayed facial expressions." ] }
1704.01249
2952251792
Social networking on mobile devices has become a commonplace of everyday life. In addition, photo capturing process has become trivial due to the advances in mobile imaging. Hence people capture a lot of photos everyday and they want them to be visually-attractive. This has given rise to automated, one-touch enhancement tools. However, the inability of those tools to provide personalized and content-adaptive enhancement has paved way for machine-learned methods to do the same. The existing typical machine-learned methods heuristically (e.g. kNN-search) predict the enhancement parameters for a new image by relating the image to a set of similar training images. These heuristic methods need constant interaction with the training images which makes the parameter prediction sub-optimal and computationally expensive at test time which is undesired. This paper presents a novel approach to predicting the enhancement parameters given a new image using only its features, without using any training images. We propose to model the interaction between the image features and its corresponding enhancement parameters using the matrix factorization (MF) principles. We also propose a way to integrate the image features in the MF formulation. We show that our approach outperforms heuristic approaches as well as recent approaches in MF and structured prediction on synthetic as well as real-world data of image enhancement.
Development of machine-learned image enhancement systems has recently been an active research area of immense practical significance, and various approaches have been put forward for this task. We review those works which improve the visual appearance of an image using automated techniques. To encourage research in this field, a database named MIT-Adobe FiveK, containing corresponding low- and high-quality images, was introduced in @cite_22 . The authors also proposed an algorithm to solve the problem of global tonal adjustment. The tone adjustment problem manipulates only the luminance channel, whereas we manipulate the saturation, brightness, and contrast of an image.
{ "cite_N": [ "@cite_22" ], "mid": [ "2025328853" ], "abstract": [ "Adjusting photographs to obtain compelling renditions requires skill and time. Even contrast and brightness adjustments are challenging because they require taking into account the image content. Photographers are also known for having different retouching preferences. As the result of this complexity, rule-based, one-size-fits-all automatic techniques often fail. This problem can greatly benefit from supervised machine learning but the lack of training data has impeded work in this area. Our first contribution is the creation of a high-quality reference dataset. We collected 5,000 photos, manually annotated them, and hired 5 trained photographers to retouch each picture. The result is a collection of 5 sets of 5,000 example input-output pairs that enable supervised learning. We first use this dataset to predict a user's adjustment from a large training set. We then show that our dataset and features enable the accurate adjustment personalization using a carefully chosen set of training photos. Finally, we introduce difference learning: this method models and predicts difference between users. It frees the user from using predetermined photos for training. We show that difference learning enables accurate prediction using only a handful of examples." ] }
1704.01249
2952251792
Social networking on mobile devices has become a commonplace of everyday life. In addition, photo capturing process has become trivial due to the advances in mobile imaging. Hence people capture a lot of photos everyday and they want them to be visually-attractive. This has given rise to automated, one-touch enhancement tools. However, the inability of those tools to provide personalized and content-adaptive enhancement has paved way for machine-learned methods to do the same. The existing typical machine-learned methods heuristically (e.g. kNN-search) predict the enhancement parameters for a new image by relating the image to a set of similar training images. These heuristic methods need constant interaction with the training images which makes the parameter prediction sub-optimal and computationally expensive at test time which is undesired. This paper presents a novel approach to predicting the enhancement parameters given a new image using only its features, without using any training images. We propose to model the interaction between the image features and its corresponding enhancement parameters using the matrix factorization (MF) principles. We also propose a way to integrate the image features in the MF formulation. We show that our approach outperforms heuristic approaches as well as recent approaches in MF and structured prediction on synthetic as well as real-world data of image enhancement.
Content-based enhancement approaches, which try to improve a particular image region, have been developed in the past @cite_20 @cite_4 . These approaches require segmented regions to be enhanced, which itself may prove difficult. Approaches operating at the pixel level have also been developed using local scene descriptors. First, images similar to the input are retrieved from the training set. Then, for each pixel in the input, similar pixels are retrieved from the training set and used to improve the input pixel. Finally, Gaussian random fields maintain spatial smoothness in the enhanced image. This approach does not consider the global information provided by the image, and hence the enhancements may not be visually appealing when viewed globally. In @cite_3 , a small number of image enhancements were collected from users and then used along with additional training data.
{ "cite_N": [ "@cite_3", "@cite_4", "@cite_20" ], "mid": [ "2028219289", "2142013027", "" ], "abstract": [ "We address the problem of incorporating user preference in automatic image enhancement. Unlike generic tools for automatically enhancing images, we seek to develop methods that can first observe user preferences on a training set, and then learn a model of these preferences to personalize enhancement of unseen images. The challenge of designing such system lies at intersection of computer vision, learning, and usability; we use techniques such as active sensor selection and distance metric learning in order to solve the problem. The experimental evaluation based on user studies indicates that different users do have different preferences in image enhancement, which suggests that personalization can further help improve the subjective quality of generic image enhancements.", "Automatic photo enhancement is one of the long-standing goals in image processing and computational photography. While a variety of methods have been proposed for manipulating tone and colour, most automatic methods used in practice, operate on the entire image without attempting to take the content of the image into account. In this paper, we present a new framework for automatic photo enhancement that attempts to take local and global image semantics into account. Specifically, our content-aware scheme attempts to detect and enhance the appearance of human faces, blue skies with or without clouds and underexposed salient regions. A user study was conducted that demonstrates the effectiveness of the proposed approach compared to existing auto-enhancement tools. © 2012 Wiley Periodicals, Inc.", "" ] }
1704.01442
2953009942
With the widespread adoption of social media sites like Twitter and Facebook, there has been a shift in the way information is produced and consumed. Earlier, the only producers of information were traditional news organizations, which broadcast the same carefully-edited information to all consumers over mass media channels. Whereas, now, in online social media, any user can be a producer of information, and every user selects which other users she connects to, thereby choosing the information she consumes. Moreover, the personalized recommendations that most social media sites provide also contribute towards the information consumed by individual users. In this work, we define a concept of information diet -- which is the topical distribution of a given set of information items (e.g., tweets) -- to characterize the information produced and consumed by various types of users in the popular Twitter social media. At a high level, we find that (i) popular users mostly produce very specialized diets focusing on only a few topics; in fact, news organizations (e.g., NYTimes) produce much more focused diets on social media as compared to their mass media diets, (ii) most users' consumption diets are primarily focused towards one or two topics of their interest, and (iii) the personalized recommendations provided by Twitter help to mitigate some of the topical imbalances in the users' consumption diets, by adding information on diverse topics apart from the users' primary topics of interest.
Analysis of content on mass media: Media studies has been an active field that analyzes the content coverage on mass media and its effects on society ( http://en.wikipedia.org/wiki/Media ). There exist a number of ``media watchdog organizations'' (e.g., FAIR ( http://fair.org ), AIM ( http://www.aim.org )) which judge the content covered by news organizations based on fairness, balance, and accuracy. Additionally, there have also been studies on media biases @cite_1 @cite_9 . Such studies are easier to perform over mass media, since it is a broadcast medium and all users receive the same information. On the other hand, studying the information consumed on social media is much more challenging, since individual users shape their own personalized channels of information by selecting the other users to follow.
{ "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "1119807432", "2060704337" ], "abstract": [ "It is widely thought that news organizations exhibit ideological bias, but rigorously quantifying such slant has proven methodologically challenging. Through a combination of machine learning and crowdsourcing techniques, we investigate the selection and framing of political issues in 15 major U.S. news outlets. Starting with 803,146 news stories published over 12 months, we first used supervised learning algorithms to identify the 14% of articles pertaining to political events. We then recruited 749 online human judges to classify a random subset of 10,950 of these political articles according to topic and ideological position. Our analysis yields an ideological ordering of outlets consistent with prior work. We find, however, that news outlets are considerably more similar than generally believed. Specifically, with the exception of political scandals, we find that major news organizations present topics in a largely non-partisan manner, casting neither Democrats nor Republicans in a particularly favorable or unfavorable light. Moreover, again with the exception of political scandals, there is little evidence of systematic differences in story selection, with all major news outlets covering a wide variety of topics with frequency largely unrelated to the outlet's ideological position. Finally, we find that news organizations express their ideological bias not by directly advocating for a preferred political party, but rather by disproportionately criticizing one side, a convention that further moderates overall differences.", "We measure media bias by estimating ideological scores for several major media outlets. To compute this, we count the times that a particular media outlet cites various think tanks and policy groups, and then compare this with the times that members of Congress cite the same groups. Our results show a strong liberal bias: all of the news outlets we examine, except Fox News' Special Report and the Washington Times, received scores to the left of the average member of Congress. Consistent with claims made by conservative critics, CBS Evening News and the New York Times received scores far to the left of center. The most centrist media outlets were PBS NewsHour, CNN's Newsnight, and ABC's Good Morning America; among print outlets, USA Today was closest to the center. All of our findings refer strictly to news content; that is, we exclude editorials, letters, and the like. \"The editors in Los Angeles killed the story. They told Witcover that it didn't ‘come off’ and that it was an ‘opinion’ story.… The solution was simple, they told him. All he had to do was get other people to make the same points and draw the same conclusions and then write the article in their words\" (emphasis in original). Timothy Crouse, Boys on the Bus [1973, p. 116]." ] }
1704.01442
2953009942
With the widespread adoption of social media sites like Twitter and Facebook, there has been a shift in the way information is produced and consumed. Earlier, the only producers of information were traditional news organizations, which broadcast the same carefully-edited information to all consumers over mass media channels. Whereas, now, in online social media, any user can be a producer of information, and every user selects which other users she connects to, thereby choosing the information she consumes. Moreover, the personalized recommendations that most social media sites provide also contribute towards the information consumed by individual users. In this work, we define a concept of information diet -- which is the topical distribution of a given set of information items (e.g., tweets) -- to characterize the information produced and consumed by various types of users in the popular Twitter social media. At a high level, we find that (i) popular users mostly produce very specialized diets focusing on only a few topics; in fact, news organizations (e.g., NYTimes) produce much more focused diets on social media as compared to their mass media diets, (ii) most users' consumption diets are primarily focused towards one or two topics of their interest, and (iii) the personalized recommendations provided by Twitter help to mitigate some of the topical imbalances in the users' consumption diets, by adding information on diverse topics apart from the users' primary topics of interest.
Information production & consumption on social media: Prior studies on information production and consumption on social media @cite_17 @cite_3 @cite_4 have been limited to studying the amount of information being exchanged among various users. There has not been any notable effort towards analyzing the topical composition of the information produced or consumed, which is the goal of this work.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_17" ], "mid": [ "", "2101196063", "2112896229" ], "abstract": [ "", "Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. 
To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.", "We study several longstanding questions in media communications research, in the context of the microblogging service Twitter, regarding the production, flow, and consumption of information. To do so, we exploit a recently introduced feature of Twitter known as \"lists\" to distinguish between elite users - by which we mean celebrities, bloggers, and representatives of media outlets and other formal organizations - and ordinary users. Based on this classification, we find a striking concentration of attention on Twitter, in that roughly 50 of URLs consumed are generated by just 20K elite users, where the media produces the most information, but celebrities are the most followed. We also find significant homophily within categories: celebrities listen to celebrities, while bloggers listen to bloggers etc; however, bloggers in general rebroadcast more information than the other categories. Next we re-examine the classical \"two-step flow\" theory of communications, finding considerable support for it on Twitter. Third, we find that URLs broadcast by different categories of users or containing different types of content exhibit systematically different lifespans. And finally, we examine the attention paid by the different user categories to different news topics." ] }
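The record above defines an "information diet" as the topical distribution of a set of items (e.g., tweets). A minimal sketch of that idea, with a Shannon-entropy measure of how focused or diverse a diet is (function names are illustrative, not from the paper):

```python
from collections import Counter
import math

def information_diet(topic_labels):
    """Topical distribution (the 'information diet') of a set of items,
    given one topic label per item."""
    counts = Counter(topic_labels)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def diet_diversity(diet):
    """Shannon entropy of the diet: 0 for a fully specialized diet,
    log2(k) for a diet spread evenly over k topics."""
    return -sum(p * math.log2(p) for p in diet.values() if p > 0)

# A specialized producer diet: 90% politics, 10% sports.
diet = information_diet(["politics"] * 9 + ["sports"])
```

Under this sketch, the paper's finding that "popular users mostly produce very specialized diets" corresponds to low entropy of the production diet relative to the consumption diet.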
1704.01442
2953009942
With the widespread adoption of social media sites like Twitter and Facebook, there has been a shift in the way information is produced and consumed. Earlier, the only producers of information were traditional news organizations, which broadcast the same carefully-edited information to all consumers over mass media channels. Whereas, now, in online social media, any user can be a producer of information, and every user selects which other users she connects to, thereby choosing the information she consumes. Moreover, the personalized recommendations that most social media sites provide also contribute towards the information consumed by individual users. In this work, we define a concept of information diet -- which is the topical distribution of a given set of information items (e.g., tweets) -- to characterize the information produced and consumed by various types of users in the popular Twitter social media. At a high level, we find that (i) popular users mostly produce very specialized diets focusing on only a few topics; in fact, news organizations (e.g., NYTimes) produce much more focused diets on social media as compared to their mass media diets, (ii) most users' consumption diets are primarily focused towards one or two topics of their interest, and (iii) the personalized recommendations provided by Twitter help to mitigate some of the topical imbalances in the users' consumption diets, by adding information on diverse topics apart from the users' primary topics of interest.
There have also been some prior works on whether social media users are receiving multiple perspectives on a specific event or topic @cite_24 @cite_16 @cite_18 @cite_7 @cite_10 . Though we focus only on the topical composition of the information produced and consumed by social media users, the concept of information diet introduced in this work can be extended to study opinion polarization on social media.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_24", "@cite_16", "@cite_10" ], "mid": [ "2024633545", "2152284345", "2175300453", "91442942", "" ], "abstract": [ "The bias in the news media is an inherent flaw of the news production process. The resulting bias often causes a sharp increase in political polarization and in the cost of conflict on social issues such as Iraq war. It is very difficult, if not impossible, for readers to have penetrating views on realities against such bias. This paper presents NewsCube, a novel Internet news service aiming at mitigating the effect of media bias. NewsCube automatically creates and promptly provides readers with multiple classified viewpoints on a news event of interest. As such, it effectively helps readers understand a fact from a plural of viewpoints and formulate their own, more balanced viewpoints. While media bias problem has been studied extensively in communications and social sciences, our work is the first to develop a news service as a solution and study its effect. We discuss the effect of the service through various user studies.", "In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. 
Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.", "Political discourse in the United States is getting increasingly polarized. This polarization frequently causes different communities to react very differently to the same news events. Political blogs as a form of social media provide an unique insight into this phenomenon. We present a multitarget, semisupervised latent variable model, MCR-LDA to model this process by analyzing political blogs posts and their comment sections from different political communities jointly to predict the degree of polarization that news topics cause. Inspecting the model after inference reveals topics and the degree to which it triggers polarization. In this approach, community responses to news topics are observed using sentiment polarity and comment volume which serves as a proxy for the level of interest in the topic. In this context, we also present computational methods to assign sentiment polarity to the comments which serve as targets for latent variable models that predict the polarity based on the topics in the blog content. Our results show that the joint modeling of communities with different political beliefs using MCR-LDA does not sacrifice accuracy in sentiment polarity prediction when compared to approaches that are tailored to specific communities and additionally provides a view of the polarization in responses from the different communities.", "In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. 
Using a combination of network clustering algorithms and manually-annotated data we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis.", "" ] }
1704.01442
2953009942
With the widespread adoption of social media sites like Twitter and Facebook, there has been a shift in the way information is produced and consumed. Earlier, the only producers of information were traditional news organizations, which broadcast the same carefully-edited information to all consumers over mass media channels. Whereas, now, in online social media, any user can be a producer of information, and every user selects which other users she connects to, thereby choosing the information she consumes. Moreover, the personalized recommendations that most social media sites provide also contribute towards the information consumed by individual users. In this work, we define a concept of information diet -- which is the topical distribution of a given set of information items (e.g., tweets) -- to characterize the information produced and consumed by various types of users in the popular Twitter social media. At a high level, we find that (i) popular users mostly produce very specialized diets focusing on only a few topics; in fact, news organizations (e.g., NYTimes) produce much more focused diets on social media as compared to their mass media diets, (ii) most users' consumption diets are primarily focused towards one or two topics of their interest, and (iii) the personalized recommendations provided by Twitter help to mitigate some of the topical imbalances in the users' consumption diets, by adding information on diverse topics apart from the users' primary topics of interest.
Topic inference of social media posts: To our knowledge, all prior attempts to infer the topic of a tweet, hashtag, or trending topic rely on the content itself -- either applying NLP and ML techniques @cite_19 @cite_0 @cite_6 @cite_2 or mapping the content to external sources such as Wikipedia or Web search results @cite_21 @cite_8. Such methodologies are of limited utility in the case of social media like Twitter, primarily because tweets are very short and most users write in informal language @cite_22 @cite_23. In contrast to these previous approaches, which focus on the content, our methodology focuses on the characteristics of the authors of the content to infer its topic.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_21", "@cite_6", "@cite_0", "@cite_19", "@cite_23", "@cite_2" ], "mid": [ "2101359637", "2122390506", "2013579020", "2183238970", "2137958601", "", "2094268994", "2098825757" ], "abstract": [ "In this paper, we design and evaluate a novel who-is-who service for inferring attributes that characterize individual Twitter users. Our methodology exploits the Lists feature, which allows a user to group other users who tend to tweet on a topic that is of interest to her, and follow their collective tweets. Our key insight is that the List meta-data (names and descriptions) provides valuable semantic cues about who the users included in the Lists are, including their topics of expertise and how they are perceived by the public. Thus, we can infer a user's expertise by analyzing the meta-data of crowdsourced Lists that contain the user. We show that our methodology can accurately and comprehensively infer attributes of millions of Twitter users, including a vast majority of Twitter's influential users (based on ranking metrics like number of followers). Our work provides a foundation for building better search and recommendation services on Twitter.", "Twitter streams are on overload: active users receive hundreds of items per day, and existing interfaces force us to march through a chronologically-ordered morass to find tweets of interest. We present an approach to organizing a user's own feed into coherently clustered trending topics for more directed exploration. Our Twitter client, called Eddi, groups tweets in a user's feed into topics mentioned explicitly or implicitly, which users can then browse for items of interest. To implement this topic clustering, we have developed a novel algorithm for discovering topics in short status updates powered by linguistic syntactic transformation and callouts to a search engine. 
An algorithm evaluation reveals that search engine callouts outperform other approaches when they employ simple syntactic transformation and backoff strategies. Active Twitter users evaluated Eddi and found it to be a more efficient and enjoyable way to browse an overwhelming status update feed than the standard chronological interface.", "Microblogs have become an important source of information for the purpose of marketing, intelligence, and reputation management. Streams of microblogs are of great value because of their direct and real-time nature. Determining what an individual microblog post is about, however, can be non-trivial because of creative language usage, the highly contextualized and informal nature of microblog posts, and the limited length of this form of communication. We propose a solution to the problem of determining what a microblog post is about through semantic linking: we add semantics to posts by automatically identifying concepts that are semantically related to it and generating links to the corresponding Wikipedia articles. The identified concepts can subsequently be used for, e.g., social media mining, thereby reducing the need for manual inspection and selection. Using a purpose-built test collection of tweets, we show that recently proposed approaches for semantic linking do not perform well, mainly due to the idiosyncratic nature of microblog posts. We propose a novel method based on machine learning with a set of innovative features and show that it is able to achieve significant improvements over all other methods, especially in terms of precision.", "Today, it is the norm for online social (OSN) users to have accounts on multiple services. For example, a recent study showed that 34 of all Twitter users also use Pinterest. This situation leads to interesting questions such as: Are the activities that users perform on each site disjoint? 
Alternatively, if users perform the same actions on multiple sites, where does the information originate? Given the interlinking between social networks, failure to understand activity across multiple sites may obfuscate the true information dissemination dynamics of the social web. In this study, we take the first steps towards a more complete understanding of user behavior across multiple OSNs. We collect a sample of over 30,000 users that have accounts on both Twitter and Pinterest, crawling their profile information and activity on a daily basis for a period of almost three months. We develop a novel methodology for comparing activity across these two sites. We find that the global patterns of use across the two sites differ significantly, and that users tend to post items to Pinterest before posting them on Twitter. Our findings can inform the understanding of the behavior of users on individual sites, as well as the dynamics of sharing across the social web.", "As microblogging grows in popularity, services like Twitter are coming to support information gathering needs above and beyond their traditional roles as social networks. But most users’ interaction with Twitter is still primarily focused on their social graphs, forcing the often inappropriate conflation of “people I follow” with “stuff I want to read.” We characterize some information needs that the current Twitter interface fails to support, and argue for better representations of content for solving these challenges. We present a scalable implementation of a partially supervised learning model (Labeled LDA) that maps the content of the Twitter feed into dimensions. These dimensions correspond roughly to substance, style, status, and social characteristics of posts. 
We characterize users and tweets using this model, and present results on two information consumption oriented tasks.", "", "One of the key challenges for users of social media is judging the topical expertise of other users in order to select trustful information sources about specific topics and to judge credibility of content produced by others. In this paper, we explore the usefulness of different types of user-related data for making sense about the topical expertise of Twitter users. Types of user-related data include messages a user authored or re-published, biographical information a user published on his her profile page and information about user lists to which a user belongs. We conducted a user study that explores how useful different types of data are for informing human's expertise judgements. We then used topic modeling based on different types of data to build and assess computational expertise models of Twitter users. We use We follow directories as a proxy measurement for perceived expertise in this assessment. Our findings show that different types of user-related data indeed differ substantially in their ability to inform computational expertise models and humans's expertise judgements. Tweets and retweets a#x2014; which are often used in literature for gauging the expertise area of users a#x2014; are surprisingly useless for inferring the expertise topics of their authors and are outperformed by other types of user-related data such as information about users' list memberships. Our results have implications for algorithms, user interfaces and methods that focus on capturing expertise of social media users.", "Twitter summarizes the great deal of messages posted by users in the form of trending topics that reflect the top conversations being discussed at a given moment. These trending topics tend to be connected to current affairs. Different happenings can give rise to the emergence of these trending topics. 
For instance, a sports event broadcasted on TV, or a viral meme introduced by a community of users. Detecting the type of origin can facilitate information filtering, enhance real-time data processing, and improve user experience. In this paper, we introduce a typology to categorize the triggers that leverage trending topics: news, current events, memes, and commemoratives. We define a set of straightforward language-independent features that rely on the social spread of the trends to discriminate among those types of trending topics. Our method provides an efficient way to immediately and accurately categorize trending topics without need of external data, outperforming a content-based approach." ] }
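The @cite_22 record above infers a user's expertise from the metadata of crowdsourced Lists containing that user. A hypothetical token-counting sketch of that idea (the stop-word set and ranking are assumptions, not the paper's actual pipeline):

```python
from collections import Counter
import re

def infer_topics_from_lists(list_names, top_k=3):
    """Infer a user's likely expertise topics from the names of the
    Twitter Lists that include the user (toy frequency-based sketch)."""
    stop = {"the", "my", "of", "and", "a", "list", "lists", "people", "to"}
    tokens = []
    for name in list_names:
        # Lowercase and split on non-letters, dropping stop words.
        tokens += [t for t in re.split(r"[^a-z]+", name.lower())
                   if t and t not in stop]
    return [t for t, _ in Counter(tokens).most_common(top_k)]
```

For a user appearing in Lists named "Tech News", "tech-gurus", and "tech people", the dominant inferred topic would be "tech"; the key point, echoed by the abstract, is that List metadata carries this signal even when the user's own tweets do not.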
1704.01244
2606249155
With recent advancements in drone technology, researchers are now considering the possibility of deploying small cells served by base stations mounted on flying drones. A major advantage of such drone small cells is that the operators can quickly provide cellular services in areas of urgent demand without having to pre-install any infrastructure. Since the base station is attached to the drone, technically it is feasible for the base station to dynamically reposition itself in response to the changing locations of users for reducing the communication distance, decreasing the probability of signal blocking, and ultimately increasing the spectral efficiency. In this paper, we first propose distributed algorithms for autonomous control of drone movements, and then model and analyse the spectral efficiency performance of a drone small cell to shed new light on the fundamental benefits of dynamic repositioning. We show that, with dynamic repositioning, the spectral efficiency of drone small cells can be increased by nearly 100% for realistic drone speed, height, and user traffic model and without incurring any major increase in drone energy consumption.
Drones have been considered both in the context of data gathering in wireless sensor networks @cite_6 @cite_7 @cite_13 , and more recently in the context of delivering data to mobile users in cellular networks. Since the focus of this paper is on cellular networks, we only review the drone-related research relevant to cellular networks.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_6" ], "mid": [ "2052566383", "", "2075324951" ], "abstract": [ "Intelligent collaborative environments, where heterogenous entities operate together in achieving common mission objectives have been increasingly adopted for monitoring and surveillance of interest areas and physical infrastructures. They can be assembled from multiple existing technologies ranging from wireless sensor networks (WSN), terrestrial remote operated vehicles (ROV) and unmanned aerial vehicles (UAV). In this context, we first introduce a multi-level system framework for multi-sensory robotic surveillance of critical infrastructure protection through communication, data acquisition and processing - MUROS. Leveraging a cognitive radio (CR) scheme is discussed as key point of the paper, arguing that by exploiting in an opportunistic fashion the time, frequency and spatial stream of the wireless environment, increased communication reliability can be achieved with positive impact on the availability and service level at each hierarchical level. The application of CR, given the heterogeneous nature of the application across multiple radio interfaces and protocols, stand outs as a novel and feasible research direction. We argument the advantages of this scheme within the constraints of a working scenario and define a simulation-based approach in order to validate our solution.", "", "Recent technological advances in electronics, sen- sors, and communications devices have facilitated the prolifer- ation of Unmanned Aircraft System (UAS)-aided applications. However, the UAS-aided communications networks are yet to receive sufficient research endeavor. In this paper, we address one of the most important research challenges pertaining to UAS-aided networks comprising adaptive modulation-capable nodes, namely how to fairly maximize the energy efficiency (throughput per energy). 
For the mobility pattern innate to the UAS, we demonstrate how the adaptive modulation behaves. Furthermore, we formulate the problem as a potential game that is played between the UAS and the network-nodes, and prove its stability, optimality, and convergence. Based upon the potential game, a data collection method is envisioned to maximize the energy efficiency with the fairness constraint. Additionally, we analyze the Price of Anarchy (PoA) of our proposed game. Extensive simulations exhibit the effectiveness of our proposal under varying environments. sensor nodes require only capabilities to communicate with the CHs. The mobility pattern of the UAS causes the distance between a CH and the UAS to vary. The distance between the CH and the UAS affects the Signal-to-Noise Ratio (SNR), which in turn affects the Bit Error Rate (BER) of the CH transmissions. Both SNR and BER affect the modulation scheme. This is because modulation schemes that transmit more bits per symbol require higher values of SNR for a given BER requirement (9). Moreover, if high levels of BER are acceptable, the achievable number of bits per symbol that a modulation scheme transmits can be increased." ] }
1704.01244
2606249155
With recent advancements in drone technology, researchers are now considering the possibility of deploying small cells served by base stations mounted on flying drones. A major advantage of such drone small cells is that the operators can quickly provide cellular services in areas of urgent demand without having to pre-install any infrastructure. Since the base station is attached to the drone, technically it is feasible for the base station to dynamically reposition itself in response to the changing locations of users for reducing the communication distance, decreasing the probability of signal blocking, and ultimately increasing the spectral efficiency. In this paper, we first propose distributed algorithms for autonomous control of drone movements, and then model and analyse the spectral efficiency performance of a drone small cell to shed new light on the fundamental benefits of dynamic repositioning. We show that, with dynamic repositioning, the spectral efficiency of drone small cells can be increased by nearly 100% for realistic drone speed, height, and user traffic model and without incurring any major increase in drone energy consumption.
Because of the flexibility and agility of drones, the problem of deploying drone base stations (BSs) in optimal locations to maximize various network metrics has been investigated in the literature. The authors of @cite_17 provided an analytical model to find the optimal altitude for one UAV that maximizes coverage of an area, where a service threshold is defined in terms of the maximum allowable path loss. Another recent study @cite_25 addressed the problem of finding the optimal cell boundaries and deployment locations for multiple non-interfering UAVs, with the objective of minimizing the total transmission power of the UAVs.
{ "cite_N": [ "@cite_25", "@cite_17" ], "mid": [ "2963686678", "2031834036" ], "abstract": [ "In this paper, the optimal deployment of multiple unmanned aerial vehicles (UAVs) acting as flying base stations is investigated. Considering the downlink scenario, the goal is to minimize the total required transmit power of UAVs while satisfying the users' rate requirements. To this end, the optimal locations of UAVs as well as the cell boundaries of their coverage areas are determined. To find those optimal parameters, the problem is divided into two sub-problems that are solved iteratively. In the first sub-problem, given the cell boundaries corresponding to each UAV, the optimal locations of the UAVs are derived using the facility location framework. In the second sub-problem, the locations of UAVs are assumed to be fixed, and the optimal cell boundaries are obtained using tools from optimal transport theory. The analytical results show that the total required transmit power is significantly reduced by determining the optimal coverage areas for UAVs. These results also show that, moving the UAVs based on users' distribution, and adjusting their altitudes can lead to a minimum power consumption. Finally, it is shown that the proposed deployment approach, can improve the system's power efficiency by a factor of 20 χ compared to the classical Voronoi cell association technique with fixed UAVs locations.", "Low-altitude aerial platforms (LAPs) have recently gained significant popularity as key enablers for rapid deployable relief networks where coverage is provided by onboard radio heads. These platforms are capable of delivering essential wireless communication for public safety agencies in remote areas or during the aftermath of natural disasters. In this letter, we present an analytical approach to optimizing the altitude of such platforms to provide maximum radio coverage on the ground. 
Our analysis shows that the optimal altitude is a function of the maximum allowed pathloss and of the statistical parameters of the urban environment, as defined by the International Telecommunication Union. Furthermore, we present a closed-form formula for predicting the probability of the geometrical line of sight between a LAP and a ground receiver." ] }
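The optimal-altitude result in the @cite_17 record above can be sketched numerically: sweep candidate altitudes, and for each one find the largest ground radius whose mean air-to-ground path loss stays within the allowed budget. The sigmoid LoS-probability parameters below are commonly quoted urban-environment values and are assumptions here, not taken from the paper:

```python
import math

# Assumed urban-environment air-to-ground model parameters.
A, B = 9.61, 0.16              # LoS-probability sigmoid parameters
ETA_LOS, ETA_NLOS = 1.0, 20.0  # excess losses over free space (dB)
FREQ_HZ = 2.0e9                # carrier frequency
C = 3.0e8                      # speed of light (m/s)

def mean_path_loss_db(h, r):
    """Mean path loss at UAV altitude h and ground radius r (metres)."""
    d = math.hypot(h, r)
    theta = math.degrees(math.atan2(h, r))          # elevation angle
    p_los = 1.0 / (1.0 + A * math.exp(-B * (theta - A)))
    fspl = 20.0 * math.log10(4.0 * math.pi * FREQ_HZ * d / C)
    return fspl + p_los * ETA_LOS + (1.0 - p_los) * ETA_NLOS

def coverage_radius(h, max_pl_db, r_max=20000.0):
    """Largest radius still within the path-loss budget (bisection;
    path loss is monotone in r for fixed h)."""
    if mean_path_loss_db(h, 1.0) > max_pl_db:
        return 0.0
    lo, hi = 1.0, r_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mean_path_loss_db(h, mid) <= max_pl_db:
            lo = mid
        else:
            hi = mid
    return lo

def optimal_altitude(max_pl_db, h_grid=range(50, 5001, 50)):
    """Sweep altitudes and return (h*, R*) maximising coverage radius."""
    return max(((h, coverage_radius(h, max_pl_db)) for h in h_grid),
               key=lambda hr: hr[1])
```

The trade-off the abstract describes falls out directly: low altitudes give short links but low elevation angles (mostly NLoS), very high altitudes give LoS but long links, and the coverage-maximising altitude sits in between.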
1704.01244
2606249155
With recent advancements in drone technology, researchers are now considering the possibility of deploying small cells served by base stations mounted on flying drones. A major advantage of such drone small cells is that the operators can quickly provide cellular services in areas of urgent demand without having to pre-install any infrastructure. Since the base station is attached to the drone, technically it is feasible for the base station to dynamically reposition itself in response to the changing locations of users for reducing the communication distance, decreasing the probability of signal blocking, and ultimately increasing the spectral efficiency. In this paper, we first propose distributed algorithms for autonomous control of drone movements, and then model and analyse the spectral efficiency performance of a drone small cell to shed new light on the fundamental benefits of dynamic repositioning. We show that, with dynamic repositioning, the spectral efficiency of drone small cells can be increased by nearly 100% for realistic drone speed, height, and user traffic model and without incurring any major increase in drone energy consumption.
UAVs have also been proposed to establish emergency communication links in disaster situations and thus improve public safety @cite_27. That work showed that optimal placement of UAVs can significantly improve system throughput; brute-force search was used to find the optimal UAV locations in the target area.
{ "cite_N": [ "@cite_27" ], "mid": [ "1540107614" ], "abstract": [ "Communications play an important role during public safety operations. Since the current communication technologies heavily rely on the backbone network, the failure of base stations (BSs) due to natural disasters or malevolent attacks causes communication difficulties for public safety and emergency communications. Recently, the use of unmanned aerial vehicles (UAVs) such as quadcopters and unmanned gliders have gained attention in public safety communications. They can be used as unmanned aerial base stations (UABSs), which can be deployed rapidly as a part of the heterogeneous network architecture. However, due to their mobile characteristics, interference management in the network becomes very challenging. In this paper, we explore the use of UABSs for public safety communications during natural disasters, where part of the communication infrastructure becomes damaged and dysfunctional (e.g., as in the aftermath of the 2011 earthquake and tsunami in Japan). Through simulations, we analyze the throughput gains that can be obtained by exploiting the mobility feature of the UAVs. Our simulation results show that when there is loss of network infrastructure, the deployment of UABSs at optimized locations can improve the throughput coverage and the 5th percentile spectral efficiency of the network. Furthermore, the improvement is observed to be more significant with higher path-loss exponents." ] }
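The brute-force placement mentioned above can be sketched as an exhaustive grid search: evaluate a simple sum-rate objective at every candidate hover point and keep the best. The path-loss exponent, transmit power, and noise figure below are illustrative assumptions, not values from @cite_27:

```python
import math

def sum_log_rate(uav_xy, users, h=100.0, alpha=3.0, p_tx=1.0, noise=1e-9):
    """Total log2(1 + SNR) over ground users for a UAV hovering at
    (x, y, h), with simple distance-based power-law path loss."""
    total = 0.0
    for ux, uy in users:
        d = math.sqrt((uav_xy[0] - ux) ** 2 + (uav_xy[1] - uy) ** 2 + h * h)
        snr = p_tx * d ** (-alpha) / noise
        total += math.log2(1.0 + snr)
    return total

def brute_force_placement(users, area=1000.0, step=50.0):
    """Grid search over candidate hover points in an area x area square;
    return the best (x, y) and its objective value."""
    best, best_val = None, -1.0
    x = 0.0
    while x <= area:
        y = 0.0
        while y <= area:
            val = sum_log_rate((x, y), users)
            if val > best_val:
                best, best_val = (x, y), val
            y += step
        x += step
    return best, best_val
```

For users clustered around one point, the search lands on the grid point nearest the cluster, matching the intuition that throughput improves when the UAV hovers over the demand hot spot; the cost is that evaluation count grows quadratically with grid resolution.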
1704.01244
2606249155
With recent advancements in drone technology, researchers are now considering the possibility of deploying small cells served by base stations mounted on flying drones. A major advantage of such drone small cells is that the operators can quickly provide cellular services in areas of urgent demand without having to pre-install any infrastructure. Since the base station is attached to the drone, technically it is feasible for the base station to dynamically reposition itself in response to the changing locations of users for reducing the communication distance, decreasing the probability of signal blocking, and ultimately increasing the spectral efficiency. In this paper, we first propose distributed algorithms for autonomous control of drone movements, and then model and analyse the spectral efficiency performance of a drone small cell to shed new light on the fundamental benefits of dynamic repositioning. We show that, with dynamic repositioning, the spectral efficiency of drone small cells can be increased by nearly 100% for realistic drone speed, height, and user traffic model and without incurring any major increase in drone energy consumption.
Deploying multiple interfering UAVs brings further challenges, such as choosing the distance between UAVs. The problem of optimally deploying two interfering drone small cells is investigated in @cite_19 . @cite_10 addressed the problem of cell outage or cell overload by using UAVs to temporarily offload traffic to neighboring cells in 4G networks. In that work, a central planning model for the placement of UAV relays was discussed, and the feasibility of the solution was proved using an analytical model.
{ "cite_N": [ "@cite_19", "@cite_10" ], "mid": [ "2226130968", "1971595511" ], "abstract": [ "The use of drone small cells (DSCs) which are aerial wireless base stations that can be mounted on flying devices such as unmanned aerial vehicles (UAVs), is emerging as an effective technique for providing wireless services to ground users in a variety of scenarios. The efficient deployment of such DSCs while optimizing the covered area is one of the key design challenges. In this paper, considering the low altitude platform (LAP), the downlink coverage performance of DSCs is investigated. The optimal DSC altitude which leads to a maximum ground coverage and minimum required transmit power for a single DSC is derived. Furthermore, the problem of providing a maximum coverage for a certain geographical area using two DSCs is investigated in two scenarios; interference free and full interference between DSCs. The impact of the distance between DSCs on the coverage area is studied and the optimal distance between DSCs resulting in maximum coverage is derived. Numerical results verify our analytical results on the existence of optimal DSCs altitude separation distance and provide insights on the optimal deployment of DSCs to supplement wireless network coverage.", "Compensating temporary overload or site outage in cellular mobile networks is still an unsolved problem in order to avoid situations where services are unavailable. For this objective, we propose to use a swarm of Unmanned Aerial Vehicles (UAVs) equipped with cellular technology to temporarily offload traffic into neighbouring cells in LTE 4G networks. We discuss relay placement, amount of relays and relay transmit power for overload and outage compensation and provide an analytical model for evaluating system performance in the downlink. We assume that the spatial separation between the aerial service provider, users, and offload eNodeB is beneficial for temporarily increasing spectral efficiency. 
Our results give evidence, that aerial network provisioning can be used for optimizing mobile networks in overload and outage scenarios." ] }
1704.01244
2606249155
With recent advancements in drone technology, researchers are now considering the possibility of deploying small cells served by base stations mounted on flying drones. A major advantage of such drone small cells is that the operators can quickly provide cellular services in areas of urgent demand without having to pre-install any infrastructure. Since the base station is attached to the drone, technically it is feasible for the base station to dynamically reposition itself in response to the changing locations of users for reducing the communication distance, decreasing the probability of signal blocking, and ultimately increasing the spectral efficiency. In this paper, we first propose distributed algorithms for autonomous control of drone movements, and then model and analyse the spectral efficiency performance of a drone small cell to shed new light on the fundamental benefits of dynamic repositioning. We show that, with dynamic repositioning, the spectral efficiency of drone small cells can be increased by nearly 100% for realistic drone speed, height, and user traffic model and without incurring any major increase in drone energy consumption.
In addition, we propose that drone BSs should not fly too fast in cellular networks because (a) drones flying at higher speeds consume much more energy than those flying at lower speeds @cite_20 , and (b) drones flying at high speed can cause tremendous damage in case of collision.
{ "cite_N": [ "@cite_20" ], "mid": [ "1561867011" ], "abstract": [ "Coverage path planning is the operation of finding a path that covers all the points of a specific area. Thanks to the recent advances of hardware technology, Unmanned Aerial Vehicles (UAVs) are starting to be used for photogrammetric sensing of large areas in several application domains, such as agriculture, rescuing, and surveillance. However, most of the research focused on finding the optimal path taking only geometrical constraints into account, without considering the peculiar features of the robot, like available energy, weight, maximum speed, sensor resolution, etc. This paper proposes an energy-aware path planning algorithm that minimizes energy consumption while satisfying a set of other requirements, such as coverage and resolution. The algorithm is based on an energy model derived from real measurements. Finally, the proposed approach is validated through a set of experiments." ] }
1704.01355
2963959365
Snapshot Isolation (SI) is a widely adopted concurrency control mechanism in database systems, which utilizes timestamps to resolve conflicts between transactions. However, centralized allocation of timestamps is a potential bottleneck for parallel transaction management. This bottleneck is becoming increasingly visible with the rapidly growing degree of parallelism of today's computing platforms. This paper introduces Posterior Snapshot Isolation (PostSI), an SI mechanism that allows transactions to determine their timestamps autonomously, without relying on centralized coordination. As such, PostSI can scale well, rendering it suitable for various multi-core and MPP platforms. Extensive experiments are conducted to demonstrate its advantage over existing approaches.
Decentralized concurrency control has been studied extensively in the context of distributed or parallel databases @cite_24 @cite_20 @cite_35 . Most systems resort to locking for distributed concurrency control @cite_24 @cite_3 , as locking seems relatively easy to decentralize. To decentralize a locking-based concurrency controller, such as 2PL, we maintain a lock table on each data node, so that locking can be performed locally. Deadlock detection is usually necessary for lock-based approaches. While distributed deadlock detection requires no centralized coordination, it can be expensive. Similarly, mechanisms of Optimistic Concurrency Control (OCC) are not difficult to decentralize either @cite_19 @cite_41 -- the read and write sets of a transaction can be partitioned and stored locally, and the validation can be conducted separately on each node. However, traditional 2PL and OCC based approaches do not share the advantages of MVCC: they either block or abort transactions when encountering read-write conflicts.
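The per-node locking idea above can be sketched as follows. This is a minimal illustration of the decentralization pattern, not code from any of the cited systems; the class and method names are hypothetical, and waiter queues and distributed deadlock detection are omitted.

```python
import threading

class NodeLockTable:
    """Hypothetical per-node lock table: each data node grants locks
    for its own partition, so no central lock manager is involved."""

    def __init__(self):
        self._owner = {}                 # key -> id of owning transaction
        self._mutex = threading.Lock()

    def try_lock(self, txn_id, key):
        # Non-blocking acquire; a real system would queue waiters and
        # run (possibly distributed) deadlock detection among the nodes.
        with self._mutex:
            if key not in self._owner:
                self._owner[key] = txn_id
                return True
            return self._owner[key] == txn_id

    def release_all(self, txn_id):
        with self._mutex:
            for key in [k for k, t in self._owner.items() if t == txn_id]:
                del self._owner[key]

# One table per data node; a transaction touching keys on a node talks
# only to that node's table.
nodes = {0: NodeLockTable(), 1: NodeLockTable()}
assert nodes[0].try_lock("T1", "x")      # T1 locks x on node 0
assert not nodes[0].try_lock("T2", "x")  # conflict detected locally on node 0
assert nodes[1].try_lock("T2", "y")      # node 1 needs no coordination
nodes[0].release_all("T1")
assert nodes[0].try_lock("T2", "x")
```

The point of the sketch is that conflict detection is purely local; only deadlock detection may need to look across nodes.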
{ "cite_N": [ "@cite_35", "@cite_41", "@cite_3", "@cite_24", "@cite_19", "@cite_20" ], "mid": [ "2585130803", "1433235304", "2127872526", "2389944897", "2016035261", "2018464987" ], "abstract": [ "Increasing transaction volumes have led to a resurgence of interest in distributed transaction processing. In particular, partitioning data across several servers can improve throughput by allowing servers to process transactions in parallel. But executing transactions across servers limits the scalability and performance of these systems. In this paper, we quantify the effects of distribution on concurrency control protocols in a distributed environment. We evaluate six classic and modern protocols in an in-memory distributed database evaluation framework called Deneva, providing an apples-to-apples comparison between each. Our results expose severe limitations of distributed transaction processing engines. Moreover, in our analysis, we identify several protocol-specific scalability bottlenecks. We conclude that to achieve truly scalable operation, distributed concurrency control solutions must seek a tighter coupling with either novel network hardware (in the local area) or applications (via data modeling and semantically-aware execution), or both.", "Distributed storage systems run transactions across machines to ensure serializability. Traditional protocols for distributed transactions are based on two-phase locking (2PL) or optimistic concurrency control (OCC). 2PL serializes transactions as soon as they conflict and OCC resorts to aborts, leaving many opportunities for concurrency on the table. This paper presents ROCOCO, a novel concurrency control protocol for distributed transactions that outperforms 2PL and OCC by allowing more concurrency. ROCOCO executes a transaction as a collection of atomic pieces, each of which commonly involves only a single server. Servers first track dependencies between concurrent transactions without actually executing them. 
At commit time, a transaction's dependency information is sent to all servers so they can re-order conflicting pieces and execute them in a serializable order. We compare ROCOCO to OCC and 2PL using a scaled TPC-C benchmark. ROCOCO outperforms 2PL and OCC in workloads with varying degrees of contention. When the contention is high, ROCOCO's throughput is 130% and 347% higher than that of 2PL and OCC.",
The conventional two-phase locking (2PL) Concurrency Control (CC) method may, therefore, restrict system throughput to levels inconsistent with the available processing capacity. This is especially a concern in shared-nothing or data-partitioned systems due to the extra latencies for internode communication and a reliable commit protocol. The optimistic CC (OCC) is a possible solution, but currently proposed methods have the disadvantage of repeated transaction restarts. We present a distributed OCC method followed by locking, such that locking is an integral part of distributed validation and two-phase commit. This method ensures at most one re-execution, if the validation for the optimistic phase fails. Deadlocks, which are possible with 2PL, are prevented by preclaiming locks for the second execution phase. This is done in the same order at all nodes. We outline implementation details and compare the performance of the new OCC method with distributed 2PL through a detailed simulation that incorporates queueing effects at the devices of the computer systems, buffer management, concurrency control, and commit processing. It is shown that for higher data contention levels, the hybrid OCC method allows a much higher maximum transaction throughput than distributed 2PL in systems with high processing capacities. In addition to the comparison of CC methods, the simulation study is used to study the effect of varying the number of computer systems with a fixed total processing capacity and the effect of locality of access in each case. We also describe several interesting variants of the proposed OCC method, including methods for handling access variance, i.e., when rerunning a transaction results in accesses to a different set of objects.", "Transactional Information Systems is the long-awaited, comprehensive work from leading scientists in the transaction processing field. 
Weikum and Vossen begin with a broad look at the role of transactional technology in today's economic and scientific endeavors, then delve into critical issues faced by all practitioners, presenting today's most effective techniques for controlling concurrent access by multiple clients, recovering from system failures, and coordinating distributed transactions. The authors emphasize formal models that are easily applied across fields, that promise to remain valid as current technologies evolve, and that lend themselves to generalization and extension in the development of new classes of network-centric, functionally rich applications. This book's purpose and achievement is the presentation of the foundations of transactional systems as well as the practical aspects of the field what will help you meet today's challenges. * Provides the most advanced coverage of the topic available anywhere--along with the database background required for you to make full use of this material. * Explores transaction processing both generically as a broadly applicable set of information technology practices and specifically as a group of techniques for meeting the goals of your enterprise. * Contains information essential to developers of Web-based e-Commerce functionality--and a wide range of more \"traditional\" applications. * Details the algorithms underlying core transaction processing functionality. Table of Contents PART ONE - BACKGROUND AND MOTIVATION Chapter 1 What Is It All About? 
Chapter 2 Computational Models PART TWO - CONCURRENCY CONTROL Chapter 3 Concurrency Control: Notions of Correctness for the Page Model Chapter 4 Concurrency Control Algorithms Chapter 5 Multiversion Concurrency Control Chapter 6 Concurrency Control on Objects: Notions of Correctness Chapter 7 Concurrency Control Algorithms on Objects Chapter 8 Concurrency Control on Relational Databases Chapter 9 Concurrency Control on Search Structures Chapter 10 Implementation and Pragmatic Issues PART THREE - RECOVERY Chapter 11 Transaction Recovery Chapter 12 Crash Recovery: Notion of Correctness Chapter 13 Page Model Crash Recovery Algorithms Chapter 14 Object Model Crash Recovery Chapter 15 Special Issues of Recovery Chapter 16 Media Recovery Chapter 17 Application Recovery PART FOUR - COORDINATION OF DISTRIBUTED TRANSACTIONS Chapter 18 Distributed Concurrency Control Chapter 19 Distributed Transaction Recovery PART FIVE - APPLICATIONS AND FUTURE PERSPECTIVES Chapter 20 What Is Next?" ] }
1704.01355
2963959365
Snapshot Isolation (SI) is a widely adopted concurrency control mechanism in database systems, which utilizes timestamps to resolve conflicts between transactions. However, centralized allocation of timestamps is a potential bottleneck for parallel transaction management. This bottleneck is becoming increasingly visible with the rapidly growing degree of parallelism of today's computing platforms. This paper introduces Posterior Snapshot Isolation (PostSI), an SI mechanism that allows transactions to determine their timestamps autonomously, without relying on centralized coordination. As such, PostSI can scale well, rendering it suitable for various multi-core and MPP platforms. Extensive experiments are conducted to demonstrate its advantage over existing approaches.
When considering MVCC, decentralization of concurrency control becomes a challenge, as most existing implementations of MVCC rely on timestamps to determine the right data version for a transaction to access. (While MV2PL @cite_8 does not require timestamps, it can only delay rather than avoid blocking when confronted with read-write conflicts.) In the literature, several approaches to distributed or parallel MVCC have been proposed @cite_36 @cite_32 @cite_4 . They either aim to improve the scalability of distributed MVCC @cite_13 @cite_44 @cite_5 @cite_32 @cite_16 , or to provide high-availability support @cite_4 @cite_33 @cite_23 . However, most of these approaches still use central clocks to allocate timestamps. In what follows, we briefly review the work that attempts to alleviate centralized timestamp allocation.
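To illustrate why timestamps are central to MVCC, the standard snapshot visibility rule can be sketched in a few lines: a transaction reads the newest committed version whose commit timestamp does not exceed its snapshot timestamp. This is a simplified sketch under assumed names, not the implementation of any cited system.

```python
# Pick the version of a record visible to a given snapshot timestamp.
def visible_version(versions, snapshot_ts):
    """versions: list of (commit_ts, value) pairs, in any order."""
    candidates = [(ts, v) for ts, v in versions if ts <= snapshot_ts]
    if not candidates:
        return None                    # record did not exist yet
    return max(candidates)[1]          # newest qualifying version wins

history = [(10, "a"), (20, "b"), (35, "c")]   # committed versions of one record
assert visible_version(history, 25) == "b"    # snapshot at 25 sees commit 20
assert visible_version(history, 40) == "c"
assert visible_version(history, 5) is None
```

Because this rule compares timestamps drawn by different transactions, a decentralized scheme must ensure those timestamps are still mutually comparable, which is exactly where centralized clocks usually creep back in.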
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_8", "@cite_36", "@cite_32", "@cite_44", "@cite_23", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "1969925795", "2025073323", "1545155892", "1507320341", "2034173754", "", "", "2086181186", "2147806092", "2145522924" ], "abstract": [ "We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application.", "Modern cloud systems are geo-replicated to improve application latency and availability. Transactional consistency is essential for application developers; however, the corresponding concurrency control and commitment protocols are costly in a geo-replicated setting. To minimize this cost, we identify the following essential scalability properties: (i) only replicas updated by a transaction T make steps to execute T; (ii) a read-only transaction never waits for concurrent transactions and always commits; (iii) a transaction may read object versions committed after it started; and (iv) two transactions synchronize with each other only if their writes conflict. We present Non-Monotonic Snapshot Isolation (NMSI), the first strong consistency criterion to allow implementations with all four properties. We also present a practical implementation of NMSI called Jessy, which we compare experimentally against a number of well-known criteria. 
Our measurements show that the latency and throughput of NMSI are comparable to the weakest criterion, read-committed, and between two to fourteen times faster than well-known strong consistencies.", "This book is an introduction to the design and implementation of concurrency control and recovery mechanisms for transaction management in centralized and distributed database systems. Concurrency control and recovery have become increasingly important as businesses rely more and more heavily on their on-line data processing activities. For high performance, the system must maximize concurrency by multiprogramming transactions. But this can lead to interference between queries and updates, which concurrency control mechanisms must avoid. In addition, a satisfactory recovery system is necessary to ensure that inevitable transaction and database system failures do not corrupt the database.", "Federated transaction management (also known as multidatabase transaction management in the literature) is needed to ensure the consistency of data that is distributed across multiple, largely autonomous, and possibly heterogeneous component databases and accessed by both global and local transactions. While the global atomicity of such transactions can be enforced by using a standardized commit protocol like XA or its CORBA counterpart OTS, global serializability is not self-guaranteed as the underlying component systems may use a variety of potentially incompatible local concurrency control protocols. The problem of how to achieve global serializability, by either constraining the component systems or implementing additional global protocols at the federation level, has been intensively studied in the literature, but did not have much impact on the practical side. 
A major deficiency of the prior work has been that it focused on the idealized correctness criterion of serializability and disregarded the subtle but important variations of SQL isolation levels supported by most commercial database systems. This paper reconsiders the problem of federated transaction management, more specifically its concurrency control issues, with particular focus on isolation levels used in practice, especially the popular snapshot isolation provided by Oracle. As pointed out in a SIGMOD 1995 paper by , a rigorous foundation for reasoning about such concurrency control features of commercial systems is sorely missing. The current paper aims to close this gap by developing a formal framework that allows us to reason about local and global transaction executions where some (or all) transactions are running under snapshot isolation. The paper derives criteria and practical protocols for guaranteeing global snapshot isolation at the federation level. It further generalizes the well-known ticket method to cope with combinations of isolation levels in a federated system.", "Modern database systems employ Snapshot Isolation to implement concurrency control and isolationbecause it promises superior query performance compared to lock-based alternatives. Furthermore, Snapshot Isolation never blocks readers, which is an important property for modern information systems, which have mixed workloads of heavy OLAP queries and short update transactions. This paper revisits the problem of implementing Snapshot Isolation in a distributed database system and makes three important contributions. First, a complete definition of Distributed Snapshot Isolation is given, thereby extending existing definitions from the literature. Based on this definition, a set of criteria is proposed to efficiently implement Snapshot Isolation in a distributed system. 
Second, the design space of alternative methods to implement Distributed Snapshot Isolation is presented based on this set of criteria. Third, a new approach to implement Distributed Snapshot Isolation is devised; we refer to this approach as Incremental. The results of comprehensive performance experiments with the TPC-C benchmark show that the Incremental approach significantly outperforms any other known method from the literature. Furthermore, the Incremental approach requires no a priori knowledge of which nodes of a distributed system are involved in executing a transaction. Also, the Incremental approach can execute transactions that involve data from a single node only with the same efficiency as a centralized database system. This way, the Incremental approach takes advantage of sharding or other ways to improve data locality. The cost for synchronizing transactions in a distributed system is only paid by transactions that actually involve data from several nodes. All these properties make the Incremental approach more practical than related methods proposed in the literature.", "", "", "One of the core principles of the SAP HANA database system is the comprehensive support of distributed query facility. Supporting scale-out scenarios was one of the major design principles of the system from the very beginning. Within this paper, we first give an overview of the overall functionality with respect to data allocation, metadata caching and query routing. We then dive into some level of detail for specific topics and explain features and methods not common in traditional disk-based database systems. In summary, the paper provides a comprehensive overview of distributed query processing in SAP HANA database to achieve scalability to handle large databases and heterogeneous types of workloads.", "Clock-SI is a fully distributed protocol that implements snapshot isolation (SI) for partitioned data stores. 
It derives snapshot and commit timestamps from loosely synchronized clocks, rather than from a centralized timestamp authority as used in current systems. A transaction obtains its snapshot timestamp by reading the clock at its originating partition and Clock-SI provides the corresponding consistent snapshot across all the partitions. In contrast to using a centralized timestamp authority, Clock-SI has availability and performance benefits: It avoids a single point of failure and a potential performance bottleneck, and improves transaction latency and throughput. We develop an analytical model to study the trade-offs introduced by Clock-SI among snapshot age, delay probabilities of transactions, and abort rates of update transactions. We verify the model predictions using a system implementation. Furthermore, we demonstrate the performance benefits of Clock-SI experimentally using a micro-benchmark and an application-level benchmark on a partitioned key-value store. For short read-only transactions, Clock-SI improves latency and throughput by 50% by avoiding communications with a centralized timestamp authority. With a geographically partitioned data store, Clock-SI reduces transaction latency by more than 100 milliseconds. Moreover, the performance benefits of Clock-SI come with higher availability.",
In this paper, we show how multi-row distributed transactions with global SI guarantee can be easily supported by using bare-bones HBase with its default configuration so that the high throughput, scalability, fault tolerance, access transparency and easy deployability properties of HBase can be inherited. Through performance studies, we quantify the cost of adopting our technique. The contribution of this paper is that we provide a novel approach to use HBase as a cloud database solution with global SI at low added cost. Our approach can be easily extended to other column-oriented data stores." ] }
1704.01355
2963959365
Snapshot Isolation (SI) is a widely adopted concurrency control mechanism in database systems, which utilizes timestamps to resolve conflicts between transactions. However, centralized allocation of timestamps is a potential bottleneck for parallel transaction management. This bottleneck is becoming increasingly visible with the rapidly growing degree of parallelism of today's computing platforms. This paper introduces Posterior Snapshot Isolation (PostSI), an SI mechanism that allows transactions to determine their timestamps autonomously, without relying on centralized coordination. As such, PostSI can scale well, rendering it suitable for various multi-core and MPP platforms. Extensive experiments are conducted to demonstrate its advantage over existing approaches.
In @cite_32 , the authors introduced Distributed Snapshot Isolation (DSI), an SI scheme for MPP databases. They proposed four methods to implement DSI. Among the four, the Incremental method is regarded as the most efficient. In this method, a local transaction only interacts with the local clock to retrieve timestamps. Only when a transaction attempts to access data on a remote node does it interact with that node to obtain an appropriate remote timestamp. To ensure the validity of remote timestamps, a global clock is still required, and a mapping between each local clock and the global clock is maintained. Each node communicates with the coordinator occasionally to keep the mapping up-to-date. Although this method avoids centralized coordination for single-node transactions, coordination remains mandatory for cross-node transactions. Compared to DSI, ViCC eliminates the need for centralized coordination completely.
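The local-to-global clock mapping described above can be sketched roughly as follows. This is a hypothetical simplification (an additive offset refreshed by occasional coordinator contact); the actual DSI mechanism maintains the mapping with more care.

```python
# Sketch of a node clock that serves local transactions without any
# coordination, and translates to global time only when needed.
class NodeClock:
    def __init__(self):
        self.local = 0
        self.offset = 0            # global = local + offset, refreshed lazily

    def tick(self):
        self.local += 1
        return self.local          # sufficient for purely local transactions

    def sync(self, global_now):
        # Occasional contact with the coordinator keeps the mapping fresh.
        self.offset = global_now - self.local

    def to_global(self, local_ts):
        return local_ts + self.offset   # used only by cross-node transactions

n = NodeClock()
t1 = n.tick()
t2 = n.tick()                     # local transactions: no coordination at all
n.sync(global_now=100)            # coordinator says global time is 100
assert n.to_global(t2) == 100
assert t1 < t2                    # local ordering is preserved by the mapping
```

The sketch shows why single-node transactions are cheap (only `tick` is on their path) while cross-node ones still pay for keeping the mapping valid.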
{ "cite_N": [ "@cite_32" ], "mid": [ "2034173754" ], "abstract": [ "Modern database systems employ Snapshot Isolation to implement concurrency control and isolationbecause it promises superior query performance compared to lock-based alternatives. Furthermore, Snapshot Isolation never blocks readers, which is an important property for modern information systems, which have mixed workloads of heavy OLAP queries and short update transactions. This paper revisits the problem of implementing Snapshot Isolation in a distributed database system and makes three important contributions. First, a complete definition of Distributed Snapshot Isolation is given, thereby extending existing definitions from the literature. Based on this definition, a set of criteria is proposed to efficiently implement Snapshot Isolation in a distributed system. Second, the design space of alternative methods to implement Distributed Snapshot Isolation is presented based on this set of criteria. Third, a new approach to implement Distributed Snapshot Isolation is devised; we refer to this approach as Incremental. The results of comprehensive performance experiments with the TPC-C benchmark show that the Incremental approach significantly outperforms any other known method from the literature. Furthermore, the Incremental approach requires no a priori knowledge of which nodes of a distributed system are involved in executing a transaction. Also, the Incremental approach can execute transactions that involve data from a single node only with the same efficiency as a centralized database system. This way, the Incremental approach takes advantage of sharding or other ways to improve data locality. The cost for synchronizing transactions in a distributed system is only paid by transactions that actually involve data from several nodes. All these properties make the Incremental approach more practical than related methods proposed in the literature." ] }
1704.01355
2963959365
Snapshot Isolation (SI) is a widely adopted concurrency control mechanism in database systems, which utilizes timestamps to resolve conflicts between transactions. However, centralized allocation of timestamps is a potential bottleneck for parallel transaction management. This bottleneck is becoming increasingly visible with the rapidly growing degree of parallelism of today's computing platforms. This paper introduces Posterior Snapshot Isolation (PostSI), an SI mechanism that allows transactions to determine their timestamps autonomously, without relying on centralized coordination. As such, PostSI can scale well, rendering it suitable for various multi-core and MPP platforms. Extensive experiments are conducted to demonstrate its advantage over existing approaches.
To avoid using a central clock, another viable approach is to use synchronized distributed physical clocks (a.k.a. TrueTime devices). A typical example is Spanner @cite_17 , a distributed database system developed by Google. Spanner utilizes GPS clocks and atomic clocks to constrain the deviation among different physical clocks within a small error bound. It then builds its concurrency control mechanism upon the timestamps generated by the TrueTime devices. However, as GPS clocks and atomic clocks are not common hardware, the approach of Spanner does not seem to be widely applicable. Instead of relying on hardware of high accuracy, Clock-SI @cite_16 resorts to an algorithmic approach that derives timestamps from loosely synchronized physical clocks. Loose clock synchronization @cite_6 unavoidably results in clock skew. To deal with such skew, Clock-SI has to let a node that falls behind see only old data snapshots, or force an ahead node to wait for a behind node. This makes Clock-SI unstable, as enlarged clock skew results in severe performance loss. ViCC, in contrast, does not rely on synchronized physical clocks.
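The Clock-SI rule that an ahead node must wait for a behind node can be made concrete with a minimal sketch. This is not the actual Clock-SI implementation: the `Partition` class, the clock offsets, and the polling loop are hypothetical simplifications used only to illustrate the waiting rule.

```python
import time

class Partition:
    """A data partition with its own (possibly skewed) physical clock."""
    def __init__(self, clock_offset=0.0):
        self.clock_offset = clock_offset

    def now(self):
        return time.monotonic() + self.clock_offset

def read_with_snapshot(partition, snapshot_ts, poll=0.001):
    """Clock-SI-style rule: a read carrying snapshot_ts must wait until the
    target partition's clock has passed snapshot_ts; otherwise the partition
    could still commit writes with timestamps below snapshot_ts, making the
    snapshot inconsistent."""
    while partition.now() < snapshot_ts:
        time.sleep(poll)  # the ahead node effectively waits for the behind node
    return f"consistent read at {snapshot_ts:.3f}"

# A partition whose clock lags by 50 ms forces the reader to wait ~50 ms,
# which is exactly the performance loss that grows with clock skew.
origin = Partition(clock_offset=0.0)
lagging = Partition(clock_offset=-0.05)
ts = origin.now()
print(read_with_snapshot(lagging, ts))
```

The sketch shows why enlarged skew hurts: every read that crosses from an ahead node to a behind node pays the skew as latency.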
{ "cite_N": [ "@cite_16", "@cite_6", "@cite_17" ], "mid": [ "2147806092", "2110026634", "" ], "abstract": [ "Clock-SI is a fully distributed protocol that implements snapshot isolation (SI) for partitioned data stores. It derives snapshot and commit timestamps from loosely synchronized clocks, rather than from a centralized timestamp authority as used in current systems. A transaction obtains its snapshot timestamp by reading the clock at its originating partition and Clock-SI provides the corresponding consistent snapshot across all the partitions. In contrast to using a centralized timestamp authority, Clock-SI has availability and performance benefits: It avoids a single point of failure and a potential performance bottleneck, and improves transaction latency and throughput. We develop an analytical model to study the trade-offs introduced by Clock-SI among snapshot age, delay probabilities of transactions, and abort rates of update transactions. We verify the model predictions using a system implementation. Furthermore, we demonstrate the performance benefits of Clock-SI experimentally using a micro-benchmark and an application-level benchmark on a partitioned key-value store. For short read-only transactions, Clock-SI improves latency and throughput by 50 by avoiding communications with a centralized timestamp authority. With a geographically partitioned data store, Clock-SI reduces transaction latency by more than 100 milliseconds. Moreover, the performance benefits of Clock-SI come with higher availability.", "This paper describes an efficient optimistic concurrency control scheme for use in distributed database systems in which objects are cached and manipulated at client machines while persistent storage and transactional support are provided by servers. The scheme provides both serializability and external consistency for committed transactions; it uses loosely synchronized clocks to achieve global serialization. 
It stores only a single version of each object, and avoids maintaining any concurrency control information on a per-object basis; instead, it tracks recent invalidations on a per-client basis, an approach that has low in-memory space overhead and no per-object disk overhead. In addition to its low space overheads, the scheme also performs well. The paper presents a simulation study that compares the scheme to adaptive callback locking, the best concurrency control scheme for client-server object-oriented database systems studied to date. The study shows that our scheme outperforms adaptive callback locking for low to moderate contention workloads, and scales better with the number of clients. For high contention workloads, optimism can result in a high abort rate; the scheme presented here is a first step toward a hybrid scheme that we expect to perform well across the full range of workloads.", "" ] }
1704.01355
2963959365
Snapshot Isolation (SI) is a widely adopted concurrency control mechanism in database systems, which utilizes timestamps to resolve conflicts between transactions. However, centralized allocation of timestamps is a potential bottleneck for parallel transaction management. This bottleneck is becoming increasingly visible with the rapidly growing degree of parallelism of today's computing platforms. This paper introduces Posterior Snapshot Isolation (PostSI), an SI mechanism that allows transactions to determine their timestamps autonomously, without relying on centralized coordination. As such, PostSI can scale well, rendering it suitable for various multi-core and MPP platforms. Extensive experiments are conducted to demonstrate its advantage over existing approaches.
With the prevalence of multicore processors, some recent work @cite_15 @cite_29 @cite_37 has studied how to scale MVCC on multicore platforms. In @cite_15 , a unique MVCC mechanism named BOHM is proposed. It determines the versions of transactions' writes prior to their execution, so as to improve the parallelism of transaction processing. In @cite_29 , the authors proposed a carefully engineered MVCC mechanism that uses precision locking to achieve enhanced performance. In @cite_37 , a transaction repairing scheme is introduced to speed up the "abort and restart" phase of transactions. Nevertheless, none of these approaches aims to get rid of centralized timestamp allocation. While ViCC mainly targets distributed and parallel databases, it can potentially be applied to multicore platforms as well.
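The BOHM idea of fixing the versions of writes before execution can be illustrated with a toy sketch. All names here (`begin`, `install`, `read`) and the single-threaded structure are hypothetical simplifications of BOHM's actual multi-threaded design; the point is only that version placeholders are inserted into the version chains before the transaction body runs.

```python
import itertools
from collections import defaultdict

_ts = itertools.count(1)                 # logical timestamps, assigned up front
version_chains = defaultdict(list)       # key -> [(ts, value-or-None), ...]

def begin(write_set):
    """Assign a timestamp and pre-insert placeholder versions for all keys the
    transaction will write, before the transaction logic executes."""
    ts = next(_ts)
    for key in write_set:
        version_chains[key].append((ts, None))  # placeholder, filled in later
    return ts

def install(ts, key, value):
    """Fill in the pre-assigned placeholder with the actually written value."""
    chain = version_chains[key]
    chain[chain.index((ts, None))] = (ts, value)

def read(key, ts):
    """SI read rule: return the newest installed version with timestamp < ts."""
    visible = [(t, v) for t, v in version_chains[key] if t < ts and v is not None]
    return max(visible, default=(0, None))[1]

t1 = begin({"x"})
install(t1, "x", 10)
t2 = begin({"x"})          # t2's placeholder already exists in the chain
print(read("x", t2))       # t2 sees t1's committed write
```

Because every reader can locate the exact version it must observe (or wait for) as soon as timestamps are assigned, the execution of transaction bodies can proceed in parallel.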
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_37" ], "mid": [ "1550275036", "2020129682", "2612369081" ], "abstract": [ "Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multi-versioned systems are often significantly outperformed by single-version systems. We propose Bohm, a new concurrency control protocol for main-memory multi-versioned database systems. Bohm guarantees serializable execution while ensuring that reads never block writes. In addition, Bohm does not require reads to perform any bookkeeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. Bohm has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that Bohm performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees.", "Multi-Version Concurrency Control (MVCC) is a widely employed concurrency control mechanism, as it allows for execution modes where readers never block writers. However, most systems implement only snapshot isolation (SI) instead of full serializability. 
Adding serializability guarantees to existing SI implementations tends to be prohibitively expensive. We present a novel MVCC implementation for main-memory database systems that has very little overhead compared to serial execution with single-version concurrency control, even when maintaining serializability guarantees. Updating data in-place and storing versions as before-image deltas in undo buffers not only allows us to retain the high scan performance of single-version systems but also forms the basis of our cheap and fine-grained serializability validation mechanism. The novel idea is based on an adaptation of precision locking and verifies that the (extensional) writes of recently committed transactions do not intersect with the (intensional) read predicate space of a committing transaction. We experimentally show that our MVCC model allows very fast processing of transactions with point accesses as well as read-heavy transactions and that there is little need to prefer SI over full serializability any longer.", "The optimistic variants of Multi-Version Concurrency Control (MVCC) avoid blocking concurrent transactions at the cost of having a validation phase. Upon failure in the validation phase, the transaction is usually aborted and restarted from scratch. The \"abort and restart\" approach becomes a performance bottleneck for use cases with high contention objects or long running transactions. In addition, restarting from scratch creates a negative feedback loop in the system, because the system incurs additional overhead that may create even more conflicts. In this paper, we propose a novel approach for conflict resolution in MVCC for in-memory databases. This low overhead approach summarizes the transaction programs in the form of a dependency graph. The dependency graph also contains the constructs used in the validation phase of the MVCC algorithm. 
Then, when encountering conflicts among transactions, our mechanism quickly detects the conflict locations in the program and partially re-executes the conflicting transactions. This approach maximizes the reuse of the computations done in the initial execution round, and increases the transaction processing throughput." ] }
1704.01355
2963959365
Snapshot Isolation (SI) is a widely adopted concurrency control mechanism in database systems, which utilizes timestamps to resolve conflicts between transactions. However, centralized allocation of timestamps is a potential bottleneck for parallel transaction management. This bottleneck is becoming increasingly visible with the rapidly growing degree of parallelism of today's computing platforms. This paper introduces Posterior Snapshot Isolation (PostSI), an SI mechanism that allows transactions to determine their timestamps autonomously, without relying on centralized coordination. As such, PostSI can scale well, rendering it suitable for various multi-core and MPP platforms. Extensive experiments are conducted to demonstrate its advantage over existing approaches.
Replication is commonly applied to enhance the availability of a database. In @cite_31 , the authors propose Generalized SI, which allows a transaction to push its start time earlier to facilitate concurrency control on replicated data. In @cite_4 , the authors propose Parallel Snapshot Isolation (PSI), a weaker isolation level than SI that allows different nodes to have different commit orderings. Using asynchronous commit orderings, PSI was shown to achieve significant performance improvement. In @cite_33 , an even weaker version of SI called Non-Monotonic SI was proposed for replicated databases. As Non-Monotonic SI further relaxes some constraints of PSI, it outperforms PSI in certain circumstances. Other related work on implementing SI over replicated databases can be found in @cite_7 @cite_38 @cite_42 @cite_18 @cite_23 . In this paper, we do not consider data replication. The issue of data replication is actually orthogonal to that of timestamp allocation. Rather than being our competitors, these approaches are complementary to our work.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_33", "@cite_7", "@cite_42", "@cite_23", "@cite_31" ], "mid": [ "2403712541", "", "1969925795", "2025073323", "2041180570", "2112527958", "", "2131751093" ], "abstract": [ "Many proposals for managing replicated data use sites running the Snapshot Isolation (SI) concurrency control mechanism, and provide 1-copy SI or something similar, as the global isolation level. This allows good scalability, since only ww-conflicts need to be managed globally. However, 1-copy SI can lead to data corruption and violation of integrity constraints [5]. 1-copy serializability is the global correctness condition that prevents data corruption. We propose a new algorithm Replicated Serializable Snapshot Isolation (RSSI) that uses SI at each site, and combines this with a certification algorithm to guarantee 1-copy serializable global execution. Management of ww-conflicts is similar to what is done in 1-copy SI. But unlike previous designs for 1-copy serializable systems, we do not need to prevent all rw-conflicts among concurrent transactions. We formalize this in a theorem that shows that many rw-conflicts are indeed false positives that do not risk non-serializable behavior. Our proposed RSSI algorithm will only abort a transaction when it detects a well-defined pattern of two consecutive rw-edges in the serialization graph. We have built a prototype that integrates our RSSI with the existing open-source PostgresR(SI) system. Our performance evaluation shows that there is a worst-case overhead of about 15 for getting full 1copy serializability as compared to 1-copy SI in a cluster of 8 nodes, with our proposed RSSI clearly outperforming the previous work [6] for update-intensive workloads.", "", "We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). 
PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application.", "Modern cloud systems are geo-replicated to improve application latency and availability. Transactional consistency is essential for application developers; however, the corresponding concurrency control and commitment protocols are costly in a geo-replicated setting. To minimize this cost, we identify the following essential scalability properties: (i) only replicas updated by a transaction T make steps to execute T; (ii) a read-only transaction never waits for concurrent transactions and always commits; (iii) a transaction may read object versions committed after it started; and (iv) two transactions synchronize with each other only if their writes conflict. We present Non-Monotonic Snapshot Isolation (NMSI), the first strong consistency criterion to allow implementations with all four properties. We also present a practical implementation of NMSI called Jessy, which we compare experimentally against a number of well-known criteria. Our measurements show that the latency and throughput of NMSI are comparable to the weakest criterion, read-committed, and between two to fourteen times faster than well-known strong consistencies.", "Snapshot isolation is a popular transactional isolation level in database systems. Several replication techniques based on snapshot isolation have recently been proposed. These proposals, however, do not fully leverage the local concurrency controls that provide snapshot isolation. 
Furthermore, guaranteeing snapshot isolation in lazy replicated systems may result in transaction inversions, which happen when transactions see stale data. Strong snapshot isolation, which is provided in centralized database servers, avoids transaction inversions but is expensive to provide in a lazy replicated system. In this paper, we show how snapshot isolation can be maintained in lazy replicated systems while taking full advantage of the local concurrency controls. We propose strong session snapshot isolation, a correctness criterion that prevents transaction inversions. We show how strong session snapshot isolation can be implemented efficiently in a lazy replicated database system. Through performance studies, we quantify the cost of implementing our techniques in lazy replicated systems.", "Database replication is widely used for fault tolerance and performance. However, it requires replica control to keep data copies consistent despite updates. The traditional correctness criterion for the concurrent execution of transactions in a replicated database is 1-copy-serializability. It is based on serializability, the strongest isolation level in a nonreplicated system. In recent years, however, Snapshot Isolation (SI), a slightly weaker isolation level, has become popular in commercial database systems. There exist already several replica control protocols that provide SI in a replicated system. However, most of the correctness reasoning for these protocols has been rather informal. Additionally, most of the work so far ignores the issue of integrity constraints. In this article, we provide a formal definition of 1-copy-SI using and extending a well-established definition of SI in a nonreplicated system. Our definition considers integrity constraints in a way that conforms to the way integrity constraints are handled in commercial systems. We discuss a set of necessary and sufficient conditions for a replicated history to be producible under 1-copy-SI. 
This makes our formalism a convenient tool to prove the correctness of replica control algorithms.", "", "Generalized snapshot isolation extends snapshot isolation as used in Oracle and other databases in a manner suitable for replicated databases. While (conventional) snapshot isolation requires that transactions observe the \"latest\" snapshot of the database, generalized snapshot isolation allows the use of \"older\" snapshots, facilitating a replicated implementation. We show that many of the desirable properties of snapshot isolation remain. In particular, read-only transactions never block or abort and they do not cause update transactions to block or abort. Moreover, under certain assumptions on the transaction workload the execution is serializable. An implementation of generalized snapshot isolation can choose which past snapshot it uses. An interesting choice for a replicated database is prefix-consistent snapshot isolation, in which the snapshot contains at least all the writes of locally committed transactions. We present two implementations of prefix-consistent snapshot isolation. We conclude with an analytical performance model of one implementation, demonstrating the benefits, in particular reduced latency for read-only transactions, and showing that the potential downsides, in particular change in abort rate of update transactions, are limited." ] }
1704.01508
2607292742
Recent research has shown a substantial active presence of bots in online social networks (OSNs). In this paper we utilise our past work on studying bots (Stweeler) to comparatively analyse the usage and impact of bots and humans on Twitter, one of the largest OSNs in the world. We collect a large-scale Twitter dataset and define various metrics based on tweet metadata. We divide and filter the dataset in four popularity groups in terms of number of followers. Using a human annotation task we assign 'bot' and 'human' ground-truth labels to the dataset, and compare the annotations against an online bot detection tool for evaluation. We then ask a series of questions to discern important behavioural bot and human characteristics using metrics within and among four popularity groups. From the comparative analysis we draw important differences as well as surprising similarities between the two entities, thus paving the way for reliable classification of automated political infiltration, advertisement campaigns, and general bot detection.
Unlike these works, we do not aim to monitor the success of bot infiltration. Rather, we are interested in understanding the behavioural differences between bots and humans. That said, there is work that has inspected bot or human behaviour in isolation. For example, @cite_6 examined the retweet behaviour of people, focussing on how people tweet, as well as why and what people retweet. The authors found that participants retweet using different styles and for diverse reasons (e.g., for others or for social action). This is relevant to our own work, as we also study retweets. In contrast, we directly compare the retweet patterns of bots and humans (rather than just humans).
{ "cite_N": [ "@cite_6" ], "mid": [ "2001653897" ], "abstract": [ "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice." ] }
1704.01502
2952103128
This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.
Dense video captioning has been explored in various works recently @cite_5 @cite_10 @cite_37 @cite_51 @cite_56 . Most of these works @cite_37 @cite_10 @cite_5 focused on generating a long (story-like) caption: they first temporally segmented the video, either with action localization @cite_10 or at different levels of detail @cite_5 , and then generated multiple captions for those segments and connected them with natural language processing techniques. However, these methods only considered temporal segmentation, and ignored frame-level region attention and the motion sequences of region-level objects. Yu et al. @cite_37 considered both temporal and spatial attention, but still ignored the association or alignment between sentences and visual locations. In contrast, this paper exploits both temporal and spatial region information, and further explores the correspondence between sentences and region-sequences for more accurate modeling.
{ "cite_N": [ "@cite_37", "@cite_10", "@cite_56", "@cite_5", "@cite_51" ], "mid": [ "1957740064", "2405676915", "2035434106", "1596841185", "1995820507" ], "abstract": [ "We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.", "Recent advances in image captioning task have led to increasing interests in video captioning task. However, most works on video captioning are focused on generating single input of aggregated features, which hardly deviates from image captioning process and does not fully take advantage of dynamic contents present in videos. We attempt to generate video captions that convey richer contents by temporally segmenting the video with action localization, generating multiple captions from multiple frames, and connecting them with natural language processing techniques, in order to generate a story-like caption. 
We show that our proposed method can generate captions that are richer in contents and can compete with state-of-the-art method without explicitly using video-level features as input.", "This contribution addresses generation of natural language descriptions for human actions and behaviour observed in video streams. The work starts with implementation of conventional image processing techniques to extract high-level features from video. Because human is often the most important and also interesting feature, description focuses on humans and their activities. Although feature extraction processes are erroneous at various levels, we explore approaches to put them together to produce a coherent description. Evaluation is made by calculating the overlap similarity score between human authored and machine generated descriptions.", "Humans can easily describe what they see in a coherent way and at varying level of detail. However, existing approaches for automatic video description focus on generating only single sentences and are not able to vary the descriptions’ level of detail. In this paper, we address both of these limitations: for a variable level of detail we produce coherent multi-sentence descriptions of complex videos. To understand the difference between detailed and short descriptions, we collect and analyze a video description corpus of three levels of detail. We follow a two-step approach where we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from it. For our multi-sentence descriptions we model across-sentence consistency at the level of the SR by enforcing a consistent topic. Human judges rate our descriptions as more readable, correct, and relevant than related work.", "The problem of describing images through natural language has gained importance in the computer vision community. 
Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level." ] }
1704.01502
2952103128
This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.
The lexical based CNN model has great advantages over the ImageNet based CNN model @cite_33 in image/video captioning, since the ImageNet based CNN model only captures a limited number of object concepts, while the lexical based CNN model is able to capture all kinds of semantic concepts (nouns for objects and scenes, adjectives for shapes and attributes, verbs for actions, etc.). It is non-trivial to adapt or fine-tune existing ImageNet CNN models to produce lexical outputs. Previous works @cite_32 @cite_35 @cite_27 @cite_16 @cite_0 have proposed several ways for this purpose. For instance, @cite_32 adopted a weakly supervised multiple instance learning (MIL) approach @cite_22 @cite_19 to train a CNN based word detector without annotations of the image-region-to-word correspondence; and @cite_35 applied a multiple label learning (MLL) method to learn a CNN based mapping between visual inputs and multiple concept tags.
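The noisy-OR aggregation commonly used in such MIL word detectors can be sketched as follows. This is a simplified illustration under the assumption that per-region word probabilities are already available; real detectors compute them from CNN region features.

```python
def noisy_or(region_probs):
    """MIL noisy-OR aggregation for a word detector: the image-level
    probability that a word is present is 1 minus the probability that
    every region independently fails to evidence it. A single confident
    region is enough to fire the word, so no region-word labels are needed."""
    prod = 1.0
    for p in region_probs:
        prod *= (1.0 - p)
    return 1.0 - prod

# One strong region pushes the image-level score high...
print("one strong region:", noisy_or([0.05, 0.9, 0.1]))
# ...while uniformly weak regions keep it low.
print("all weak regions:", noisy_or([0.05, 0.05, 0.05]))
```

This is what makes the supervision weak: only the image-level (or video-level) presence of a word is needed during training, and the gradient flows to whichever regions claim the highest responsibility.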
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_22", "@cite_32", "@cite_0", "@cite_19", "@cite_27", "@cite_16" ], "mid": [ "2952155606", "2952020226", "2154318594", "2949769367", "1969616664", "2166010828", "2463508871", "1893116441" ], "abstract": [ "While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. 
This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.", "Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.", "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. 
Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1 . When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34 of the time.", "We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.", "A good image object detection algorithm is accurate, fast, and does not require exact locations of objects in a training set. We can create such an object detector by taking the architecture of the Viola-Jones detector cascade and training it with a new variant of boosting that we call MIL-Boost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the feature selection criterion of MILBoost to optimize the performance of the Viola-Jones cascade. Experiments show that the detection rate is up to 1.6 times better using MILBoost. 
This increased detection rate shows the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier.", "Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects.", "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] and M-VAD [31] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long Short-Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the setting of movie description. 
In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these classifiers we generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD and M-VAD datasets. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task." ] }
1704.01502
2952103128
This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.
with long short-term memory (LSTM) @cite_8 was initially proposed in the field of machine translation @cite_15 . Venugopalan et al. (S2VT) @cite_7 generalized it to video captioning. Compared with contemporaneous works @cite_25 @cite_55 @cite_54 , which require additional temporal features from 3D ConvNets @cite_11 , S2VT can directly encode the temporal information by applying the LSTM to the frame sequence, and no longer needs the frame-level soft-attention mechanism @cite_25 . This paper adopts the S2VT model @cite_7 with a bi-directional formulation to improve the encoder quality, which shows better performance than the vanilla S2VT model in our experiments.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_55", "@cite_54", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2950019618", "", "2425121537", "1573040851", "2949888546", "2950307714", "2952633803" ], "abstract": [ "Real-world videos often have complex dynamics; and methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem, we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).", "", "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. 
In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.", "Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. 
The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3 and 31.0 in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. 
The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. 
Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
1704.01599
2952785835
Typically, every part in most coherent text has some plausible reason for its presence, some function that it performs to the overall semantics of the text. Rhetorical relations, e.g. contrast, cause, explanation, describe how the parts of a text are linked to each other. Knowledge about this so-called discourse structure has been applied successfully to several natural language processing tasks. This work studies the use of rhetorical relations for Information Retrieval (IR): Is there a correlation between certain rhetorical relations and retrieval performance? Can knowledge about a document's rhetorical relations be useful to IR? We present a language model modification that considers rhetorical relations when estimating the relevance of a document to a query. Empirical evaluation of different versions of our model on TREC settings shows that certain rhetorical relations can benefit retrieval effectiveness notably (> 10% in mean average precision over a state-of-the-art baseline).
Sun & Chai @cite_3 investigate the role of discourse processing and its implications for query expansion for a sequence of questions in scenario-based context question answering (QA). They consider a sequence of questions as a mini discourse. An empirical examination of three discourse-theoretic models indicates that their discourse-based approach can significantly improve QA performance over a baseline of plain reference resolution.
{ "cite_N": [ "@cite_3" ], "mid": [ "2057277029" ], "abstract": [ "Motivated by the recent effort on scenario-based context question answering (QA), this paper investigates the role of discourse processing and its implication on query expansion for a sequence of questions. Our view is that a question sequence is not random, but rather follows a coherent manner to serve some information goals. Therefore, this sequence of questions can be considered as a mini discourse with some characteristics of discourse cohesion. Understanding such a discourse will help QA systems better interpret questions and retrieve answers. Thus, we examine three models driven by Centering Theory for discourse processing: a reference model that resolves pronoun references for each question, a forward model that makes use of the forward looking centers from previous questions, and a transition model that takes into account the transition state between adjacent questions. Our empirical results indicate that more sophisticated processing based on discourse transitions and centers can significantly improve the performance of document retrieval compared to models that only resolve references. This paper provides a systematic evaluation of these models and discusses their potentials and limitations in processing coherent context questions." ] }
1704.01599
2952785835
Typically, every part in most coherent text has some plausible reason for its presence, some function that it performs to the overall semantics of the text. Rhetorical relations, e.g. contrast, cause, explanation, describe how the parts of a text are linked to each other. Knowledge about this so-called discourse structure has been applied successfully to several natural language processing tasks. This work studies the use of rhetorical relations for Information Retrieval (IR): Is there a correlation between certain rhetorical relations and retrieval performance? Can knowledge about a document's rhetorical relations be useful to IR? We present a language model modification that considers rhetorical relations when estimating the relevance of a document to a query. Empirical evaluation of different versions of our model on TREC settings shows that certain rhetorical relations can benefit retrieval effectiveness notably (> 10% in mean average precision over a state-of-the-art baseline).
In the area of text compression, @cite_12 study the usefulness of rhetorical relations between sentences for summarisation. They find that most of the significant rhetorical relations are associated with non-discriminative sentences, i.e. sentences that are not useful for summarisation. They report that rhetorical relations that may be intuitively perceived as highly salient do not provide strong indicators of informativeness; instead, the usefulness of rhetorical relations lies in providing constraints for navigating through the text's structure. These findings are compatible with the study of Clarke & Lapata @cite_17 into constraining text compression on the basis of rhetorical relations. For a more in-depth look at the impact of individual rhetorical relations on summarisation, see Teufel & Moens @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_12", "@cite_17" ], "mid": [ "", "2120136138", "2026191715" ], "abstract": [ "", "We present analyses aimed at eliciting which specific aspects of discourse provide the strongest indication for text importance. In the context of content selection for single document summarization of news, we examine the benefits of both the graph structure of text provided by discourse relations and the semantic sense of these relations. We find that structure information is the most robust indicator of importance. Semantic sense only provides constraints on content selection but is not indicative of important content by itself. However, sense features complement structure information and lead to improved performance. Further, both types of discourse information prove complementary to non-discourse features. While our results establish the usefulness of discourse features, we also find that lexical overlap provides a simple and cheap alternative to discourse for computing text structure with comparable performance for the task of content selection.", "Sentence compression holds promise for many applications ranging from summarization to subtitle generation. The task is typically performed on isolated sentences without taking the surrounding context into account, even though most applications would operate over entire documents. In this article we present a discourse-informed model which is capable of producing document compressions that are coherent and informative. Our model is inspired by theories of local coherence and formulated within the framework of integer linear programming. Experimental results show significant improvements over a state-of-the-art discourse agnostic approach." ] }
1704.01599
2952785835
Typically, every part in most coherent text has some plausible reason for its presence, some function that it performs to the overall semantics of the text. Rhetorical relations, e.g. contrast, cause, explanation, describe how the parts of a text are linked to each other. Knowledge about this so-called discourse structure has been applied successfully to several natural language processing tasks. This work studies the use of rhetorical relations for Information Retrieval (IR): Is there a correlation between certain rhetorical relations and retrieval performance? Can knowledge about a document's rhetorical relations be useful to IR? We present a language model modification that considers rhetorical relations when estimating the relevance of a document to a query. Empirical evaluation of different versions of our model on TREC settings shows that certain rhetorical relations can benefit retrieval effectiveness notably (> 10% in mean average precision over a state-of-the-art baseline).
Closer to our work, @cite_13 extend an IR ranking model by adding a re-ranking strategy based on document discourse. Specifically, their re-ranking formula consists of the original retrieval status value computed with the BM11 model, which is then multiplied by a function that linearly combines inverse document frequency and term distance for each query term within a discourse unit. They focus on one discourse type only (advantage-disadvantage), which they identify manually in queries, and show that their approach improves retrieval performance for these queries. Our work differs on several points. We use an automatic (not manual) discourse parser to identify rhetorical relations in the documents to be retrieved (not in the queries). We consider 15 rhetorical relations (not 1), and we study their impact on retrieval performance using a modification of the IR language model.
{ "cite_N": [ "@cite_13" ], "mid": [ "2585363530" ], "abstract": [ "In ad hoc information retrieval (IR), some information need (e.g., find the advantages and disadvantages of smoking) requires the explicit identification of information related to the discourse type (e.g., advantages disadvantages) as well as to the topic (e.g., smoking). Such information need is not uncommon and may not be satisfied by using conventional retrieval methods. We extend existing retrieval models by adding a re-ranking strategy based on a novel graph-based retrieval model using document contexts that are called information units (IU). For evaluation, we focused on a discourse type that appeared in a subset of TREC topics where the retrieval effectiveness achieved by our conventional retrieval models for those topics was low. We showed that our approach is able to enhance the retrieval effectiveness for the selected TREC topics. This shows that our preliminary investigation is promising and deserves further investigation." ] }
1704.01262
2952645648
In today's age of the internet and social media, one can find an enormous volume of forged images on-line. These images have been used in the past to convey falsified information and achieve harmful intentions. The spread and the effect of social media only make this problem more severe. While creating forged images has become easier due to software advancements, there is no automated algorithm which can reliably detect forgery. Image forgery detection can be seen as a subset of the image understanding problem. Human performance is still the gold standard for these types of problems when compared to existing state-of-the-art automated algorithms. We conduct a subjective evaluation test with the aid of an eye-tracker to investigate the human factors associated with this problem. We compare the performance of an automated algorithm and humans for the forgery detection problem. We also develop an algorithm which uses the data from the evaluation test to predict the difficulty level of an image (the difficulty level of an image here denotes how difficult it is for humans to detect forgery in an image; terms such as "Easy/difficult image" will be used in the same context). The experimental results presented in this paper should facilitate the development of better algorithms in the future.
Recent years have seen active research in this area. Copy-move forgery (CMF) is one of the most common methods of forgery in digital images. SURF features and textural descriptors were used to detect CMF @cite_5 @cite_13 . Forgery in JPEG images is detected by analysing the DCT coefficients, as a forged image is most likely to have been compressed twice @cite_3 . Another class of approaches uses high-level information in an image, such as shadows @cite_12 , the light environment @cite_9 , etc. Approaches depending solely on image statistics are image-format independent but more computationally complex. The Hilbert-Huang transform and the Markov transition matrix of block DCT coefficients were proposed in @cite_6 and @cite_7 , respectively. We refer the reader to @cite_2 and @cite_1 for an extensive review of forgery detection approaches.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "", "2170464076", "2144113691", "", "1544446329", "", "2154651279", "2020132033", "2140117117" ], "abstract": [ "", "We present a theoretical analysis of the relationship between incoming radiance and irradiance. Specifically, we address the question of whether it is possible to compute the incident radiance from knowledge of the irradiance at all surface orientations. This is a fundamental question in computer vision and inverse radiative transfer. We show that the irradiance can be viewed as a simple convolution of the incident illumination, i.e., radiance and a clamped cosine transfer function. Estimating the radiance can then be seen as a deconvolution operation. We derive a simple closed-form formula for the irradiance in terms of spherical harmonic coefficients of the incident illumination and demonstrate that the odd-order modes of the lighting with order greater than 1 are completely annihilated. Therefore these components cannot be estimated from the irradiance, contradicting a theorem that is due to Preisendorfer. A practical realization of the radiance-from-irradiance problem is the estimation of the lighting from images of a homogeneous convex curved Lambertian surface of known geometry under distant illumination, since a Lambertian object reflects light equally in all directions proportional to the irradiance. We briefly discuss practical and physical considerations and describe a simple experimental test to verify our theoretical results.", "Verifying the integrity of digital images and detecting the traces of tampering without using any protecting pre-extracted or pre-embedded information have become an important and hot research field. 
The popularity of this field and the rapid growth in papers published during the last years have put considerable need on creating a complete bibliography addressing published papers in this area. In this paper, an extensive list of blind methods for detecting image forgery is presented. By the word blind we refer to those methods that use only the image function. An attempt has been made to make this paper complete by listing most of the existing references and by providing a detailed classification group.", "", "Image splicing is a commonly used technique in image tampering. This paper presents a novel approach to passive detection of image splicing. In the proposed scheme, the image splicing detection problem is tackled as a twoclass classification problem under the pattern recognition framework. Considering the high non-linearity and non-stationarity nature of image splicing operation, a recently developed Hilbert-Huang transform (HHT) is utilized to generate features for classification. Furthermore, a well established statistical natural image model based on moments of characteristic functions with wavelet decomposition is employed to distinguish the spliced images from the authentic images. We use support vector machine (SVM) as the classifier. The initial experimental results demonstrate that the proposed scheme outperforms the prior arts.", "", "As the advent and growing popularity of image editing software, digital images can be manipulated easily without leaving obvious visual clues. If the tampered images are abused, it may lead to potential social, legal or private consequences. To this end, it’s very necessary and also challenging to find effective methods to detect digital image forgeries. In this paper, a fast method to detect image copy-move forgery is proposed based on the SURF (Speed up Robust Features) descriptors, which are invariant to rotation, scaling etc. 
Results of experiments indicate that the proposed method is valid in detecting the image region duplication and quite robust to additive noise and blurring.", "Copy-move forgery is one of the most common type of tampering in digital images. Copy-moves are parts of the image that are copied and pasted onto another part of the same image. Detection methods in general use block-matching methods, which first divide the image into overlapping blocks and then extract features from each block, assuming similar blocks will yield similar features. In this paper we present a block-based approach which exploits texture as feature to be extracted from blocks. Our goal is to study if texture is well suited for the specific application, and to compare performance of several texture descriptors. Tests have been made on both uncompressed and JPEG compressed images.", "We describe a geometric technique to detect physically inconsistent arrangements of shadows in an image. This technique combines multiple constraints from cast and attached shadows to constrain the projected location of a point light source. The consistency of the shadows is posed as a linear programming problem. A feasible solution indicates that the collection of shadows is physically plausible, while a failure to find a solution provides evidence of photo tampering." ] }
1704.01262
2952645648
In today's age of the internet and social media, one can find an enormous volume of forged images online. These images have been used in the past to convey falsified information and achieve harmful ends. The spread and the influence of social media only make this problem more severe. While creating forged images has become easier due to software advancements, there is no automated algorithm which can reliably detect forgery. Image forgery detection can be seen as a subset of the image understanding problem. Human performance is still the gold standard for these types of problems when compared to existing state-of-the-art automated algorithms. We conduct a subjective evaluation test with the aid of an eye-tracker to investigate the human factors associated with this problem. We compare the performance of an automated algorithm and humans on the forgery detection problem. We also develop an algorithm which uses the data from the evaluation test to predict the difficulty level of an image (the difficulty level of an image here denotes how difficult it is for humans to detect forgery in the image; terms such as "easy/difficult image" will be used in the same context). The experimental results presented in this paper should facilitate the development of better algorithms in the future.
This paper focuses on human performance evaluation of forgery detection. To the best of our knowledge, this work is the first of its kind. Human performance evaluation studies have improved the performance of object detectors and annotation predictors @cite_8 . Eye-tracking has also been used to study the behavioral aspects of radiologists' performance @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2070182363", "2100379672" ], "abstract": [ "Rationale and Objectives. I examined whether the principles of search, detection, and decision making described for pulmonary nodule detection can be applied to lesion detection in mammographic images. Methods. The eye position of six radiologists (three staff mammographers and three radiology residents) was recorded as they searched mammograms for masses and microcalcifications. Results. True- and false-positive decisions were associated with prolonged gaze durations; false-negative decisions were associated with longer gaze durations than true-negatives. Readers with more experience tended to detect lesions earlier in the search than did readers with less experience, but those with less experience tended to spend more time overall searching the images and cover more image area than did those with more experience. Conclusion. Gaze duration is a useful predictor of missed lesions in mammography, making gaze duration a potential tool for perceptual feedback. Mammographic search for readers with different degrees of experience can be characterized by gaze durations, scan paths, and detection times.", "We posit that user behavior during natural viewing of images contains an abundance of information about the content of images as well as information related to user intent and user-defined content importance. In this paper, we conduct experiments to better understand the relationship between images, the eye movements people make while viewing images, and how people construct natural language to describe images. We explore these relationships in the context of two commonly used computer vision datasets. We then further relate human cues with outputs of current visual recognition systems and demonstrate prototype applications for gaze-enabled detection and annotation." ] }
1704.00941
2605706831
Because of the significant increase in the size and complexity of networks, the distributed computation of eigenvalues and eigenvectors of graph matrices has become very challenging, and yet it remains as important as before. In this paper we develop efficient distributed algorithms to detect, with higher resolution, closely situated eigenvalues and the corresponding eigenvectors of symmetric graph matrices. We model graph spectral computation as a physical system with Lagrangian and Hamiltonian dynamics. The spectrum of the Laplacian matrix, in particular, is framed as a classical spring-mass system with Lagrangian dynamics. The spectrum of any general symmetric graph matrix turns out to have a simple connection with quantum systems, and it can thus be formulated as the solution to a Schrödinger-type differential equation. Taking into account the higher resolution required in the spectrum computation and the related stability issues in the numerical solution of the underlying differential equation, we propose the application of symplectic integrators to the calculation of the eigenspectrum. The effectiveness of the proposed techniques is demonstrated with numerical simulations on real-world networks of different sizes and complexities.
The general idea of using mechanical oscillatory behaviour for the detection of eigenvalues has appeared in a few previous works, see e.g., @cite_17 @cite_7 . Though the technique in @cite_17 is close to ours, our methods differ by focussing on a Schrödinger-type equation and numerical integrators specific to it. Moreover, we demonstrate the efficiency and stability of the methods on real-world networks of varying sizes, in contrast to the small synthetic network considered in @cite_17 , and our methods can be used to estimate eigenvectors as well.
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "2963662649", "2150171340" ], "abstract": [ "We propose a novel distributed algorithm to cluster graphs. The algorithm recovers the solution obtained from spectral clustering without the need for expensive eigenvalue/eigenvector computations. We prove that, by propagating waves through the graph, a local fast Fourier transform yields the local component of every eigenvector of the Laplacian matrix, thus providing clustering information. For large graphs, the proposed algorithm is orders of magnitude faster than random-walk-based approaches. We prove the equivalence of the proposed algorithm to spectral clustering and derive convergence rates. We demonstrate the benefit of using this decentralized clustering algorithm for community detection in social graphs, accelerating distributed estimation in sensor networks and efficient computation of distributed multi-agent search strategies.", "In this paper, we present a decentralized algorithm to estimate the eigenvalues of the Laplacian matrix that encodes the network topology of a multi-agent system. We consider network topologies modeled by undirected graphs. The basic idea is to provide a local interaction rule among agents so that their state trajectory is a linear combination of sinusoids oscillating only at frequencies that are functions of the eigenvalues of the Laplacian matrix. In this way, the problem of decentralized estimation of the eigenvalues is mapped into a standard signal processing problem in which the unknowns are the finite number of frequencies at which the signal oscillates." ] }
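The mechanism invoked in the abstracts above (node trajectories that are linear combinations of sinusoids at frequencies determined by the Laplacian eigenvalues) can be sketched on a single machine. The code below is an illustrative toy, not the cited algorithms: it assumes the leapfrog scheme as a concrete symplectic integrator, an impulse at node 0 as the initial condition, and a plain FFT peak search, whereas the surveyed methods run in a distributed fashion with more refined numerics.

```python
import numpy as np

def laplacian_eigs_by_oscillation(L, dt=0.05, steps=16384):
    """Estimate the nonzero eigenvalues of a graph Laplacian L from the
    oscillation frequencies of the spring-mass system x'' = -L x.

    Each node's trajectory is a sum of sinusoids with angular frequencies
    sqrt(lambda_k); a leapfrog (symplectic) integrator keeps these
    oscillations stable over long runs, and an FFT of one node's
    trajectory reveals the frequencies, hence lambda = omega^2.
    """
    n = L.shape[0]
    x = np.zeros(n)
    x[0] = 1.0          # impulse excites every mode observable at node 0
    v = np.zeros(n)
    traj = np.empty(steps)
    a = -L @ x
    for k in range(steps):
        traj[k] = x[0]
        v += 0.5 * dt * a      # leapfrog: half kick
        x += dt * v            #           drift
        a = -L @ x
        v += 0.5 * dt * a      #           half kick
    spec = np.abs(np.fft.rfft(traj * np.hanning(steps)))
    freqs = np.fft.rfftfreq(steps, d=dt)
    # local maxima well above the leakage floor mark the mode frequencies
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i - 1] < spec[i] > spec[i + 1]
             and spec[i] > 0.05 * spec.max()]
    return sorted(float((2 * np.pi * freqs[i]) ** 2) for i in peaks)
```

Because leapfrog is symplectic, the oscillation energy does not drift over the long integration window needed for fine frequency resolution; with a non-symplectic scheme the spectral peaks would smear or decay, which is the stability concern the paper raises.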
1704.00941
2605706831
Because of the significant increase in the size and complexity of networks, the distributed computation of eigenvalues and eigenvectors of graph matrices has become very challenging, and yet it remains as important as before. In this paper we develop efficient distributed algorithms to detect, with higher resolution, closely situated eigenvalues and the corresponding eigenvectors of symmetric graph matrices. We model graph spectral computation as a physical system with Lagrangian and Hamiltonian dynamics. The spectrum of the Laplacian matrix, in particular, is framed as a classical spring-mass system with Lagrangian dynamics. The spectrum of any general symmetric graph matrix turns out to have a simple connection with quantum systems, and it can thus be formulated as the solution to a Schrödinger-type differential equation. Taking into account the higher resolution required in the spectrum computation and the related stability issues in the numerical solution of the underlying differential equation, we propose the application of symplectic integrators to the calculation of the eigenspectrum. The effectiveness of the proposed techniques is demonstrated with numerical simulations on real-world networks of different sizes and complexities.
In comparison to @cite_7 we do not deform the system, and we use new symplectic numerical integrators @cite_1 @cite_12 . For the problem of distributed spectral decomposition in networks, one of the first and most prominent works appeared in @cite_5 . However, their algorithm requires distributed orthonormalization at each step, a difficult operation which they implement via random walks. If the graph is not well connected (i.e., has low conductance), this task will take a very long time to converge. Our distributed implementation, based on fluid diffusion in the network, does not require such orthonormalization.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_12", "@cite_7" ], "mid": [ "2158886595", "1976957036", "2124605172", "2963662649" ], "abstract": [ "In many large network settings, such as computer networks, social networks, or hyperlinked text documents, much information can be obtained from the network's spectral properties. However, traditional centralized approaches for computing eigenvectors struggle with at least two obstacles: the data may be difficult to obtain (both due to technical reasons and because of privacy concerns), and the sheer size of the networks makes the computation expensive. A decentralized, distributed algorithm addresses both of these obstacles: it utilizes the computational power of all nodes in the network and their ability to communicate, thus speeding up the computation with the network size. And as each node knows its incident edges, the data collection problem is avoided as well. Our main result is a simple decentralized algorithm for computing the top k eigenvectors of a symmetric weighted adjacency matrix, and a proof that it converges essentially in O(τ_MIX log² n) rounds of communication and computation, where τ_MIX is the mixing time of a random walk on the network. An additional contribution of our work is a decentralized way of actually detecting convergence, and diagnosing the current error. Our protocol scales well, in that the amount of computation performed at any node in any one round, and the sizes of messages sent, depend polynomially on k, but not on the (typically much larger) number n of nodes.", "We present a family of symplectic splitting methods especially tailored to solve numerically the time-dependent Schrödinger equation. When discretized in time, this equation can be recast in the form of a classical Hamiltonian system with a Hamiltonian function corresponding to a generalized high-dimensional separable harmonic oscillator.
The structure of the system allows us to build highly efficient symplectic integrators at any order. The new methods are accurate, easy to implement, and very stable in comparison with other standard symplectic integrators.", "We provide a comprehensive survey of splitting and composition methods for the numerical integration of ordinary differential equations (ODEs). Splitting methods constitute an appropriate choice when the vector field associated with the ODE can be decomposed into several pieces and each of them is integrable. This class of integrators is explicit, simple to implement, and preserves structural properties of the system. In consequence, they are especially useful in geometric numerical integration. In addition, the numerical solution obtained by splitting schemes can be seen as the exact solution to a perturbed system of ODEs possessing the same geometric properties as the original system. This backward error interpretation has direct implications for the qualitative behavior of the numerical solution as well as for the error propagation along time. Closely connected with splitting integrators are composition methods. We analyze the order conditions required by a method to achieve a given order and summarize the different families of schemes one can find in the literature. Finally, we illustrate the main features of splitting and composition methods on several numerical examples arising from applications.", "We propose a novel distributed algorithm to cluster graphs. The algorithm recovers the solution obtained from spectral clustering without the need for expensive eigenvalue/eigenvector computations. We prove that, by propagating waves through the graph, a local fast Fourier transform yields the local component of every eigenvector of the Laplacian matrix, thus providing clustering information. For large graphs, the proposed algorithm is orders of magnitude faster than random-walk-based approaches.
We prove the equivalence of the proposed algorithm to spectral clustering and derive convergence rates. We demonstrate the benefit of using this decentralized clustering algorithm for community detection in social graphs, accelerating distributed estimation in sensor networks and efficient computation of distributed multi-agent search strategies." ] }
1704.00939
2607065675
In this paper, we describe a methodology to infer Bullish or Bearish sentiment towards companies' brands. More specifically, our approach leverages affective lexica and word embeddings in combination with convolutional neural networks to infer the sentiment of financial news headlines towards a target company. This architecture was used and evaluated in the context of the SemEval 2017 challenge (task 5, subtask 2), in which it obtained the best performance.
While image and sound come with a natural high-dimensional embedding, the issue of finding a suitable representation is still an open research problem in the context of natural language and text. It is beyond the scope of this paper to give a thorough overview of word representations; for this we refer the interested reader to the excellent review provided by @cite_8 . Here, we will just introduce the main representations that are related to the proposed method.
{ "cite_N": [ "@cite_8" ], "mid": [ "2543875770" ], "abstract": [ "This paper has two parts. In the first part we discuss word embeddings. We discuss the need for them, some of the methods to create them, and some of their interesting properties. We also compare them to image embeddings and see how word embeddings and image embeddings can be combined to perform different tasks. In the second part we implement a convolutional neural network trained on top of pre-trained word vectors. The network is used for several sentence-level classification tasks, and achieves state-of-the-art (or comparable) results, demonstrating the great power of pre-trained word embeddings over random ones." ] }
1704.00939
2607065675
In this paper, we describe a methodology to infer Bullish or Bearish sentiment towards companies' brands. More specifically, our approach leverages affective lexica and word embeddings in combination with convolutional neural networks to infer the sentiment of financial news headlines towards a target company. This architecture was used and evaluated in the context of the SemEval 2017 challenge (task 5, subtask 2), in which it obtained the best performance.
In the seminal paper @cite_12 , the authors introduce a statistical language model computed in an unsupervised training context using shallow neural networks. The goal was to predict the next word given the previous context in the sentence, showing a major advance with respect to n-grams. Collobert @cite_23 empirically proved the usefulness of unsupervised word representations for a variety of different NLP tasks and set the neural network architecture underlying many current approaches. Mikolov @cite_21 proposed a simplified model (word2vec) that allows training on larger corpora, and showed how semantic relationships emerge from this training. Pennington @cite_25 , with the GloVe approach, maintains the semantic capacity of word2vec while introducing statistical information from latent semantic analysis (LSA), showing improvements on semantic and syntactic tasks.
{ "cite_N": [ "@cite_21", "@cite_23", "@cite_25", "@cite_12" ], "mid": [ "1614298861", "2952230511", "2250539671", "2132339004" ], "abstract": [ "", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language.
This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach makes it possible to take advantage of longer contexts." ] }
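The claim above that semantic relationships emerge from word2vec/GloVe training is usually demonstrated with vector arithmetic. The snippet below illustrates the arithmetic only, on hand-made 3-dimensional vectors: the dimensions and values are invented for this sketch, whereas real embeddings are learned from large corpora and live in hundreds of dimensions.

```python
import numpy as np

def nearest(word_vecs, query, exclude=()):
    """Return the vocabulary word whose vector is most cosine-similar to query."""
    best, best_sim = None, -1.0
    for word, vec in word_vecs.items():
        if word in exclude:
            continue
        sim = float(vec @ query / (np.linalg.norm(vec) * np.linalg.norm(query)))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Toy "embeddings" with interpretable axes (royalty, maleness, femaleness),
# constructed only to make the analogy arithmetic visible.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.0, 0.1, 0.1]),
}
# king - man + woman should land closest to queen
analogy = vecs["king"] - vecs["man"] + vecs["woman"]
```

Excluding the three query words from the search is the standard convention in analogy evaluations, since the unmodified query vectors are often their own nearest neighbours.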
1704.00939
2607065675
In this paper, we describe a methodology to infer Bullish or Bearish sentiment towards companies' brands. More specifically, our approach leverages affective lexica and word embeddings in combination with convolutional neural networks to infer the sentiment of financial news headlines towards a target company. This architecture was used and evaluated in the context of the SemEval 2017 challenge (task 5, subtask 2), in which it obtained the best performance.
There is usually a trade-off between coverage (the number of entries) and precision (the accuracy of the sentiment information). For instance, regarding sentiment lexica, @cite_7 and @cite_1 associate each entry with numerical scores ranging from 0 (negative) to 1 (positive); following this approach, it has been possible to automatically obtain a list of 155k words, compensating for low precision with high coverage @cite_11 . At the other end of the spectrum, we have methods such as @cite_3 , @cite_27 , @cite_10 with low coverage (from 1k to 14k words), but for which the precision is maximized. These scores were manually assigned by multiple annotators, and in some cases validated by crowdsourcing @cite_27 .
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_3", "@cite_27", "@cite_10", "@cite_11" ], "mid": [ "38739846", "193524605", "2151543699", "2084046180", "", "1904892614" ], "abstract": [ "Opinion mining (OM) is a recent subdiscipline at the crossroads of information retrieval and computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. OM has a rich set of applications, ranging from tracking users’ opinions about products or about political candidates as expressed in online forums, to customer relationship management. In order to aid the extraction of opinions from text, recent research has tried to automatically determine the “PN-polarity” of subjective terms, i.e. identify whether a term that is a marker of opinionated content has a positive or a negative connotation. Research on determining whether a term is indeed a marker of opinionated content (a subjective term) or not (an objective term) has been, instead, much more scarce. In this work we describe SENTIWORDNET, a lexical resource in which each WORDNET synset s is associated to three numerical scores Obj(s), Pos(s) and Neg(s), describing how objective, positive, and negative the terms contained in the synset are. The method used to develop SENTIWORDNET is based on the quantitative analysis of the glosses associated to synsets, and on the use of the resulting vectorial term representations for semi-supervised synset classification. The three scores are derived by combining the results produced by a committee of eight ternary classifiers, all characterized by similar accuracy levels but different classification behaviour. SENTIWORDNET is freely available for research purposes, and is endowed with a Web-based graphical user interface.", "In this work we present SENTIWORDNET 3.0, a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications.
SENTIWORDNET 3.0 is an improved version of SENTIWORDNET 1.0, a lexical resource publicly available for research purposes, now currently licensed to more than 300 research groups and used in a variety of research projects worldwide. Both SENTIWORDNET 1.0 and 3.0 are the result of automatically annotating all WORDNET synsets according to their degrees of positivity, negativity, and neutrality. SENTIWORDNET 1.0 and 3.0 differ (a) in the versions of WORDNET which they annotate (WORDNET 2.0 and 3.0, respectively), (b) in the algorithm used for automatically annotating WORDNET, which now includes (additionally to the previous semi-supervised learning step) a random-walk step for refining the scores. We here discuss SENTIWORDNET 3.0, especially focussing on the improvements concerning aspect (b) that it embodies with respect to version 1.0. We also report the results of evaluating SENTIWORDNET 3.0 against a fragment of WORDNET 3.0 manually annotated for positivity, negativity, and neutrality; these results indicate accuracy improvements of about 20% with respect to SENTIWORDNET 1.0.", "", "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "", "Deriving prior polarity lexica for sentiment analysis – where positive or negative scores are associated with words out of context – is a challenging task.
Usually, a trade-off between precision and coverage is hard to find, and it depends on the methodology used to build the lexicon. Manually annotated lexica provide a high precision but lack in coverage, whereas automatic derivation from pre-existing knowledge guarantees high coverage at the cost of a lower precision. Since the automatic derivation of prior polarities is less time-consuming than manual annotation, there has been a great bloom of these approaches, in particular based on the SentiWordNet resource. In this paper, we compare the most frequently used techniques based on SentiWordNet with newer ones and blend them in a learning framework (a so-called ‘ensemble method’). By taking advantage of manually built prior polarity lexica, our ensemble method is better able to predict the prior value of unseen words and to outperform all the other SentiWordNet approaches. Using this technique we have built SentiWords, a prior polarity lexicon of approximately 155,000 words, that has both a high precision and a high coverage. We finally show that in sentiment analysis tasks, using our lexicon allows us to outperform both the single metrics derived from SentiWordNet and popular manually annotated sentiment lexica.
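A minimal illustration of how such prior-polarity lexica are consumed, and of the coverage/precision trade-off discussed above: a scorer that averages per-word scores in [0, 1] and also reports what fraction of the text the lexicon covers. The lexicon entries and scores below are invented for the example, not taken from SentiWordNet, SentiWords, or any real resource.

```python
# Toy prior-polarity lexicon with scores in [0, 1]
# (0 = fully negative, 1 = fully positive), mimicking the style of the
# surveyed resources. All entries are made up for illustration.
LEXICON = {
    "gain": 0.9, "profit": 0.85, "growth": 0.8,
    "loss": 0.1, "decline": 0.15, "fraud": 0.05,
}

def polarity(text, lexicon=LEXICON, neutral=0.5):
    """Average the prior polarities of covered words; also report coverage."""
    words = text.lower().split()
    scores = [lexicon[w] for w in words if w in lexicon]
    coverage = len(scores) / len(words) if words else 0.0
    score = sum(scores) / len(scores) if scores else neutral
    return score, coverage
```

On a headline such as "quarterly profit growth beats forecasts", only two of the five words are covered (coverage 0.4): a small high-precision lexicon leaves many words unscored, which is exactly the trade-off the surveyed resources negotiate.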
1704.00939
2607065675
In this paper, we describe a methodology to infer Bullish or Bearish sentiment towards companies' brands. More specifically, our approach leverages affective lexica and word embeddings in combination with convolutional neural networks to infer the sentiment of financial news headlines towards a target company. This architecture was used and evaluated in the context of the SemEval 2017 challenge (task 5, subtask 2), in which it obtained the best performance.
@cite_0 contains 10k words taken from ConceptNet and aligned with WordNetAffect, extending the latter to concepts like 'have breakfast'. @cite_4 contains roughly 4k lemma#PoS entries manually annotated by one linguist using 80 emotion labels. @cite_22 contains almost 10k lemmas annotated with an intensity label for each emotion using Mechanical Turk. Finally, an extension of @cite_2 contains 2.5k words in the form lemma#PoS. The latter is the only lexicon that provides words annotated with emotion scores rather than only with labels.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_22", "@cite_2" ], "mid": [ "", "2141243797", "2040467972", "1595791987" ], "abstract": [ "", "We propose a convenient fusion of natural-language processing and fuzzy logic techniques for analyzing affect content in free text; our main goals are fast analysis and visualization of affect content for decision-making. The primary linguistic resource for fuzzy semantic typing is the fuzzy affect lexicon, from which other important resources are generated, notably the fuzzy thesaurus and affect category groups. Free text is tagged with affect categories from the lexicon, and the affect categories' centralities and intensities are combined using techniques from fuzzy logic to produce affect sets: fuzzy sets that represent the affect quality of a document. We show different aspects of affect analysis using news stories and movie reviews. Our experiments show a very good correspondence of affect sets with human judgments of affect content. We ascribe this to the effective representation of ambiguity in our fuzzy affect lexicon, and the ability of fuzzy logic to deal successfully with the ambiguity of words in natural language. Planned extensions of the system include personalized profiles for Web-based content dissemination, fuzzy retrieval, clustering and classification.", "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them.
Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help to identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help to obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher interannotator agreement than that obtained by asking if a term evokes an emotion.", "In this paper, we address the tasks of recognition and interpretation of affect communicated through text messaging. The evolving nature of language in online conversations is a main issue in affect sensing from this media type, since sentence parsing might fail during syntactic structure analysis. The developed Affect Analysis Model was designed to handle not only correctly written text, but also informal messages written in an abbreviated or expressive manner. The proposed rule-based approach processes each sentence in sequential stages, including symbolic cue processing, detection and transformation of abbreviations, sentence parsing, and word-, phrase-, and sentence-level analyses. In a study based on 160 sentences, the system's result agrees with at least two out of three human annotators in 70% of the cases. In order to reflect the detected affective information and social behaviour, an avatar was created." ] }