aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1608.07728 | 2510078559 | In this paper, we derive key-rate expressions for different quantum key distribution protocols. Our key-rate equations utilize multiple channel statistics, including those gathered from mismatched measurement bases - i.e., when Alice and Bob choose incompatible bases. In particular, we will consider an Extended B92 and a two-way semi-quantum protocol. For both these protocols, we demonstrate that their tolerance to noise is higher than previously thought - in fact, we will show the semi-quantum protocol can actually tolerate the same noise level as the fully quantum BB84 protocol. Along the way, we will also consider an optimal QKD protocol for various quantum channels. Finally, all the key-rate expressions which we derive in this paper are applicable to any arbitrary, not necessarily symmetric, quantum channel. | In this paper, building on our conference paper @cite_23 (where we only considered three states for parameter estimation), we will apply mismatched measurements to non-BB84 style protocols and to protocols relying on two-way quantum channels. After an introduction to our notation, we will first explain the parameter estimation method and our technique. We will then apply it to the Extended B92 protocol @cite_6 and derive an improved key-rate bound for it. We will then use our method to consider an "optimal" QKD protocol. Finally, we will analyze a multi-state semi-quantum protocol from @cite_15 which relies on a two-way quantum channel. This new proof of security will derive a far more optimistic bound on the key rate expression than the one previously constructed in @cite_8 (the latter did not use mismatched measurement bases). | {
"cite_N": [
"@cite_15",
"@cite_6",
"@cite_23",
"@cite_8"
],
"mid": [
"1988306297",
"2091539141",
"2950750721",
"1677427585"
],
"abstract": [
"Secure key distribution among two remote parties is impossible when both are classical, unless some unproven (and arguably unrealistic) computation-complexity assumptions are made, such as the difficulty of factorizing large numbers. On the other hand, a secure key distribution is possible when both parties are quantum. What is possible when only one party (Alice) is quantum, yet the other (Bob) has only classical capabilities? We present two protocols with this constraint, and prove their robustness against attacks: we prove that any attempt of an adversary to obtain information (and even a tiny amount of information) necessarily induces some errors that the legitimate users could notice.",
"We introduce a novel form of decoy-state technique to make the single-photon Bennett 1992 protocol robust against losses and noise of a communication channel. Two uninformative states are prepared by the transmitter in order to prevent the unambiguous state discrimination attack and improve the phase-error rate estimation. The presented method does not require strong reference pulses, additional electronics or extra detectors for its implementation.",
"In this paper we consider a three-state variant of the BB84 quantum key distribution (QKD) protocol. We derive a new lower-bound on the key rate of this protocol in the asymptotic scenario and use mismatched measurement outcomes to improve the channel estimation. Our new key rate bound remains positive up to an error rate of @math , exactly that achieved by the four-state BB84 protocol.",
"Semi-quantum key distribution protocols are designed to allow two users to establish a secure secret key when one of the two users is limited to performing certain “classical” operations. There have been several such protocols developed recently, however, due to their reliance on a two-way quantum communication channel (and thus, the attacker's opportunity to interact with the qubit twice), their security analysis is difficult and little is known concerning how secure they are compared to their fully quantum counterparts. In this paper we prove the unconditional security of a particular semi-quantum protocol and derive an expression for its key rate, in the asymptotic scenario."
]
} |
1608.07411 | 2508491788 | Volume-based reconstruction is usually expensive both in terms of memory consumption and runtime. Especially for sparse geometric structures, volumetric representations produce a huge computational overhead. We present an efficient way to fuse range data via a variational Octree-based minimization approach by taking the actual range data geometry into account. We transform the data into Octree-based truncated signed distance fields and show how the optimization can be conducted on the newly created structures. The main challenge is to uphold speed and a low memory footprint without sacrificing the solutions' accuracy during optimization. We explain how to dynamically adjust the optimizer's geometric structure via joining/splitting of Octree nodes and how to define the operators. We evaluate on various datasets and outline the suitability in terms of performance and geometric accuracy. | In many fields, level-set methods are often employed to solve given problems in, e.g., fluid dynamics @cite_2 @cite_1 @cite_9 , computer graphics @cite_26 @cite_15 or 3D reconstruction @cite_13 @cite_20 @cite_24 , where the physical properties of the model act upon the level-set function via PDEs. We refer the reader to a survey on 3D distance fields as a special variant of level-set functions @cite_12 . In contrast to explicit representations, which can entail topological difficulties and render mathematical operations harder to implement, level-sets can implicitly represent arbitrary shapes and are therefore often preferred. Nonetheless, optimization over volumetric data is always costly, and related work has tackled this in the following ways: | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_1",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2146105286",
"2118972993",
"",
"1987648924",
"",
"2099342750",
"2009422376",
"",
"2055686029"
],
"abstract": [
"In this paper, we propose the use of the level-set method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. A scaling window is used to adapt the level-set method to local deformations and to allow the user to control the intensity of the tool. Level-set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.",
"Abstract Since the seminal work of [Sussman, M, Smereka P, Osher S. A level set approach for computing solutions to incompressible two-phase flow. J Comput Phys 1994;114:146–59] on coupling the level set method of [Osher S, Sethian J. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J Comput Phys 1988;79:12–49] to the equations for two-phase incompressible flow, there has been a great deal of interest in this area. That work demonstrated the most powerful aspects of the level set method, i.e. automatic handling of topological changes such as merging and pinching, as well as robust geometric information such as normals and curvature. Interestingly, this work also demonstrated the largest weakness of the level set method, i.e. mass or information loss characteristic of most Eulerian capturing techniques. In fact, [Sussman M, Smereka P, Osher S. A level set approach for computing solutions to incompressible two-phase flow. J Comput Phys 1994;114:146–59] introduced a partial differential equation for battling this weakness, without which their work would not have been possible. In this paper, we discuss both historical and most recent works focused on improving the computational accuracy of the level set method focusing in part on applications related to incompressible flow due to both of its popularity and stringent accuracy requirements. Thus, we discuss higher order accurate numerical methods such as Hamilton–Jacobi WENO [Jiang G-S, Peng D. Weighted ENO schemes for Hamilton–Jacobi equations. SIAM J Sci Comput 2000;21:2126–43], methods for maintaining a signed distance function, hybrid methods such as the particle level set method [Enright D, Fedkiw R, Ferziger J, Mitchell I. A hybrid particle level set method for improved interface capturing. J Comput Phys 2002;183:83–116] and the coupled level set volume of fluid method [Sussman M, Puckett EG. 
A coupled level set and volume-of-fluid method for computing 3d and axisymmetric incompressible two-phase flows. J Comput Phys 2000;162:301–37], and adaptive gridding techniques such as the octree approach to free surface flows proposed in [Losasso F, Gibou F, Fedkiw R. Simulating water and smoke with an octree data structure, ACM Trans Graph (SIGGRAPH Proc) 2004;23:457–62].",
"",
"We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.",
"",
"This article introduces the Hierarchical Run-Length Encoded (H-RLE) Level Set data structure. This novel data structure combines the best features of the DT-Grid (of Nielsen and Museth [2004]) and the RLE Sparse Level Set (of [2004]) to provide both optimal efficiency and extreme versatility. In brief, the H-RLE level set employs an RLE in a dimensionally recursive fashion. The RLE scheme allows the compact storage of sequential nonnarrowband regions while the dimensionally recursive encoding along each axis efficiently compacts nonnarrowband planes and volumes. Consequently, this new structure can store and process level sets with effective voxel resolutions exceeding 5000 × 3000 × 3000 (45 billion voxels) on commodity PCs with only 1 GB of memory. This article, besides introducing the H-RLE level set data structure and its efficient core algorithms, also describes numerous applications that have benefited from our use of this structure: our unified implicit object representation, efficient and robust mesh to level set conversion, rapid ray tracing, level set metamorphosis, collision detection, and fully sparse fluid simulation (including RLE vector and matrix representations.) Our comparisons of the popular octree level set and Peng level set structures to the H-RLE level set indicate that the latter is superior in both narrowband sequential access speed and overall memory usage.",
"A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.",
"",
"We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds."
]
} |
1608.07400 | 2517672267 | We show that collaborative filtering can be viewed as a sequence prediction problem, and that given this interpretation, recurrent neural networks offer a very competitive approach. In particular we study how the long short-term memory (LSTM) can be applied to collaborative filtering, and how it compares to standard nearest neighbors and matrix factorization methods on movie recommendation. We show that the LSTM is competitive in all aspects, and largely outperforms other methods in terms of item coverage and short term predictions. | Some earlier works have framed collaborative filtering as a sequence prediction problem and used simpler Markov chain methods to solve it. In the early 2000s, @cite_10 used a simple Markov model and tested it for web-page recommendation. @cite_8 adopted a similar approach, using sequential pattern mining. Both showed the superiority of sequence-based methods over nearest-neighbors approaches. In @cite_15 @cite_14 , the authors defended the view of recommender systems as a Markov decision process, and although the predictive model was not their main focus, they did present in @cite_15 a Markov chain approach, improved by some heuristics such as skipping and clustering. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_10",
"@cite_8"
],
"mid": [
"2953132212",
"",
"2157973827",
"2117111450"
],
"abstract": [
"Typical Recommender systems adopt a static view of the recommendation process and treat it as a prediction problem. We argue that it is more appropriate to view the problem of generating recommendations as a sequential decision problem and, consequently, that Markov decision processes (MDP) provide a more appropriate model for Recommender systems. MDPs introduce two benefits: they take into account the long-term effects of each recommendation, and they take into account the expected value of each recommendation. To succeed in practice, an MDP-based Recommender system must employ a strong initial model; and the bulk of this paper is concerned with the generation of such a model. In particular, we suggest the use of an n-gram predictive model for generating the initial MDP. Our n-gram model induces a Markov-chain model of user behavior whose predictive accuracy is greater than that of existing predictive models. We describe our predictive model in detail and evaluate its performance on real data. In addition, we show how the model can be used in an MDP-based Recommender system.",
"",
"We treat collaborative filtering as a univariate time series problem: given a user's previous votes, predict the next vote. We describe two families of methods for transforming data to encode time order in ways amenable to off-the-shelf classification and density estimation tools. Using a decision-tree learning tool and two real-world data sets, we compare the results of these approaches to the results of collaborative filtering without ordering information. The improvements in both predictive accuracy and in recommendation quality that we realize advocate the use of predictive algorithms exploiting the temporal order of data.",
"We describe an efficient framework for Web personalization based on sequential and non-sequential pattern discovery from usage data. Our experimental results performed on real usage data indicate that more restrictive patterns, such as contiguous sequential patterns (e.g., frequent navigational paths) are more suitable for predictive tasks, such as Web prefetching, (which involve predicting which item is accessed next by a user), while less constrained patterns, such as frequent item sets or general sequential patterns are more effective alternatives in the context of Web personalization and recommender systems."
]
} |
1608.07400 | 2517672267 | We show that collaborative filtering can be viewed as a sequence prediction problem, and that given this interpretation, recurrent neural networks offer a very competitive approach. In particular we study how the long short-term memory (LSTM) can be applied to collaborative filtering, and how it compares to standard nearest neighbors and matrix factorization methods on movie recommendation. We show that the LSTM is competitive in all aspects, and largely outperforms other methods in terms of item coverage and short term predictions. | More recently, @cite_9 introduced a notable approach to building personalized Markov chains, exploiting matrix factorization to combat the sparsity problem. Their method is mainly designed for the next-basket recommendation problem, but it would be of great interest to adapt it for a more general recommendation problem. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2171279286"
],
"abstract": [
"Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization."
]
} |
1608.07242 | 2513005088 | We present an online visual tracking algorithm by managing multiple target appearance models in a tree structure. The proposed algorithm employs Convolutional Neural Networks (CNNs) to represent target appearances, where multiple CNNs collaborate to estimate target states and determine the desirable paths for online model updates in the tree. By maintaining multiple CNNs in diverse branches of tree structure, it is convenient to deal with multi-modality in target appearances and preserve model reliability through smooth updates along tree paths. Since multiple CNNs share all parameters in convolutional layers, it takes advantage of multiple models with little extra cost by saving memory space and avoiding redundant network evaluations. The final target state is estimated by sampling target candidates around the state in the previous frame and identifying the best sample in terms of a weighted average score from a set of active CNNs. Our algorithm illustrates outstanding performance compared to the state-of-the-art techniques in challenging datasets such as online tracking benchmark and visual object tracking challenge. | Tracking-by-detection approaches formulate visual tracking as a discriminative object classification problem in a sequence of video frames. The techniques in this category typically learn classifiers to differentiate targets from surrounding backgrounds; various algorithms have achieved improved performance by coping with dynamic appearance changes and constructing robust target models. For example, @cite_9 modified a famous object detection algorithm, AdaBoost, and presented an online learning method for tracking. A multiple instance learning technique has been introduced in @cite_35 to update the classifier online, where a bag of image patches is employed as a training example instead of a single patch to alleviate labeling noise. With a similar motivation, an approach based on structured SVMs has been proposed in @cite_38 . TLD @cite_36 proposed a semi-supervised learning technique with structural constraints. All of these techniques are successful in learning reasonable target representations by adopting online discriminative learning procedures, but still rely on simple shallow features; we believe that tracking performance may be improved further by using deep features. | {
"cite_N": [
"@cite_36",
"@cite_35",
"@cite_9",
"@cite_38"
],
"mid": [
"",
"2109579504",
"2000326692",
"2098941887"
],
"abstract": [
"",
"In this paper, we address the problem of tracking an object in a video given its location in the first frame and no other information. Recently, a class of tracking techniques called “tracking by detection” has been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks. We propose a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. We present thorough experimental results (both qualitative and quantitative) on a number of challenging video clips.",
"",
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance."
]
} |
1608.07242 | 2513005088 | We present an online visual tracking algorithm by managing multiple target appearance models in a tree structure. The proposed algorithm employs Convolutional Neural Networks (CNNs) to represent target appearances, where multiple CNNs collaborate to estimate target states and determine the desirable paths for online model updates in the tree. By maintaining multiple CNNs in diverse branches of tree structure, it is convenient to deal with multi-modality in target appearances and preserve model reliability through smooth updates along tree paths. Since multiple CNNs share all parameters in convolutional layers, it takes advantage of multiple models with little extra cost by saving memory space and avoiding redundant network evaluations. The final target state is estimated by sampling target candidates around the state in the previous frame and identifying the best sample in terms of a weighted average score from a set of active CNNs. Our algorithm illustrates outstanding performance compared to the state-of-the-art techniques in challenging datasets such as online tracking benchmark and visual object tracking challenge. | Although the representations learned by deep neural networks have turned out to be effective in various visual recognition problems, tracking algorithms based on hand-crafted features @cite_23 @cite_30 often outperform CNN-based approaches. This is partly because CNNs are difficult to train online with noisily labeled data and easily overfit to a small number of training examples; it is therefore not straightforward to apply CNNs to visual tracking problems involving online learning. For example, the performance of @cite_31 , which is based on a shallow custom neural network, is not as successful as recent tracking algorithms based on shallow feature learning. However, CNN-based tracking algorithms started to present competitive accuracy in the online tracking benchmark @cite_5 by transferring the CNNs pretrained on ImageNet @cite_10 . In particular, simple approaches based on fully convolutional networks or hierarchical representations show substantially improved results @cite_4 @cite_14 . In addition, the combination of a pretrained CNN and an online SVM achieves competitive results @cite_16 . However, these deep-learning-based methods are still not very impressive compared to tracking techniques based on hand-crafted features @cite_23 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_23",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_10"
],
"mid": [
"182940129",
"2211629196",
"2214352687",
"1915599933",
"2089961441",
"2069332137",
"2951157758",
""
],
"abstract": [
"We propose a multi-expert restoration scheme to address the model drift problem in online tracking. In the proposed scheme, a tracker and its historical snapshots constitute an expert ensemble, where the best expert is selected to restore the current tracker when needed based on a minimum entropy criterion, so as to correct undesirable model updates. The base tracker in our formulation exploits an online SVM on a budget algorithm and an explicit feature mapping method for efficient model update and inference. In experiments, our tracking method achieves substantially better overall performance than 32 trackers on a benchmark dataset of 50 video sequences under various evaluation settings. In addition, in experiments with a newly collected dataset of challenging sequences, we show that the proposed multi-expert restoration scheme significantly improves the robustness of our base tracker, especially in scenarios with frequent occlusions and repetitive appearance variations.",
"We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.",
"Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a largescale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods.",
"Variations in the appearance of a tracked object, such as changes in geometry, photometry, camera viewpoint, illumination, or partial occlusion, pose a major challenge to object tracking. Here, we adopt cognitive psychology principles to design a flexible representation that can adapt to changes in object appearance during tracking. Inspired by the well-known Atkinson-Shiffrin Memory Model, we propose MUlti-Store Tracker (MUSTer), a dual-component approach consisting of short- and long-term memory stores to process target appearance memories. A powerful and efficient Integrated Correlation Filter (ICF) is employed in the short-term store for short-term tracking. The integrated long-term component, which is based on keypoint matching-tracking and RANSAC estimation, can interact with the long-term memory and provide additional information for output control. MUSTer was extensively evaluated on the CVPR2013 Online Object Tracking Benchmark (OOTB) and ALOV++ datasets. The experimental results demonstrated the superior performance of MUSTer in comparison with other state-of-the-art trackers.",
"Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"Defining hand-crafted feature representations needs expert knowledge, requires time-consuming manual adjustments, and besides, it is arguably one of the limiting factors of object tracking. In this paper, we propose a novel solution to automatically relearn the most useful feature representations during the tracking process in order to accurately adapt to appearance changes, pose and scale variations while preventing drift and tracking failures. We employ a candidate pool of multiple Convolutional Neural Networks (CNNs) as a data-driven model of different instances of the target object. Individually, each CNN maintains a specific set of kernels that favourably discriminate object patches from their surrounding background using all available low-level cues. These kernels are updated in an online manner at each frame after being trained with just one instance at the initialization of the corresponding CNN. Given a frame, the most promising CNNs in the pool are selected to evaluate the hypotheses for the target object. The hypothesis with the highest score is assigned as the current detection window and the selected models are retrained using a warm-start back-propagation which optimizes a structural loss function. In addition to the model-free tracker, we introduce a class-specific version of the proposed method that is tailored for tracking of a particular object class such as human faces. Our experiments on a large selection of videos from the recent benchmarks demonstrate that our method outperforms the existing state-of-the-art algorithms and rarely loses the track of the target object.",
"We propose an online visual tracking algorithm by learning a discriminative saliency map using a Convolutional Neural Network (CNN). Given a CNN pre-trained offline on a large-scale image repository, our algorithm takes outputs from hidden layers of the network as feature descriptors since they show excellent representation performance in various general visual recognition problems. The features are used to learn discriminative target appearance models using an online Support Vector Machine (SVM). In addition, we construct a target-specific saliency map by backpropagating CNN features with guidance of the SVM, and obtain the final tracking result in each frame based on the appearance model generatively constructed with the saliency map. Since the saliency map visualizes the spatial configuration of the target effectively, it improves target localization accuracy and enables us to achieve pixel-level target segmentation. We verify the effectiveness of our tracking algorithm through extensive experiments on a challenging benchmark, where our method illustrates outstanding performance compared to the state-of-the-art tracking algorithms.",
""
]
} |
1608.07242 | 2513005088 | We present an online visual tracking algorithm by managing multiple target appearance models in a tree structure. The proposed algorithm employs Convolutional Neural Networks (CNNs) to represent target appearances, where multiple CNNs collaborate to estimate target states and determine the desirable paths for online model updates in the tree. By maintaining multiple CNNs in diverse branches of the tree structure, it is convenient to deal with multi-modality in target appearances and preserve model reliability through smooth updates along tree paths. Since multiple CNNs share all parameters in convolutional layers, it takes advantage of multiple models with little extra cost by saving memory space and avoiding redundant network evaluations. The final target state is estimated by sampling target candidates around the state in the previous frame and identifying the best sample in terms of a weighted average score from a set of active CNNs. Our algorithm illustrates outstanding performance compared to the state-of-the-art techniques on challenging datasets such as the online tracking benchmark and the visual object tracking challenge. | Multiple models are often employed in generative tracking algorithms to handle target appearance variations and recover from tracking failures. Trackers based on sparse representation @cite_28 @cite_40 maintain multiple target templates to compute the likelihood of each sample by minimizing its reconstruction error, while @cite_6 integrates multiple observation models via an MCMC framework. Nam et al. @cite_3 integrate patch-matching results from multiple frames and estimate the posterior of the target state. On the other hand, ensemble classifiers have sometimes been applied to the visual tracking problem. Tang et al. @cite_32 proposed a co-tracking framework based on two support vector machines. An ensemble of weak classifiers is employed to estimate target states in @cite_34 @cite_39 . 
Zhang et al. @cite_30 presented a framework based on multiple snapshots of SVM-based trackers to recover from tracking failures. | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_32",
"@cite_3",
"@cite_6",
"@cite_39",
"@cite_40",
"@cite_34"
],
"mid": [
"182940129",
"2183648259",
"2163532725",
"",
"2098854771",
"2112314277",
"2158917775",
""
],
"abstract": [
"We propose a multi-expert restoration scheme to address the model drift problem in online tracking. In the proposed scheme, a tracker and its historical snapshots constitute an expert ensemble, where the best expert is selected to restore the current tracker when needed based on a minimum entropy criterion, so as to correct undesirable model updates. The base tracker in our formulation exploits an online SVM on a budget algorithm and an explicit feature mapping method for efficient model update and inference. In experiments, our tracking method achieves substantially better overall performance than 32 trackers on a benchmark dataset of 50 video sequences under various evaluation settings. In addition, in experiments with a newly collected dataset of challenging sequences, we show that the proposed multi-expert restoration scheme significantly improves the robustness of our base tracker, especially in scenarios with frequent occlusions and repetitive appearance variations.",
"In this paper we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, corruption and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target at a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an ℓ1-regularized least squares problem. Then the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework in which a particle filter is used for propagating sample distributions over time. Two additional components further improve the robustness of our approach: 1) the nonnegativity constraints that help filter out clutter that is similar to tracked targets in reversed intensity patterns, and 2) a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure. We test the proposed approach on five challenging sequences involving heavy occlusions, drastic illumination changes, and large pose variations. The proposed approach shows excellent performance in comparison with three previously proposed trackers.",
"This paper treats tracking as a foreground background classification problem and proposes an online semi- supervised learning framework. Initialized with a small number of labeled samples, semi-supervised learning treats each new sample as unlabeled data. Classification of new data and updating of the classifier are achieved simultaneously in a co-training framework. The object is represented using independent features and an online support vector machine (SVM) is built for each feature. The predictions from different features are fused by combining the confidence map from each classifier using a classifier weighting method which creates a final classifier that performs better than any classifier based on a single feature. The semi-supervised learning approach then uses the output of the combined confidence map to generate new samples and update the SVMs online. With this approach, the tracker gains increasing knowledge of the object and background and continually improves itself over time. Compared to other discriminative trackers, the online semi-supervised learning approach improves each individual classifier using the information from other features, thus leading to a more robust tracker. Experiments show that this framework performs better than state-of-the-art tracking algorithms on challenging sequences.",
"",
"We propose a novel tracking algorithm that can work robustly in a challenging scenario such that several kinds of appearance and motion changes of an object occur at the same time. Our algorithm is based on a visual tracking decomposition scheme for the efficient design of observation and motion models as well as trackers. In our scheme, the observation model is decomposed into multiple basic observation models that are constructed by sparse principal component analysis (SPCA) of a set of feature templates. Each basic observation model covers a specific appearance of the object. The motion model is also represented by the combination of multiple basic motion models, each of which covers a different type of motion. Then the multiple basic trackers are designed by associating the basic observation models and the basic motion models, so that each specific tracker takes charge of a certain change in the object. All basic trackers are then integrated into one compound tracker through an interactive Markov Chain Monte Carlo (IMCMC) framework in which the basic trackers communicate with one another interactively while run in parallel. By exchanging information with others, each tracker further improves its performance, which results in increasing the whole performance of tracking. Experimental results show that our method tracks the object accurately and reliably in realistic videos where the appearance and motion are drastically changing over time.",
"We propose a randomized ensemble algorithm to model the time-varying appearance of an object for visual tracking. In contrast with previous online methods for updating classifier ensembles in tracking-by-detection, the weight vector that combines weak classifiers is treated as a random variable and the posterior distribution for the weight vector is estimated in a Bayesian manner. In essence, the weight vector is treated as a distribution that reflects the confidence among the weak classifiers used to construct and adapt the classifier ensemble. The resulting formulation models the time-varying discriminative ability among weak classifiers so that the ensembled strong classifier can adapt to the varying appearance, backgrounds, and occlusions. The formulation is tested in a tracking-by-detection implementation. Experiments on 28 challenging benchmark videos demonstrate that the proposed method can achieve results comparable to and often better than those of state-of-the-art approaches.",
"In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.",
""
]
} |
1608.07443 | 2949895960 | The internet of things (IoT) has gained worldwide attention in recent years. It transforms the everyday objects that surround us into proactive actors of the Internet, generating and consuming information. An important issue related to the appearance of such large-scale self-coordinating IoT is the reliability and the collaboration between the objects in the presence of environmental hazards. High failure rates lead to significant loss of data. Therefore, data survivability is a main challenge of the IoT. In this paper, we have developed a compartmental e-Epidemic SIR (Susceptible-Infectious-Recovered) model to save the data in the network and let it survive after attacks. Furthermore, our model takes into account the dynamic topology of the network where natural death (crashing nodes) and birth are defined and analyzed. Theoretical methods and simulations are employed to solve and simulate the system of equations developed and to analyze the model. | In the literature, we can find several mathematical models which illustrate the dynamical behavior of the transmission of biological diseases and/or computer viruses. Based on the classical Kermack and McKendrick SIR epidemic model @cite_10 @cite_22, dynamical models for malicious object propagation were proposed. Due to the numerous similarities between biological viruses and computer viruses, several approaches and models have been proposed to study the spreading and attacking behavior of computer viruses in different phenomena, e.g. virus propagation @cite_11 @cite_27 @cite_13, e-mail propagation schemes @cite_23, virus immunization @cite_24 @cite_7, quarantine @cite_29 @cite_15, vaccination @cite_1, etc. The authors in @cite_3 propose an improved SEI (susceptible-exposed-infected) model to simulate virus propagation. 
The authors of @cite_14 propose an SEIS-V epidemic model with vertical transmission using vaccination (that is, running anti-virus software time and again with full efficiency) so that a temporary recovery from the infection of worms can be obtained. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_10",
"@cite_29",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"",
"2080655821",
"2148301044",
"2049222678",
"",
"2535671654",
"2040923486",
"",
"",
"",
"1513290194",
""
],
"abstract": [
"",
"",
"",
"(1) One of the most striking features in the study of epidemics is the difficulty of finding a causal factor which appears to be adequate to account for the magnitude of the frequent epidemics of disease which visit almost every population. It was with a view to obtaining more insight regarding the effects of the various factors which govern the spread of contagious epidemics that the present investigation was undertaken. Reference may here be made to the work of Ross and Hudson (1915-17) in which the same problem is attacked. The problem is here carried to a further stage, and it is considered from a point of view which is in one sense more general. The problem may be summarised as follows: One (or more) infected person is introduced into a community of individuals, more or less susceptible to the disease in question. The disease spreads from the affected to the unaffected by contact infection. Each infected person runs through the course of his sickness, and finally is removed from the number of those who are sick, by recovery or by death. The chances of recovery or death vary from day to day during the course of his illness. The chances that the affected may convey infection to the unaffected are likewise dependent upon the stage of the sickness. As the epidemic spreads, the number of unaffected members of the community becomes reduced. Since the course of an epidemic is short compared with the life of an individual, the population may be considered as remaining constant, except in as far as it is modified by deaths due to the epidemic disease itself. In the course of time the epidemic may come to an end. One of the most important problems in epidemiology is to ascertain whether this termination occurs only when no susceptible individuals are left, or whether the interplay of the various factors of infectivity, recovery and mortality, may result in termination, whilst many susceptible individuals are still present in the unaffected population. 
It is difficult to treat this problem in its most general aspect. In the present communication discussion will be limited to the case in which all members of the community are initially equally susceptible to the disease, and it will be further assumed that complete immunity is conferred by a single infection.",
"Abstract A Susceptible (S) – exposed (E) – infectious (I) – quarantined (Q) – recovered (R) model for the transmission of malicious objects in a computer network is formulated. Thresholds, equilibria, and their stability are also found with cyber mass action incidence. The threshold R_cq determines the outcome of the disease. If R_cq ⩽ 1, the infected fraction of the nodes disappears and the disease dies out, while if R_cq > 1, the infected fraction persists and the feasible region is an asymptotic stability region for the endemic equilibrium state. Numerical methods are employed to solve and simulate the system of equations developed. The effect of quarantine on recovered nodes is analyzed. We have also analyzed the behavior of the susceptible, exposed, infected, quarantined, and recovered nodes in the computer network.",
"",
"The popularity of peer-to-peer (P2P) networks makes them an attractive target to the creators of viruses and other malicious code. Indeed, recently a number of viruses designed specifically to spread via P2P networks have emerged. In this paper we present a model which predicts how a P2P-based virus propagates through a network. This model is a modified version of the S-E-I (susceptible-exposed-infected) model from the field of epidemiology. Our model classifies each peer as falling into one of three categories based on the number of infected files it is sharing. We derive differential equations which comprise the deterministic model and examine the expected behaviour of the P2P network as predicted by these equations",
"An e-epidemic SEIRS model for the transmission of worms in computer network through vertical transmission is formulated. It has been observed that if the basic reproduction number is less than or equal to one, the infected part of the nodes disappear and the worm dies out, but if the basic reproduction number is greater than one, the infected nodes exists and the worms persist at an endemic equilibrium state. Numerical methods are employed to solve and simulate the system of equations developed. We have analyzed the behavior of the susceptible, exposed, infected and recovered nodes in the computer network with real parametric values.",
"",
"",
"",
"A wide variety of practical problems related to the interaction of agents can be examined using biological metaphors. This paper applies the theory of G-networks to agent systems by considering a biological metaphor based on three types of entities: normal cells C, cancerous or bad cells B, and immune defense agents A which are used to destroy the bad cells B, but which sometimes have the effect of being able to destroy the good cells C as well (autoimmune response). Cells of type C can mutate into cells of Type B, and vice-versa. In the presence of probabilities of correct detection and false alarm on the part of agents of Type A, we examine how the dose of agent A will influence the desired outcome which is that most bad cells B are destroyed while the damage to cells C is limited to an acceptable level. In a second part of the paper we illustrate how a similar model can be used to represent a mixture of agents with the ability to cooperate as well as to compete.",
""
]
} |
1608.06757 | 2512822132 | Named entity recognition often fails in idiosyncratic domains. That causes problems for dependent tasks, such as entity linking and relation extraction. We propose a generic and robust approach for high-recall named entity recognition. Our approach is easy to train and offers strong generalization over diverse domain-specific language, such as news documents (e.g. Reuters) or biomedical text (e.g. Medline). Our approach is based on deep contextual sequence learning and utilizes stacked bidirectional LSTM networks. Our model is trained with only a few hundred labeled sentences and does not rely on further external knowledge. We report F1 scores in the range of 84-94 on standard datasets. | The task of NER has been extensively studied with various evaluations over the last decades: MUC-6, MUC-7, CoNLL2002, CoNLL2003 and ACE. The standard approach to NER is the application of discriminative tagging @cite_15 to the task of NER @cite_7, often with linear chain Conditional Random Fields (CRF), Hidden Markov Models (HMM) or Maximum Entropy Markov Models (MEMM). Later work used continuous-space language models, in which type-to-vector word mappings can be learned using backpropagation. A more effective vector representation was achieved with the skip-gram model. The model optimizes the likelihood of tokens over a window surrounding a given token. This training process produces a linear classifier that predicts words conditioned on the central token's vector representation. | {
"cite_N": [
"@cite_15",
"@cite_7"
],
"mid": [
"2008652694",
"2141099517"
],
"abstract": [
"We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.",
"Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; , 1998)."
]
} |
1608.06757 | 2512822132 | Named entity recognition often fails in idiosyncratic domains. That causes problems for dependent tasks, such as entity linking and relation extraction. We propose a generic and robust approach for high-recall named entity recognition. Our approach is easy to train and offers strong generalization over diverse domain-specific language, such as news documents (e.g. Reuters) or biomedical text (e.g. Medline). Our approach is based on deep contextual sequence learning and utilizes stacked bidirectional LSTM networks. Our model is trained with only a few hundred labeled sentences and does not rely on further external knowledge. We report F1 scores in the range of 84-94 on standard datasets. | Named entity linking is the task of matching textual mentions of named entities to a knowledge base @cite_9. This task requires a set of candidate mentions from sentences. As a result, the recall of the underlying NER system constitutes an upper bound for entity linking accuracy @cite_16. Moreover, prior evaluations show that state-of-the-art systems are substantially limited by low recall and do not perform well, especially on idiosyncratic data, while other studies highlight that terms with high novelty or high specificity cannot be linked efficiently by current systems. | {
"cite_N": [
"@cite_9",
"@cite_16"
],
"mid": [
"1964189668",
"2135451108"
],
"abstract": [
"The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"Named Entity Linking (NEL) grounds entity mentions to their corresponding node in a Knowledge Base (KB). Recently, a number of systems have been proposed for linking entity mentions in text to Wikipedia pages. Such systems typically search for candidate entities and then disambiguate them, returning either the best candidate or NIL. However, comparison has focused on disambiguation accuracy, making it difficult to determine how search impacts performance. Furthermore, important approaches from the literature have not been systematically compared on standard data sets. We reimplement three seminal NEL systems and present a detailed evaluation of search strategies. Our experiments find that coreference and acronym handling lead to substantial improvement, and search strategies account for much of the variation between systems. This is an interesting finding, because these aspects of the problem have often been neglected in the literature, which has focused largely on complex candidate ranking algorithms."
]
} |
1608.06757 | 2512822132 | Named entity recognition often fails in idiosyncratic domains. That causes problems for dependent tasks, such as entity linking and relation extraction. We propose a generic and robust approach for high-recall named entity recognition. Our approach is easy to train and offers strong generalization over diverse domain-specific language, such as news documents (e.g. Reuters) or biomedical text (e.g. Medline). Our approach is based on deep contextual sequence learning and utilizes stacked bidirectional LSTM networks. Our model is trained with only a few hundred labeled sentences and does not rely on further external knowledge. We report F1 scores in the range of 84-94 on standard datasets. | We distinguish between three broad categories for generating candidate entities: Babelfy @cite_11, Entityclassifier.eu @cite_1, DBpedia Spotlight @cite_5 and TagMe2 @cite_8 spot noun chunks and filter them with dictionaries, often derived from Wikipedia. Stanford NER @cite_10 and LingPipe (http://alias-i.com/lingpipe) utilize discriminative tagging approaches. FOX @cite_21 and NERD-ML @cite_14 combine several approaches in an ensemble learner for enhancing precision. The GENIA tagger (http://www.nactem.ac.uk/tsujii/GENIA/tagger) is specifically tuned for biomedical text. It is trained on the GENIA-based BioNLP/NLPBA 2004 data set @cite_17, which includes named entity recognition for biomedical text. One biomedical NER system is built using an HMM and an additional SVM with a sigmoid; it uses lexical-level features, e.g. word formation and morphological patterns, and utilizes dictionaries. Another system uses a MEMM, and others use CRF classifiers with syntactic features and synset dictionaries. Basically, all these systems benefit from our work. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"1772044609",
"2123142779",
"2036956884",
"1623072288",
"2047782770",
"2104583100",
"2123442489",
"1614298861"
],
"abstract": [
"Unlike traditional recurrent neural networks, the Long Short-Term Memory (LSTM) model generalizes well when presented with training sequences derived from regular and also simple nonregular languages. Our novel combination of LSTM and the decoupled extended Kalman filter, however, learns even faster and generalizes even better, requiring only the 10 shortest exemplars (n ≤ 10) of the context-sensitive language a^n b^n c^n to deal correctly with values of n up to 1000 and more. Even when we consider the relatively high update complexity per timestep, in many cases the hybrid offers faster learning than LSTM by itself.",
"We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.",
"Named Entity Recognition (NER) plays an important role in a variety of online information management tasks including text categorization, document clustering, and faceted search. While recent NER systems can achieve near-human performance on certain documents like news articles, they still remain highly domain-specific and thus cannot effectively identify entities such as original technical concepts in scientific documents. In this work, we propose novel approaches for NER on distinctive document collections (such as scientific articles) based on n-grams inspection and classification. We design and evaluate several entity recognition features---ranging from well-known part-of-speech tags to n-gram co-location statistics and decision trees---to classify candidates. In addition, we show how the use of external knowledge bases (either specific like DBLP or generic like DBPedia) can be leveraged to improve the effectiveness of NER for idiosyncratic collections. We evaluate our system on two test collections created from a set of Computer Science and Physics papers and compare it against state-of-the-art supervised methods. Experimental results show that a careful combination of the features we propose yields up to 85% NER accuracy over scientific collections and substantially outperforms state-of-the-art approaches such as those based on maximum entropy.",
"Transformation-based learning, a technique introduced by Eric Brill (1993b), has been shown to do part-of-speech tagging with fairly high accuracy. This same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text, including non-recursive “baseNP” chunks. For this purpose, it is convenient to view chunking as a tagging problem by encoding the chunk structure in new tags attached to each word. In automatic tests using Treebank-derived data, this technique achieved recall and precision rates of roughly 93% for baseNP chunks (trained on 950K words) and 88% for somewhat more complex chunks that partition the sentence (trained on 200K words). Working in this new application and with larger template and training sets has also required some interesting adaptations to the transformation-based learning approach.",
"We describe here the JNLPBA shared task of bio-entity recognition using an extended version of the GENIA version 3 named entity corpus of MEDLINE abstracts. We provide background information on the task and present a general discussion of the approaches taken by participating systems.",
"Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.",
"We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.",
""
]
} |
1608.06754 | 2512255109 | Recently, Dynamic Time Division Duplex (TDD) has been proposed to handle the asymmetry of traffic demand between DownLink (DL) and UpLink (UL) in Heterogeneous Networks (HetNets). However, for mixed traffic consisting of best effort traffic and soft Quality of Service (QoS) traffic, the resource allocation problem has not been adequately studied in Dynamic TDD HetNets. In this paper, we focus on such problem in a two-tier HetNet with co-channel deployment of one Macro cell Base Station (MBS) and multiple Small cell Base Stations (SBSs) in hotspots. Different from existing work, we introduce low power almost blank subframes to alleviate MBS-to-SBS interference which is inherent in TDD operation. To tackle the resource allocation problem, we propose a two-step strategy. First, from the view point of base stations, we propose a transmission protocol and perform time resource allocation by formulating and solving a network capacity maximization problem under DL/UL traffic demands. Second, from the view point of User Equipments (UEs), we formulate their resource allocation as a Network Utility Maximization (NUM) problem. An efficient iterative algorithm is proposed to solve the NUM problem. Simulations show the advantage of the proposed algorithm in terms of network throughput and UE QoS satisfaction level. | In @cite_14 @cite_28 @cite_21 @cite_24 , ABS based interference mitigation mechanisms have been introduced for dynamic TDD HetNets. In these mechanisms, MBSs blank some subframes as ABS to avoid severe interference to small cells. In @cite_14 , MBSs and SBSs configured synchronous DL and UL transmissions on non-ABS, and SBSs applied dynamic TDD on ABS. Similarly, in @cite_28 @cite_21 @cite_24 , MBSs and SBSs configured synchronous DL transmissions on non-ABS. However, SBSs applied dynamic TDD on not only ABS, but also subframes where MBSs configured UL transmissions. | {
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_21",
"@cite_24"
],
"mid": [
"2072460547",
"2007221528",
"1968081294",
"2314840291"
],
"abstract": [
"Future wireless communication systems feature heterogeneous networks (HetNets), with small cells underlying existing macrocells. In order to maximize the off-loading benefits of small cells and mitigate the interference from the macrocell tier to the small cell tier, cell range expansion (CRE) and almost blank subframes (ABSs) have been designed for small cells and macrocells, respectively. Besides, enhanced 4th generation (4G) networks are also envisaged to adopt dynamic time division duplexing (TDD) transmissions for small cells to adapt their communication service to the fast variation of downlink (DL) and uplink (UL) traffic demands. However, up to now, it is still unclear whether it is technically feasible to introduce dynamic TDD into HetNets. In this paper, we investigate this fundamental problem and propose a feasible scheme to enable small cell dynamic TDD transmissions in HetNets. Simulation results show that compared with the static TDD scheme with CRE and ABS operations, the proposed scheme can achieve superior performance gains in terms of DL and UL packet throughputs when the traffic load is low to medium, at the expense of introducing the DL-to-UL interference cancellation (IC) functionality in macrocell base stations (BSs) and/or small cell BSs.",
"Dynamic downlink and uplink resource adaptation in TDD (Time Domain Duplex) systems by flexibly changing the ratio of time slots for downlink and uplink transmissions is studied under Heterogonous Network (HetNet) scenarios. The time-scale of the resource adaptation and the interference management between downlink and uplink transmissions from different cells are important factors in determining the achievable level of resource efficiency, user experience and energy saving in TDD systems. In this paper, we apply the dynamic downlink and uplink resource adaptation in multi-cell macro-pico scenarios with co-channel interference. The performance is evaluated in system level for LTE-Advanced TDD configurations. Reconfiguration rates of 10 msec and 640 msec are considered as the time-scale of the dynamic resource adaptation. In addition, the time domain inter-cell interference coordination between the macro cells and the pico cells is applied for handling the high level of interference from macro cells to pico cells in the downlink multi-cell dynamic simulations.",
"Almost-blank subframe (ABSF) is a time-domain technique, proposed by the 3GPP to handle Inter-Cell Interference (ICI) in heterogeneous network environments (HetNet). We consider a HetNet environment comprised of a macro-cell and femto-cells distributed across the macro-cell area. We propose a novel approach, called ABSF offsetting, to reduce the blanking rate at the femto-cells while preserving the required optimal blanking rate at the macro-cell. We also study the problem of optimal resource partitioning and offset assignment in the ABSF mode. The proposed solution for the problem is based on multistage Nash bargaining. The performance of the optimal resource partitioning, and ABSF offsetting is evaluated through simulations. The results show that the throughput of the macro-cell is improved, while the degradation in the aggregate femto-cell throughput is reduced due to the reduction in the blanking rate due to offsetting. The simulation results also demonstrate the fairness of the ABSF offsetting with the fairness index approaching 1 among the macro-cell UEs at low loads.",
""
]
} |
1608.07068 | 2511171064 | A great video title describes the most salient event compactly and captures the viewer's attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset. | Early work on video captioning @cite_18 @cite_14 @cite_24 @cite_9 @cite_10 @cite_8 @cite_21 typically perform a two-stage procedure. In the first stage, classifiers are used to detect objects, actions, and scenes. In the second stage, a model combining visual confidences with a language model is used to estimate the most likely combination of subject, verb, object, and scene. Then, a sentence is generated according to a predefined template. These methods require a few manual engineered components such as the content to be classified and the template. 
Hence, the generated sentences are often not as diverse as sentences used in natural human description. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_10"
],
"mid": [
"2110933980",
"1995820507",
"2158785151",
"2152984213",
"1601567445",
"2142900973",
"2251353663"
],
"abstract": [
"Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task.",
"The problem of describing images through natural language has gained importance in the computer vision community. Solutions to image description have either focused on a top-down approach of generating language through combinations of object detections and language models or bottom-up propagation of keyword tags from training images to test images through probabilistic or nearest neighbor techniques. In contrast, describing videos with natural language is a less studied problem. In this paper, we combine ideas from the bottom-up and top-down approaches to image description and propose a method for video description that captures the most relevant contents of a video in a natural language description. We propose a hybrid system consisting of a low level multimodal latent topic model for initial keyword annotation, a middle level of concept detectors and a high level module to produce final lingual descriptions. We compare the results of our system to human descriptions in both short and long forms on two datasets, and demonstrate that final system output has greater agreement with the human descriptions than any single level.",
"We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the trackto-role assignments, and changing body posture.",
"We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with \"real-world\" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61 of the time.",
"We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments.",
"Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities \"in-the-wild\". We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, such as the actor, the action and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction for a pre-trained model, it finds a less specific answer that is also plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help to choose an appropriate level of generalization, and priors learned from Web-scale natural language corpora to penalize unlikely combinations of actors/actions/objects, we also use a Web-scale language model to \"fill in\" novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate it is able to generate short sentence descriptions of video clips better than baseline approaches.",
"This paper integrates techniques in natural language processing and computer vision to improve recognition and description of entities and activities in real-world videos. We propose a strategy for generating textual descriptions of videos by using a factor graph to combine visual detections with language statistics. We use state-of-the-art visual recognition systems to obtain confidences on entities, activities, and scenes present in the video. Our factor graph model combines these detection confidences with probabilistic knowledge mined from text corpora to estimate the most likely subject, verb, object, and place. Results on YouTube videos show that our approach improves both the joint detection of these latent, diverse sentence components and the detection of some individual components when compared to using the vision system alone, as well as over a previous n-gram language-modeling approach. The joint detection allows us to automatically generate more accurate, richer sentential descriptions of videos with a wide array of possible content."
]
} |
1608.07068 | 2511171064 | A great video title describes the most salient event compactly and captures the viewer's attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset. | Recently, image captioning methods @cite_33 @cite_12 @cite_47 @cite_43 @cite_28 @cite_39 begin to adopt the Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) approaches. They learn models directly from a large number of image and sentence pairs. The CNN replaces the predefined features to generate a powerful distributed visual representation. The RNN takes the CNN features as input and learns to decode it into a sentence. These are combined into a large network that can be jointly trained to directly map an image to a sentence. | {
"cite_N": [
"@cite_33",
"@cite_28",
"@cite_39",
"@cite_43",
"@cite_47",
"@cite_12"
],
"mid": [
"2951183276",
"2963109634",
"2963758027",
"2953158660",
"2950178297",
"2951912364"
],
"abstract": [
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/mjhucla/Google_Refexp_toolbox.",
"We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"In this paper, we address the task of learning novel visual concepts, and their interactions with other concepts, from a few images with sentence descriptions. Using linguistic context and visual features, our method is able to efficiently hypothesize the semantic meaning of new words and add them to its word dictionary so that they can be used to describe images which contain these novel concepts. Our method has an image captioning module based on m-RNN with several improvements. In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task. We propose methods to prevent overfitting the new concepts. In addition, three novel concept datasets are constructed for this new task. In the experiments, we show that our method effectively learns novel visual concepts from a few examples without disturbing the previously learned concepts. The project page is this http URL",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art."
]
} |
1608.07068 | 2511171064 | A great video title describes the most salient event compactly and captures the viewer's attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset. | Most early highlight detection works focus on broadcasting sport videos @cite_2 @cite_22 @cite_6 @cite_16 @cite_45 @cite_35 @cite_36 @cite_44 . Recently, a few methods have been proposed to detect highlights in generic personal videos. @cite_19 automatically harvest user preference to learn a model for identifying highlights in each domain. Instead of generating a video title, @cite_30 utilize video titles to summarize each video. The method requires additional images to be retrieved by title search for learning visual concepts. 
There are also a few fully unsupervised approaches. Zhao and Xing @cite_29 propose a quasi-real time method to generate short summaries. @cite_41 propose a recurrent auto-encoder to extract video highlights. Our video title generation method is one of the first to combine explicit highlight detection (not soft-attention) with sentence generation. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_22",
"@cite_36",
"@cite_41",
"@cite_29",
"@cite_6",
"@cite_44",
"@cite_19",
"@cite_45",
"@cite_2",
"@cite_16"
],
"mid": [
"1924343884",
"2105636511",
"2143800062",
"2123807687",
"2952694903",
"2032342062",
"1978530472",
"2155635849",
"",
"2115245686",
"123827434",
"2170095155"
],
"abstract": [
"Video summarization is a challenging problem in part because knowing which part of a video is important requires prior knowledge about its main topic. We present TVSum, an unsupervised video summarization framework that uses title-based image search results to find visually important shots. We observe that a video title is often carefully chosen to be maximally descriptive of its main topic, and hence images related to the title can serve as a proxy for important visual concepts of the main topic. However, because titles are free-formed, unconstrained, and often written ambiguously, images searched using the title can contain noise (images irrelevant to video content) and variance (images of different topics). To deal with this challenge, we developed a novel co-archetypal analysis technique that learns canonical visual concepts shared between video and images, but not in either alone, by finding a joint-factorial representation of two data sets. We introduce a new benchmark dataset, TVSum50, that contains 50 videos and their shot-level importance scores annotated via crowdsourcing. Experimental results on two datasets, SumMe and TVSum50, suggest our approach produces superior quality summaries compared to several recently proposed approaches.",
"In this paper, we present a novel approach towards customized and automated generation of sports highlights from its extracted events and semantic concepts. A recorded sports video is first divided into slots, based on the game progress and for each slot, an importance-based concept and event-selection is proposed to include those in the highlights. Using our approach, we have successfully extracted highlights from recorded video of cricket match.",
"In today's fast-paced world, while the number of channels of television programming available is increasing rapidly, the time available to watch them remains the same or is decreasing. Users desire the capability to watch the programs time-shifted (on-demand) and or to watch just the highlights to save time. In this paper we explore how to provide for the latter capability, that is the ability to extract highlights automatically, so that viewing time can be reduced. We focus on the sport of baseball as our initial target—it is a very popular sport, the whole game is quite long, and the exciting portions are few. We focus on detecting highlights using audio-track features alone without relying on expensive-to-compute video-track features. We use a combination of generic sports features and baseball-specific features to obtain our results, but believe that may other sports offer the same opportunity and that the techniques presented here will apply to those sports. We present details on relative performance of various learning algorithms, and a probabilistic framework for combining multiple sources of information. We present results comparing output of our algorithms against human-selected highlights for a diverse collection of baseball games with very encouraging results.",
"This paper addresses the challenge of automatically extracting the highlights from sports TV broadcasts. In particular, we are interested in finding a generic method of highlights extraction, which does not require the development of models for the events that are thought to be interpreted by the users as highlights. Instead, we search for highlights in those video segments that are expected to excite the users most. It is namely realistic to assume that a highlighting event induces a steady increase in a user's excitement, as compared to other, less interesting events. We mimic the expected variations in a user's excitement by observing the temporal behavior of selected audiovisual low-level features and the editing scheme of a video. Relations between this noncontent information and the evoked excitement are drawn partly from psychophysiological research and partly from analyzing the live-video directing practice. The expected variations in a user's excitement are represented by the excitement time curve, which is, subsequently, filtered in an adaptive way to extract the highlights in the prespecified total length and in view of the preferences regarding the highlights strength: extraction can namely be performed with variable sensitivity to capture few \"strong\" highlights or more \"less strong\" ones. We evaluate and discuss the performance of our method on the case study of soccer TV broadcasts.",
"With the growing popularity of short-form video sharing platforms such as Instagram and Vine , there has been an increasing need for techniques that automatically extract highlights from video. Whereas prior works have approached this problem with heuristic rules or supervised learning, we present an unsupervised learning approach that takes advantage of the abundance of user-edited videos on social media websites such as YouTube. Based on the idea that the most significant sub-events within a video class are commonly present among edited videos while less interesting ones appear less frequently, we identify the significant sub-events via a robust recurrent auto-encoder trained on a collection of user-edited videos queried for each particular class of interest. The auto-encoder is trained using a proposed shrinking exponential loss function that makes it robust to noise in the web-crawled training data, and is configured with bidirectional long short term memory (LSTM) LSTM:97 cells to better model the temporal structure of highlight segments. Different from supervised techniques, our method can infer highlights using only a set of downloaded edited videos, without also needing their pre-edited counterparts which are rarely available online. Extensive experiments indicate the promise of our proposed solution in this challenging unsupervised settin",
"With the widespread availability of video cameras, we are facing an ever-growing enormous collection of unedited and unstructured video data. Due to lack of an automatic way to generate summaries from this large collection of consumer videos, they can be tedious and time consuming to index or search. In this work, we propose online video highlighting, a principled way of generating short video summarizing the most important and interesting contents of an unedited and unstructured video, costly both time-wise and financially for manual processing. Specifically, our method learns a dictionary from given video using group sparse coding, and updates atoms in the dictionary on-the-fly. A summary video is then generated by combining segments that cannot be sparsely reconstructed using the learned dictionary. The online fashion of our proposed method enables it to process arbitrarily long videos and start generating summaries before seeing the end of the video. Moreover, the processing time required by our proposed method is close to the original video length, achieving quasi real-time summarization speed. Theoretical analysis, together with experimental results on more than 12 hours of surveillance and YouTube videos are provided, demonstrating the effectiveness of online video highlighting.",
"Advances in the media and entertainment industries, for example streaming audio and digital TV, present new challenges for managing large audio-visual collections. Efficient and effective retrieval from large content collections forms an important component of the business models for content holders and this is driving a need for research in audio-visual search and retrieval. Current content management systems support retrieval using low-level features, such as motion, colour, texture, beat and loudness. However, low-level features often have little meaning for the human users of these systems, who much prefer to identify content using high-level semantic descriptions or concepts. This creates a gap between the system and the user that must be bridged for these systems to be used effectively. The research presented in this paper describes our approach to bridging this gap in a specific content domain, sports video. Our approach is based on a number of automatic techniques for feature detection used in combination with heuristic rules determined through manual observations of sports footage. This has led to a set of models for interesting sporting events-goal segments-that have been implemented as part of an information retrieval system. The paper also presents results comparing output of the system against manually identified goals.",
"In this paper, we propose a novel approach for detecting highlights in sports videos. The videos are temporally decomposed into a series of events based on an unsupervised event discovery and detection framework. The framework solely depends on easy-to-extract low-level visual features such as color histogram (CH) or histogram of oriented gradients (HOG), which can potentially be generalized to different sports. The unigram and bigram statistics of the detected events are then used to provide a compact representation of the video. The effectiveness of the proposed representation is demonstrated on cricket video classification: Highlight vs. Non-Highlight for individual video clips (7000 training and 7000 test instances). We achieve a low equal error rate of 12.1 using event statistics based on CH and HOG features.",
"",
"We propose to use a visual object (e.g., the baseball catcher) detection algorithm to find local, semantic objects in video frames in addition to an audio classification algorithm to find semantic audio objects in the audio track for sports highlights extraction. The highlight candidates are then further grouped into finer-resolution highlight segments, using color or motion information. During the grouping phase, many of the false alarms can be correctly identified and eliminated. Our experimental results with baseball, soccer and golf video are promising.",
"",
"Sports video highlight detection is a popular topic. A multi-layer sport event detection framework is described. In the mid-level of this framework, visual and audio keywords are created from low-level features and the original video is converted into a keyword sequence. In the high-level, the temporal pattern of keyword sequences is analyzed by an HMM classifier. The creation of visual and audio keywords can help to bridge the gap between low-level features and high-level semantics. The use of the HMM classifier can automatically find the temporal change character of the event instead of rule based heuristic modeling to map certain keyword sequences into events. Experiments using our model on soccer games produced some promising results"
]
} |
1608.06864 | 2604972896 | A supercongruence is a congruence between rational numbers modulo a power of a prime. In this paper, we give a technique for finding and algorithmically proving supercongruences by expressing terms as infinite series involving certain generalizations of the harmonic numbers. We apply the technique to derive many new supercongruences. We also provide software for finding and proving supercongruences using our technique. | Several recent works involve computer algorithms for proving congruences. Rowland and Yassawi @cite_24 give an automatic method for proving congruences for diagonal coefficients of multi-variate rational power series. The results of @cite_24 are generalized by Rowland and Zeilberger in @cite_3 . A very recent paper of Chen, Hou, and Zeilberger @cite_16 gives an algorithm for proving congruences modulo @math for certain power series coefficients. | {
"cite_N": [
"@cite_24",
"@cite_16",
"@cite_3"
],
"mid": [
"",
"2007561580",
"2011123341"
],
"abstract": [
"",
"Abstract We give an explicit p -adic expansion of ∑ np j =1, ( j , p )=1 j − r as a power series in n . The coefficients are values of p -adic L -functions.",
"In this paper, which may be considered a sequel to a recent article by Eric Rowland and Reem Yassawi, we present yet another approach for the automatic generation of automata (and an extension that we call congruence linear schemes) for the fast (log-time) determination of congruence properties, modulo small (and not so small!) prime powers, for a wide class of combinatorial sequences. Even more interesting than the new results that could be obtained is the illustrated methodology, that of designing ‘meta-algorithms’ that enable the computer to develop algorithms, that it (or another computer) can then proceed to use to actually prove (potentially!) infinitely many new results. This paper is accompanied by a Maple package, AutoSquared, and numerous sample input and output files, that readers can use as templates for generating their own, thereby proving many new ‘theorems’ about congruence properties of many famous (and, of course, obscure) combinatorial sequences."
]
} |
1608.06864 | 2604972896 | A supercongruence is a congruence between rational numbers modulo a power of a prime. In this paper, we give a technique for finding and algorithmically proving supercongruences by expressing terms as infinite series involving certain generalizations of the harmonic numbers. We apply the technique to derive many new supercongruences. We also provide software for finding and proving supercongruences using our technique. | The present work involves infinite, @math -adically convergent series identities involving multiple harmonic sums, and in Section we look at related series involving truncated multiple polylogarithms. A recent work of Seki @cite_6 investigates a closely related @math -adic series identity for truncated multiple polylogarithms. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2398705731"
],
"abstract": [
"We prove the @math -adic duality theorem for the finite star-multiple polylogarithms. That is a generalization of Hoffman's duality theorem for the finite multiple zeta-star values."
]
} |
1608.06451 | 2507537916 | Most face applications depend heavily on the accuracy of the face and facial landmark detectors employed. Prediction of attributes such as gender, age, and identity usually completely fails when the faces are badly aligned due to inaccurate facial landmark detection. Despite the impressive recent advances in face and facial landmark detection, there has been little study of the recovery from and detection of failures or inaccurate predictions. In this work we study two top recent facial landmark detectors and devise confidence models for their outputs. We validate our failure detection approaches on standard benchmarks (AFLW, HELEN) and correctly identify more than 40% of the failures in the outputs of the landmark detectors. Moreover, with our failure detection we can achieve a 12% error reduction on a gender estimation application at the cost of a small increase in computation. | Usually, in the face detection and facial landmark literature the reduction of failures is the direct result of trading off the time complexity (running time) of the methods. More complex models and intensive computations might allow for more robust performance. However, the costs for such reductions within the original models can be prohibitive for practical applications. For example, one might use a set of specialized detectors (or components) deployed together instead of a generic detector, and while the performance potentially can be improved, the time and memory complexities vary with the cardinality of the set @cite_10 @cite_12 @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"2047508432",
"1849007038"
],
"abstract": [
"",
"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).",
"Face detection is a mature problem in computer vision. While diverse high performing face detectors have been proposed in the past, we present two surprising new top performance results. First, we show that a properly trained vanilla DPM reaches top performance, improving over commercial and research systems. Second, we show that a detector based on rigid templates - similar in structure to the Viola&Jones detector - can reach similar top performance on this task. Importantly, we discuss issues with existing evaluation benchmark and propose an improved procedure."
]
} |
1608.06272 | 2514867871 | In this paper, the NGDBF algorithm is implemented on a code that is deployed in the IEEE 802.3an Ethernet standard. The design employs a fully parallel architecture and operates in two phases: a start-up phase and a decoding phase. The two-phase operation keeps the high-latency operations off-line, thereby reducing the decoding latency during the decoding phase. The design is benchmarked against other state-of-the-art designs on the same code that employ different algorithms and architectures. The results indicate that the NGDBF decoder has a better area efficiency and a better energy efficiency compared to other state-of-the-art decoders. When the design is operated at medium to high signal-to-noise ratios, the design is able to provide greater than the required minimum throughput of 10 Gbps. The design consumes 0.81 mm2 of area and has an energy efficiency of 1.7 pJ/bit, which are the lowest in the reported literature. The design also provides better error performance compared to other simplified decoder implementations and requires less wire length compared to a recently proposed design. | The next design that was proposed is the fully parallel split-row MS algorithm @cite_8 . This design implements a low-complexity version of the Normalized MS algorithm that significantly reduces the routing complexity. The key idea behind the split-row MS algorithm is to partition the original parity check matrix into many sub-matrices, thereby splitting a row processing operation into multiple row processing operations. Check node computations for each sub-matrix are performed separately, using limited information from other columns. This reduces the routing congestion as it reduces the number of wires between the row and the column processors. However, when the original matrix is broken into 16 sub-matrices, there is a significant performance degradation of @math @math . 
This design consumes less area than the offset MS decoder and is significantly more energy efficient. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2109078033"
],
"abstract": [
"A low-complexity message-passing algorithm, called Split-Row Threshold, is used to implement low-density parity-check (LDPC) decoders with reduced layout routing congestion. Five LDPC decoders that are compatible with the 10GBASE-T standard are implemented using MinSum Normalized and MinSum Split-Row Threshold algorithms. All decoders are built using a standard cell design flow and include all steps through the generation of GDS II layout. An Spn = 16 decoder achieves improvements in area, throughput, and energy efficiency of 4.1 times, 3.3 times, and 4.8 times, respectively, compared to a MinSum Normalized implementation. Postlayout results show that a fully parallel Spn = 16 decoder in 65-nm CMOS operates at 195 MHz at 1.3 V with an average throughput of 92.8 Gbits s with early termination enabled. Low-power operation at 0.7 V gives a worst case throughput of 6.5 Gbits s-just above the 10GBASE-T requirement-and an estimated average power of 62 mW, resulting in 9.5 pj bit. At 0.7 V with early termination enabled, the throughput is 16.6 Gbits s, and the energy is 3.7 pJ bit, which is 5.8× lower than the previously reported lowest energy per bit. The decoder area is 4.84 mm2 with a final postlayout area utilization of 97 ."
]
} |
1608.06272 | 2514867871 | In this paper, the NGDBF algorithm is implemented on a code that is deployed in the IEEE 802.3an Ethernet standard. The design employs a fully parallel architecture and operates in two phases: a start-up phase and a decoding phase. The two-phase operation keeps the high-latency operations off-line, thereby reducing the decoding latency during the decoding phase. The design is benchmarked against other state-of-the-art designs on the same code that employ different algorithms and architectures. The results indicate that the NGDBF decoder has a better area efficiency and a better energy efficiency compared to other state-of-the-art decoders. When the design is operated at medium to high signal-to-noise ratios, the design is able to provide greater than the required minimum throughput of 10 Gbps. The design consumes 0.81 mm2 of area and has an energy efficiency of 1.7 pJ/bit, which are the lowest in the reported literature. The design also provides better error performance compared to other simplified decoder implementations and requires less wire length compared to a recently proposed design. | proposed a layered implementation of the offset MS algorithm @cite_10 . In this design, the original parity check matrix of the @math GBASE-T code is split into six layers in which each layer has @math rows and @math columns. This enables the check node operation to be time multiplexed and the check node processor to be shared across layers. Since only @math check node processors are active at a time, only @math check node processors are needed to accomplish successful decoding. Another advantage of the layered implementation is that it provides faster convergence. This design was fabricated in 90 nm CMOS and achieved a throughput close to the required specification. This design consumes more than a watt of power and is inferior to Zhang's offset MS decoder in terms of energy efficiency. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2170214997"
],
"abstract": [
"A partially parallel low density parity check (LDPC) decoder compliant with the IEEE 802.3an standard for 100BASE-T Ethernet is presented. The design is optimized for minimum silicon area and is based on the layered offset-min-sum algorithm which speeds up the convergence of the message passing decoding algorithm. To avoid routing congestion the decoder architecture employs a novel communication scheme that reduces the critical number of global wires by 50 . The prototype LDPC decoder ASIC, fabricated in 90 nm CMOS, occupies only 5.35 mm2 and achieves a decoding throughput of 11.69 Gb s at 1.2 V with an energy efficiency of 133pJ bit."
]
} |
1608.06495 | 2509983173 | In this paper, we address the problem of searching action proposals in unconstrained video clips. Our approach starts from actionness estimation on frame-level bounding boxes, and then aggregates the bounding boxes belonging to the same actor across frames via linking, associating, and tracking to generate spatio-temporally continuous action paths. To achieve this, a novel actionness estimation method is first proposed by utilizing both human appearance and motion cues. Then, the association of the action paths is formulated as a maximum set coverage problem with the results of actionness estimation as a priori information. To further improve the performance, we design an improved optimization objective for the problem and provide a greedy search algorithm to solve it. Finally, a tracking-by-detection scheme is designed to further refine the searched action paths. Extensive experiments on two challenging datasets, UCF-Sports and UCF-101, show that the proposed approach advances the state-of-the-art proposal generation performance in terms of both accuracy and proposal quantity. | Traditionally, action localization or detection is performed by sliding window based approaches @cite_9 @cite_29 @cite_4 @cite_14 . For instance, Siva et al. @cite_9 proposed a supervised model based on multiple-instance learning to slide over subvolumes both spatially and temporally for action detection. Instead of performing an exhaustive search by sliding over the whole video volumes, Oneata et al. @cite_30 put forward a branch-and-bound search approach to achieve time efficiency. The main limitation of these sliding-window based approaches is that the detection results are confined to a video subvolume, and thus cannot accurately capture the varying shape of the motion. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_9",
"@cite_29"
],
"mid": [
"1559046793",
"914561379",
"2084341401",
"2016208906",
"2129666410"
],
"abstract": [
"Spatio-temporal detection of actions and events in video is a challenging problem. Besides the difficulties related to recognition, a major challenge for detection in video is the size of the search space defined by spatio-temporal tubes formed by sequences of bounding boxes along the frames. Recently methods that generate unsupervised detection proposals have proven to be very effective for object detection in still images. These methods open the possibility to use strong but computationally expensive features since only a relatively small number of detection hypotheses need to be assessed. In this paper we make two contributions towards exploiting detection proposals for spatio-temporal detection problems. First, we extend a recent 2D object proposal method, to produce spatio-temporal proposals by a randomized supervoxel merging process. We introduce spatial, temporal, and spatio-temporal pairwise supervoxel features that are used to guide the merging process. Second, we propose a new efficient supervoxel method. We experimentally evaluate our detection proposals, in combination with our new supervoxel method as well as existing ones. This evaluation shows that our supervoxels lead to more accurate proposals when compared to using existing state-of-the-art supervoxel methods.",
"This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches from the human body as human motion is not constrained by the camera. Trajectories consistent with the homography are considered as due to camera motion, and thus removed. We also use the homography to cancel out camera motion from the optical flow. This results in significant improvement on motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative feature encoding approach to the standard bag-of-words (BOW) histogram, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to BOW encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state-of-the-art results.",
"We address the problem of localizing actions, such as opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed \"actoms,\" that are semantically meaningful and characteristic for the action. Our actom sequence model (ASM) represents an action as a sequence of histograms of actom-anchored visual features, which can be seen as a temporally structured extension of the bag-of-features. Training requires the annotation of actoms for action examples. At test time, actoms are localized automatically based on a nonparametric model of the distribution of actoms, which also acts as a prior on an action's temporal structure. We present experimental results on two recent benchmarks for action localization \"Coffee and Cigarettes\" and the \"DLSBP\" dataset. We also adapt our approach to a classification-by-localization set-up and demonstrate its applicability on the challenging \"Hollywood 2\" dataset. We show that our ASM method outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding window method.",
"The detection of human action in videos of busy natural scenes with dynamic background is of interest for applications such as video surveillance. Taking a conventional fully supervised approach, the spatio-temporal locations of the action of interest have to be manually annotated frame by frame in the training videos, which is tedious and unreliable. In this paper, for the first time, a weakly supervised action detection method is proposed which only requires binary labels of the videos indicating the presence of the action of interest. Given a training set of binary labelled videos, the weakly supervised learning (WSL) problem is recast as a multiple instance learning (MIL) problem. A novel MIL algorithm is developed which differs from the existing MIL algorithms in that it locates the action of interest spatially and temporally by globally optimising both interand intra-class distance. We demonstrate through experiments that our WSL approach can achieve comparable detection performance to a fully supervised learning approach, and that the proposed MIL algorithm significantly outperforms the existing ones.",
"We address recognition and localization of human actions in realistic scenarios. In contrast to the previous work studying human actions in controlled settings, here we train and test algorithms on real movies with substantial variation of actions in terms of subject appearance, motion, surrounding scenes, viewing angles and spatio-temporal extents. We introduce a new annotated human action dataset and use it to evaluate several existing methods. We in particular focus on boosted space-time window classifiers and introduce \"keyframe priming\" that combines discriminative models of human motion and shape within an action. Keyframe priming is shown to significantly improve the performance of action detection. We present detection results for the action class \"drinking\" evaluated on two episodes of the movie \"Coffee and Cigarettes\"."
]
} |
1608.06495 | 2509983173 | In this paper, we address the problem of searching action proposals in unconstrained video clips. Our approach starts from actionness estimation on frame-level bounding boxes, and then aggregates the bounding boxes belonging to the same actor across frames via linking, associating, and tracking to generate spatio-temporally continuous action paths. To achieve this, a novel actionness estimation method is first proposed by utilizing both human appearance and motion cues. Then, the association of the action paths is formulated as a maximum set coverage problem with the results of actionness estimation as a priori information. To further improve the performance, we design an improved optimization objective for the problem and provide a greedy search algorithm to solve it. Finally, a tracking-by-detection scheme is designed to further refine the searched action paths. Extensive experiments on two challenging datasets, UCF-Sports and UCF-101, show that the proposed approach advances the state-of-the-art proposal generation performance in terms of both accuracy and proposal quantity. | Some research works address the problem by employing a segmentation-and-merging strategy. Generally, these methods include three steps: i) segment the video; ii) merge the segments to generate tube proposals; iii) represent tubes with dense motion features and construct an action classifier for recognition. For instance, in @cite_24 action tubes are generated by hierarchically merging super-voxels. However, accurate video segmentation is a difficult problem, especially under unconstrained environments. To alleviate the difficulty encountered with segmentation, some other methods use a figure-centric based model. In @cite_23 the human and objects are detected first and then their interactions are described. Kläser et al. @cite_15 detect humans on each frame and track the detection results across frames using optical flow. 
Our approach also utilizes tracking, via a more robust tracking-by-detection approach @cite_25 @cite_21 based on a combined feature representation of color and shape. | {
"cite_N": [
"@cite_21",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_25"
],
"mid": [
"",
"2018068650",
"1989560997",
"1567708943",
"2098941887"
],
"abstract": [
"",
"This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.",
"We introduce an approach for learning human actions as interactions between persons and objects in realistic videos. Previous work typically represents actions with low-level features such as image gradients or optical flow. In contrast, we explicitly localize in space and track over time both the object and the person, and represent an action as the trajectory of the object w.r.t. to the person position. Our approach relies on state-of-the-art techniques for human detection [32], object detection [10], and tracking [39]. We show that this results in human and object tracks of sufficient quality to model and localize human-object interactions in realistic videos. Our human-object interaction features capture the relative trajectory of the object w.r.t. the human. Experimental results on the Coffee and Cigarettes dataset [25], the video dataset of [19], and the Rochester Daily Activities dataset [29] show that 1) our explicit human-object model is an informative cue for action recognition; 2) it is complementary to traditional low-level descriptors such as 3D--HOG [23] extracted over human tracks. We show that combining our human-object interaction features with 3D-HOG improves compared to their individual performance as well as over the state of the art [23], [29].",
"We propose a novel human-centric approach to detect and localize human actions in challenging video data, such as Hollywood movies. Our goal is to localize actions in time through the video and spatially in each frame. We achieve this by first obtaining generic spatio-temporal human tracks and then detecting specific actions within these using a sliding window classifier. We make the following contributions: (i) We show that splitting the action localization task into spatial and temporal search leads to an efficient localization algorithm where generic human tracks can be reused to recognize multiple human actions; (ii) We develop a human detector and tracker which is able to cope with a wide range of postures, articulations, motions and camera viewpoints. The tracker includes detection interpolation and a principled classification stage to suppress false positive tracks; (iii) We propose a track-aligned 3D-HOG action representation, investigate its parameters, and show that action localization benefits from using tracks; and (iv) We introduce a new action localization dataset based on Hollywood movies. Results are presented on a number of real-world movies with crowded, dynamic environment, partial occlusion and cluttered background. On the Coffee&Cigarettes dataset we significantly improve over the state of the art. Furthermore, we obtain excellent results on the new Hollywood---Localization dataset.",
"Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance."
]
} |
1608.06495 | 2509983173 | In this paper, we address the problem of searching action proposals in unconstrained video clips. Our approach starts from actionness estimation on frame-level bounding boxes, and then aggregates the bounding boxes belonging to the same actor across frames via linking, associating, tracking to generate spatial-temporal continuous action paths. To achieve the target, a novel actionness estimation method is firstly proposed by utilizing both human appearance and motion cues. Then, the association of the action paths is formulated as a maximum set coverage problem with the results of actionness estimation as a priori. To further promote the performance, we design an improved optimization objective for the problem and provide a greedy search algorithm to solve it. Finally, a tracking-by-detection scheme is designed to further refine the searched action paths. Extensive experiments on two challenging datasets, UCF-Sports and UCF-101, show that the proposed approach advances state-of-the-art proposal generation performance in terms of both accuracy and proposal quantity. | Recently, several methods built upon the generation of action proposals have been presented. Gkioxari et al. @cite_17 proposed to utilize the Selective Search method for proposing actions on each frame, then scored those proposals using features extracted by a two-stream Convolutional Neural Network (CNN), and finally linked them to form action tubes. Philippe et al. @cite_10 adopted the same feature extraction procedure, then utilized a tracking-by-detection approach to link frame-level detections, in combination with a class-specific detector. Our method replaces the object proposal method and the two-stream CNN with the Faster R-CNN model for computational efficiency. The most closely related work to ours is that presented in @cite_2 , in which an actionness score is calculated for each action path and a greedy search method is then used to generate proposals.
Our work differs from theirs in the following three aspects: i) we train a Faster R-CNN model for human estimation, which has a stronger ability to differentiate humans from backgrounds; ii) compared with the optimization objective they proposed, our improved optimization objective simultaneously maximizes actionness score and member similarity in a path set, and thus can effectively cluster the paths from the same actor into a group; iii) we utilize a tracking-by-detection approach to supplement the missing detections. | {
"cite_N": [
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"1945129080",
"2950966695",
""
],
"abstract": [
"In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the actionness of the video tubes. After picking those spatiotemporal paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search.",
"We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15 , 7 and 12 respectively in mAP.",
""
]
} |
1608.06197 | 2950826633 | Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (<100 images) and deep learning based approaches require large amounts of training data, we perform multi-scale data augmentation. Augmenting the training samples in such a manner helps in guiding the CNN to learn scale invariant representations. Our method is tested on the challenging UCF_CC_50 dataset, and shown to outperform the state of the art methods. | Some works in the crowd counting literature experiment on datasets having sparse crowd scenes @cite_14 @cite_10 , such as UCSD dataset @cite_14 , Mall dataset @cite_9 and PETS dataset @cite_1 . In contrast, our method has been evaluated on highly dense crowd images which pose the challenges discussed in the previous section. Methods introduced in @cite_2 and @cite_19 exploit patterns of motion to estimate the count of moving objects. However, these methods rely on motion information which can be obtained only in the case of continuous video streams with a good frame rate, and do not extend to still image crowd counting. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_2",
"@cite_10"
],
"mid": [
"2123175289",
"1976959044",
"2152813046",
"2161841955",
"2096229530",
""
],
"abstract": [
"We present a privacy-preserving system for estimating the size of inhomogeneous crowds, composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking. First, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic textures motion model. Second, a set of simple holistic features is extracted from each segmented region, and the correspondence between features and the number of people per segment is learned with Gaussian process regression. We validate both the crowd segmentation algorithm, and the crowd counting system, on a large pedestrian dataset (2000 frames of video, containing 49,885 total pedestrian instances). Finally, we present results of the system running on a full hour of video.",
"This paper presents a multi-output regression model for crowd counting in public scenes. Existing counting by regression methods either learn a single model for global counting, or train a large number of separate regressors for localised density estimation. In contrast, our single regression model based approach is able to estimate people count in spatially localised regions and is more scalable without the need for training a large number of regressors proportional to the number of local regions. In particular, the proposed model automatically learns the functional mapping between interdependent low-level features and multi-dimensional structured outputs. The model is able to discover the inherent importance of different features for people counting at different spatial locations. Extensive evaluations on an existing crowd analysis benchmark dataset and a new more challenging dataset demonstrate the effectiveness of our approach.",
"This paper describes the crowd image analysis challenge that forms part of the PETS 2010 workshop. The aim of this challenge is to use new or existing systems for i) crowd count and density estimation, ii) tracking of individual(s) within a crowd, and iii) detection of separate flows and specific crowd events, in a real-world environment. The dataset scenarios were filmed from multiple cameras and involve multiple actors.",
"In its full generality, motion analysis of crowded objects necessitates recognition and segmentation of each moving entity. The difficulty of these tasks increases considerably with occlusions and therefore with crowding. When the objects are constrained to be of the same kind, however, partitioning of densely crowded semi-rigid objects can be accomplished by means of clustering tracked feature points. We base our approach on a highly parallelized version of the KLT tracker in order to process the video into a set of feature trajectories. While such a set of trajectories provides a substrate for motion analysis, their unequal lengths and fragmented nature present difficulties for subsequent processing. To address this, we propose a simple means of spatially and temporally conditioning the trajectories. Given this representation, we integrate it with a learned object descriptor to achieve a segmentation of the constituent motions. We present experimental results for the problem of estimating the number of moving objects in a dense crowd as a function of time.",
"While crowds of various subjects may offer applicationspecific cues to detect individuals, we demonstrate that for the general case, motion itself contains more information than previously exploited. This paper describes an unsupervised data driven Bayesian clustering algorithm which has detection of individual entities as its primary goal. We track simple image features and probabilistically group them into clusters representing independently moving entities. The numbers of clusters and the grouping of constituent features are determined without supervised learning or any subject-specific model. The new approach is instead, that space-time proximity and trajectory coherence through image space are used as the only probabilistic criteria for clustering. An important contribution of this work is how these criteria are used to perform a one-shot data association without iterating through combinatorial hypotheses of cluster assignments. Our proposed general detection algorithm can be augmented with subject-specific filtering, but is shown to already be effective at detecting individual entities in crowds of people, insects, and animals. This paper and the associated video examine the implementation and experiments of our motion clustering framework.",
""
]
} |
1608.06197 | 2950826633 | Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (<100 images) and deep learning based approaches require large amounts of training data, we perform multi-scale data augmentation. Augmenting the training samples in such a manner helps in guiding the CNN to learn scale invariant representations. Our method is tested on the challenging UCF_CC_50 dataset, and shown to outperform the state of the art methods. | The algorithm proposed by Idrees et al. @cite_3 is based on the understanding that it is difficult to obtain an accurate crowd count using a single feature. To overcome this, they use a combination of handcrafted features: HOG-based head detections, Fourier analysis, and interest-point-based counting. The post-processing is done using a multi-scale Markov Random Field. However, handcrafted features often suffer a drop in accuracy when subjected to variations in illumination, perspective distortion, severe occlusion, etc. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2072232009"
],
"abstract": [
"We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance."
]
} |
1608.06197 | 2950826633 | Our work proposes a novel deep learning framework for estimating crowd density from static images of highly dense crowds. We use a combination of deep and shallow, fully convolutional networks to predict the density map for a given crowd image. Such a combination is used for effectively capturing both the high-level semantic information (face body detectors) and the low-level features (blob detectors), that are necessary for crowd counting under large scale variations. As most crowd datasets have limited training samples (<100 images) and deep learning based approaches require large amounts of training data, we perform multi-scale data augmentation. Augmenting the training samples in such a manner helps in guiding the CNN to learn scale invariant representations. Our method is tested on the challenging UCF_CC_50 dataset, and shown to outperform the state of the art methods. | Though Zhang et al. @cite_5 utilize a deep network to estimate crowd count, their model is trained using perspective maps of images. Generating these perspective maps is a laborious and often infeasible process. We use a simpler approach for training our model, yet obtain better performance. Wang et al. @cite_16 also train a deep model for crowd count estimation. Their model, however, is trained to determine only the crowd count and not the crowd density map, which is crucial for crowd analysis. Our network estimates both the crowd count and the crowd density distribution. | {
"cite_N": [
"@cite_5",
"@cite_16"
],
"mid": [
"1910776219",
"1978232622"
],
"abstract": [
"Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach.",
"People counting in extremely dense crowds is an important step for video surveillance and anomaly warning. The problem becomes especially more challenging due to the lack of training samples, severe occlusions, cluttered scenes and variation of perspective. Existing methods either resort to auxiliary human and face detectors or surrogate by estimating the density of crowds. Most of them rely on hand-crafted features, such as SIFT, HOG etc, and thus are prone to fail when density grows or the training sample is scarce. In this paper we propose an end-to-end deep convolutional neural networks (CNN) regression model for counting people of images in extremely dense crowds. Our method has following characteristics. Firstly, it is a deep model built on CNN to automatically learn effective features for counting. Besides, to weaken influence of background like buildings and trees, we purposely enrich the training data with expanded negative samples whose ground truth counting is set as zero. With these negative samples, the robustness can be enhanced. Extensive experimental results show that our method achieves superior performance than the state-of-the-arts in term of the mean and variance of absolute difference."
]
} |
1608.05852 | 2514117879 | While word embeddings are currently predominant for natural language processing, most of existing models learn them solely from their contexts. However, these context-based word embeddings are limited since not all words' meaning can be learned based on only context. Moreover, it is also difficult to learn the representation of the rare words due to data sparsity problem. In this work, we address these issues by learning the representations of words by integrating their intrinsic (descriptive) and extrinsic (contextual) information. To prove the effectiveness of our model, we evaluate it on four tasks, including word similarity, reverse dictionaries,Wiki link prediction, and document classification. Experiment results show that our model is powerful in both word and document modeling. | Distributed word representations were first introduced by , and have since been successfully used in many NLP tasks, including language modeling @cite_13 , parsing @cite_16 , disambiguation @cite_21 , and many others. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_21"
],
"mid": [
"2133280805",
"100623710",
"2158899491"
],
"abstract": [
"Natural language parsing has typically been done with small sets of discrete categories such as NP and VP, but this representation does not capture the full syntactic nor semantic richness of linguistic phrases, and attempts to improve on this by lexicalizing phrases or splitting categories only partly address the problem at the cost of huge feature spaces and sparseness. Instead, we introduce a Compositional Vector Grammar (CVG), which combines PCFGs with a syntactically untied recursive neural network that learns syntactico-semantic, compositional vector representations. The CVG improves the PCFG of the Stanford Parser by 3.8 to obtain an F1 score of 90.4 . It is fast to train and implemented approximately as an efficient reranker it is about 20 faster than the current Stanford factored parser. The CVG learns a soft notion of head words and improves performance on the types of ambiguities that require semantic information such as PP attachments.",
"A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements."
]
} |
1608.05852 | 2514117879 | While word embeddings are currently predominant for natural language processing, most of existing models learn them solely from their contexts. However, these context-based word embeddings are limited since not all words' meaning can be learned based on only context. Moreover, it is also difficult to learn the representation of the rare words due to data sparsity problem. In this work, we address these issues by learning the representations of words by integrating their intrinsic (descriptive) and extrinsic (contextual) information. To prove the effectiveness of our model, we evaluate it on four tasks, including word similarity, reverse dictionaries,Wiki link prediction, and document classification. Experiment results show that our model is powerful in both word and document modeling. | Previously, word embeddings were often the by-product of a language model @cite_13 @cite_12 @cite_3 . However, such methods are often time-consuming and involve many non-linear computations. Recently, proposed two log-linear models, namely CBOW and Skip-gram, to learn word embeddings directly from a large-scale text corpus efficiently. GloVe, proposed by , is also an efficient word embedding learning framework, which combines global word co-occurrence statistics with local context window information. | {
"cite_N": [
"@cite_3",
"@cite_13",
"@cite_12"
],
"mid": [
"179875071",
"100623710",
"2117130368"
],
"abstract": [
"A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition",
"A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech.",
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance."
]
} |
1608.06009 | 2508685579 | We introduce the Random Access Zipper (RAZ), a simple, purely-functional data structure for editable sequences. A RAZ combines the structure of a zipper with that of a tree: like a zipper, edits at the cursor require constant time; by leveraging tree structure, relocating the edit cursor in the sequence requires logarithmic time. While existing data structures provide these time bounds, none do so with the same simplicity and brevity of code as the RAZ. The simplicity of the RAZ provides the opportunity for more programmers to extend the structure to their own needs, and we provide some suggestions for how to do so. | While similar in asymptotic costs, in settings that demand and/or which employ , the 2-3 finger tree and are significantly different; this can impact the asymptotics of comparing sequences for equality. In the presence of hash-consing, structural identity coincides with physical identity, allowing for @math -time equality checks of arbitrarily long sequences. As a result of their approach, 2-3 finger trees are history-dependent. This fact makes them unsuitable for settings such as memoization-based incremental computing @cite_3 @cite_0 @cite_6 . | {
"cite_N": [
"@cite_0",
"@cite_6",
"@cite_3"
],
"mid": [
"2106656979",
"2050204069",
"2035829578"
],
"abstract": [
"Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered. To address these problems, we present λiccdd, a core calculus that applies a demand-driven semantics to incremental computation, tracking changes in a hierarchical fashion in a novel demanded computation graph. λiccdd also formalizes an explicit separation between inner, incremental computations and outer observers. This combination ensures λiccdd programs only recompute computations as demanded by observers, and allows inner computations to be reused more liberally. We present Adapton, an OCaml library implementing λiccdd. We evaluated Adapton on a range of benchmarks, and found that it provides reliable speedups, and in many cases dramatically outperforms state-of-the-art IC approaches.",
"Over the past thirty years, there has been significant progress in developing general-purpose, language-based approaches to incremental computation, which aims to efficiently update the result of a computation when an input is changed. A key design challenge in such approaches is how to provide efficient incremental support for a broad range of programs. In this paper, we argue that first-class names are a critical linguistic feature for efficient incremental computation. Names identify computations to be reused across differing runs of a program, and making them first class gives programmers a high level of control over reuse. We demonstrate the benefits of names by presenting Nominal Adapton, an ML-like language for incremental computation with names. We describe how to use Nominal Adapton to efficiently incrementalize several standard programming patterns---including maps, folds, and unfolds---and show how to build efficient, incremental probabilistic trees and tries. Since Nominal Adapton's implementation is subtle, we formalize it as a core calculus and prove it is from-scratch consistent, meaning it always produces the same answer as simply re-running the computation. Finally, we demonstrate that Nominal Adapton can provide large speedups over both from-scratch computation and Adapton, a previous state-of-the-art incremental computation system.",
""
]
} |
1608.06009 | 2508685579 | We introduce the Random Access Zipper (RAZ), a simple, purely-functional data structure for editable sequences. A RAZ combines the structure of a zipper with that of a tree: like a zipper, edits at the cursor require constant time; by leveraging tree structure, relocating the edit cursor in the sequence requires logarithmic time. While existing data structures provide these time bounds, none do so with the same simplicity and brevity of code as the RAZ. The simplicity of the RAZ provides the opportunity for more programmers to extend the structure to their own needs, and we provide some suggestions for how to do so. | The RRB-Vector @cite_2 uses a balanced tree to represent immutable vectors, focusing on practical issues such as parallel performance and cache locality. These performance considerations are outside the scope of our current work, but are interesting for future work. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2036610497"
],
"abstract": [
"State-of-the-art immutable collections have wildly differing performance characteristics across their operations, often forcing programmers to choose different collection implementations for each task. Thus, changes to the program can invalidate the choice of collections, making code evolution costly. It would be desirable to have a collection that performs well for a broad range of operations. To this end, we present the RRB-Vector, an immutable sequence collection that offers good performance across a large number of sequential and parallel operations. The underlying innovations are: (1) the Relaxed-Radix-Balanced (RRB) tree structure, which allows efficient structural reorganization, and (2) an optimization that exploits spatio-temporal locality on the RRB data structure in order to offset the cost of traversing the tree. In our benchmarks, the RRB-Vector speedup for parallel operations is lower bounded by 7x when executing on 4 CPUs of 8 cores each. The performance for discrete operations, such as appending on either end, or updating and removing elements, is consistently good and compares favorably to the most important immutable sequence collections in the literature and in use today. The memory footprint of RRB-Vector is on par with arrays and an order of magnitude less than competing collections."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Some approaches focus on binary classes such as foreground and background segmentation @cite_40 @cite_3 . This field includes also some work that has a focus on anomaly detection @cite_5 since authors use a single-class classification scheme and constructed an outlier detection method for all other categories. Some other approaches concentrate on multi-class segmentation @cite_17 @cite_43 @cite_38 @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_3",
"@cite_43",
"@cite_40",
"@cite_5",
"@cite_17"
],
"mid": [
"2083542343",
"2029859592",
"2017691720",
"1961270558",
"2264563156",
"1931450083",
"2139086308"
],
"abstract": [
"We formulate a layered model for object detection and image segmentation. We describe a generative probabilistic model that composites the output of a bank of object detectors in order to define shape masks and explain the appearance, depth ordering, and labels of all pixels in an image. Notably, our system estimates both class labels and object instance labels. Building on previous benchmark criteria for object detection and image segmentation, we define a novel score that evaluates both class and instance segmentation. We evaluate our system on the PASCAL 2009 and 2010 segmentation challenge data sets and show good test results with state-of-the-art performance in several categories, including segmenting humans.",
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets.",
"We present a novel framework for generating and ranking plausible objects hypotheses in an image using bottom-up processes and mid-level cues. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge about properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. We then learn to rank the object hypotheses by training a continuous model to predict how plausible the segments are, given their mid-level region properties. We show that this algorithm significantly outperforms the state of the art for low-level segmentation in the VOC09 segmentation dataset. It achieves the same average best segmentation covering as the best performing technique to date [2], 0.61 when using just the top 7 ranked segments, instead of the full hierarchy in [2]. Our method achieves 0.78 average best covering using 154 segments. In a companion paper [18], we also show that the algorithm achieves state-of-the art results when used in a segmentation-based recognition pipeline.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"Pixel-wise street segmentation of photographs taken from a drivers perspective is important for self-driving cars and can also support other object recognition tasks. A framework called SST was developed to examine the accuracy and execution time of different neural networks. The best neural network achieved an @math -score of 89.5 with a simple feedforward neural network which trained to solve a regression task.",
"In this paper, we propose a method for real-time anomaly detection and localization in crowded scenes. Each video is defined as a set of non-overlapping cubic patches, and is described using two local and global descriptors. These descriptors capture the video properties from different aspects. By incorporating simple and cost-effective Gaussian classifiers, we can distinguish normal activities and anomalies in videos. The local and global features are based on structure similarity between adjacent patches and the features learned in an unsupervised way, using a sparse auto-encoder. Experimental results show that our algorithm is comparable to a state-of-the-art procedure on UCSD ped2 and UMN benchmarks, but even more time-efficient. The experiments confirm that our system can reliably detect and localize anomalies as soon as they happen in a video.",
"The effective propagation of pixel labels through the spatial and temporal domains is vital to many computer vision and multimedia problems, yet little attention have been paid to the temporal video domain propagation in the past. Previous video label propagation algorithms largely avoided the use of dense optical flow estimation due to their computational costs and inaccuracies, and relied heavily on complex (and slower) appearance models. We show in this paper the limitations of pure motion and appearance based propagation methods alone, especially the fact that their performances vary on different type of videos. We propose a probabilistic framework that estimates the reliability of the sources and automatically adjusts the weights between them. Our experiments show that the “dragging effect” of pure optical-flow-based methods are effectively avoided, while the problems of pure appearance-based methods such the large intra-class variance is also effectively handled."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Recently created video datasets provide typically image data in RGB format. Correspondingly, there is no recent research on gray-scale semantic video segmentation; the use of RGB data is common standard, see @cite_8 @cite_45 @cite_43 @cite_38 @cite_28 . There are also some segmentation approaches that use RGB-D datasets @cite_37 @cite_41 @cite_39 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_8",
"@cite_28",
"@cite_41",
"@cite_39",
"@cite_43",
"@cite_45"
],
"mid": [
"2029859592",
"2950436315",
"1994356125",
"1920142129",
"",
"1903208982",
"1961270558",
"2024938892"
],
"abstract": [
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets.",
"We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.",
"Computational and memory costs restrict spectral techniques to rather small graphs, which is a serious limitation especially in video segmentation. In this paper, we propose the use of a reduced graph based on superpixels. In contrast to previous work, the reduced graph is reweighted such that the resulting segmentation is equivalent, under certain assumptions, to that of the full graph. We consider equivalence in terms of the normalized cut and of its spectral clustering relaxation. The proposed method reduces runtime and memory consumption and yields on par results in image and video segmentation. Further, it enables an efficient data representation and update for a new streaming video segmentation approach that also achieves state-of-the-art performance.",
"Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"",
"We propose a new approach for semantic segmentation of 3D city models. Starting from an SfM reconstruction of a street-side scene, we perform classification and facade splitting purely in 3D, obviating the need for slow image-based semantic segmentation methods. We show that a properly trained pure-3D approach produces high quality labelings, with significant speed benefits (20x faster) allowing us to analyze entire streets in a matter of minutes. Additionally, if speed is not of the essence, the 3D labeling can be combined with the results of a state-of-the-art 2D classifier, further boosting the performance. Further, we propose a novel facade separation based on semantic nuances between facades. Finally, inspired by the use of architectural principles for 2D facade labeling, we propose new 3D-specific principles and an efficient optimization scheme based on an integer quadratic programming formulation.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6 on the challenging VSB100 benchmark, while reducing its runtime by 55 , as the learnt graph is much sparser."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | We recall briefly some common local or global feature extraction methods in the semantic segmentation field. These feature extraction methods are commonly used after having super-voxels extracted from video frames @cite_38 . | {
"cite_N": [
"@cite_38"
],
"mid": [
"2029859592"
],
"abstract": [
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Pixel color features are features used in almost every semantic segmentation system @cite_8 @cite_45 @cite_43 @cite_38 @cite_39 . These include three channel values for RGB or HSV images, and also values obtained by histogram equalization methods. The histogram of oriented gradients (HOG) defines a set of features combining at sets of pixels approximated gradient values for partial derivatives in @math or @math direction @cite_45 @cite_43 . Some approaches also used other histogram definitions such as the hue color histogram or a texton histogram @cite_28 . | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_28",
"@cite_39",
"@cite_43",
"@cite_45"
],
"mid": [
"2029859592",
"1994356125",
"1920142129",
"1903208982",
"1961270558",
"2024938892"
],
"abstract": [
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets.",
"Computational and memory costs restrict spectral techniques to rather small graphs, which is a serious limitation especially in video segmentation. In this paper, we propose the use of a reduced graph based on superpixels. In contrast to previous work, the reduced graph is reweighted such that the resulting segmentation is equivalent, under certain assumptions, to that of the full graph. We consider equivalence in terms of the normalized cut and of its spectral clustering relaxation. The proposed method reduces runtime and memory consumption and yields on par results in image and video segmentation. Further, it enables an efficient data representation and update for a new streaming video segmentation approach that also achieves state-of-the-art performance.",
"Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"We propose a new approach for semantic segmentation of 3D city models. Starting from an SfM reconstruction of a street-side scene, we perform classification and facade splitting purely in 3D, obviating the need for slow image-based semantic segmentation methods. We show that a properly trained pure-3D approach produces high quality labelings, with significant speed benefits (20x faster) allowing us to analyze entire streets in a matter of minutes. Additionally, if speed is not of the essence, the 3D labeling can be combined with the results of a state-of-the-art 2D classifier, further boosting the performance. Further, we propose a novel facade separation based on semantic nuances between facades. Finally, inspired by the use of architectural principles for 2D facade labeling, we propose new 3D-specific principles and an efficient optimization scheme based on an integer quadratic programming formulation.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6 on the challenging VSB100 benchmark, while reducing its runtime by 55 , as the learnt graph is much sparser."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Further appearance-based features are defined as across-boundary appearance features, texture features, or spatio-temporal appearance features; see @cite_8 @cite_45 @cite_43 @cite_38 . Some approaches that use RGB-D datasets, also include 3-dimensional (3D) positions or 3D optical flow features @cite_41 @cite_39 . Recently, some approaches are published that use CNNs for feature extraction; using pre-trained models for feature representation is common in @cite_31 @cite_37 @cite_0 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_8",
"@cite_41",
"@cite_39",
"@cite_0",
"@cite_43",
"@cite_45",
"@cite_31"
],
"mid": [
"2029859592",
"2950436315",
"1994356125",
"",
"1903208982",
"1952506714",
"1961270558",
"2024938892",
"1910657905"
],
"abstract": [
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets.",
"We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.",
"Computational and memory costs restrict spectral techniques to rather small graphs, which is a serious limitation especially in video segmentation. In this paper, we propose the use of a reduced graph based on superpixels. In contrast to previous work, the reduced graph is reweighted such that the resulting segmentation is equivalent, under certain assumptions, to that of the full graph. We consider equivalence in terms of the normalized cut and of its spectral clustering relaxation. The proposed method reduces runtime and memory consumption and yields on par results in image and video segmentation. Further, it enables an efficient data representation and update for a new streaming video segmentation approach that also achieves state-of-the-art performance.",
"",
"We propose a new approach for semantic segmentation of 3D city models. Starting from an SfM reconstruction of a street-side scene, we perform classification and facade splitting purely in 3D, obviating the need for slow image-based semantic segmentation methods. We show that a properly trained pure-3D approach produces high quality labelings, with significant speed benefits (20x faster) allowing us to analyze entire streets in a matter of minutes. Additionally, if speed is not of the essence, the 3D labeling can be combined with the results of a state-of-the-art 2D classifier, further boosting the performance. Further, we propose a novel facade separation based on semantic nuances between facades. Finally, inspired by the use of architectural principles for 2D facade labeling, we propose new 3D-specific principles and an efficient optimization scheme based on an integer quadratic programming formulation.",
"In this paper, we propose an approach that exploits object segmentation in order to improve the accuracy of object detection. We frame the problem as inference in a Markov Random Field, in which each detection hypothesis scores object appearance as well as contextual information using Convolutional Neural Networks, and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals. This enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections. Our experiments show an improvement of 4.1 in mAP over the R-CNN baseline on PASCAL VOC 2010, and 3.4 over the current state-of-the-art, demonstrating the power of our approach.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6 on the challenging VSB100 benchmark, while reducing its runtime by 55 , as the learnt graph is much sparser.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well known DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and more efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at this http URL"
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Some researchers aimed to propose a (very) general image segmentation approach. For this reason, they concentrated on using unsupervised segmentation. This field includes clustering algorithms such as k-means and mean-shift @cite_34 , or graph-based algorithms @cite_8 @cite_41 @cite_45 @cite_36 . | {
"cite_N": [
"@cite_8",
"@cite_41",
"@cite_36",
"@cite_45",
"@cite_34"
],
"mid": [
"1994356125",
"",
"2109154902",
"2024938892",
"2049612634"
],
"abstract": [
"Computational and memory costs restrict spectral techniques to rather small graphs, which is a serious limitation especially in video segmentation. In this paper, we propose the use of a reduced graph based on superpixels. In contrast to previous work, the reduced graph is reweighted such that the resulting segmentation is equivalent, under certain assumptions, to that of the full graph. We consider equivalence in terms of the normalized cut and of its spectral clustering relaxation. The proposed method reduces runtime and memory consumption and yields on par results in image and video segmentation. Further, it enables an efficient data representation and update for a new streaming video segmentation approach that also achieves state-of-the-art performance.",
"",
"Weakly supervised image segmentation is a challenging problem in computer vision field. In this paper, we present a new weakly supervised image segmentation algorithm by learning the distribution of spatially structured super pixel sets from image-level labels. Specifically, we first extract graph lets from each image where a graph let is a small-sized graph consisting of super pixels as its nodes and it encapsulates the spatial structure of those super pixels. Then, a manifold embedding algorithm is proposed to transform graph lets of different sizes into equal-length feature vectors. Thereafter, we use GMM to learn the distribution of the post-embedding graph lets. Finally, we propose a novel image segmentation algorithm, called graph let cut, that leverages the learned graph let distribution in measuring the homogeneity of a set of spatially structured super pixels. Experimental results show that the proposed approach outperforms state-of-the-art weakly supervised image segmentation methods, and its performance is comparable to those of the fully supervised segmentation models.",
"Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6 on the challenging VSB100 benchmark, while reducing its runtime by 55 , as the learnt graph is much sparser.",
"In this paper, we propose a novel Weakly-Supervised Dual Clustering (WSDC) approach for image semantic segmentation with image-level labels, i.e., collaboratively performing image segmentation and tag alignment with those regions. The proposed approach is motivated from the observation that super pixels belonging to an object class usually exist across multiple images and hence can be gathered via the idea of clustering. In WSDC, spectral clustering is adopted to cluster the super pixels obtained from a set of over-segmented images. At the same time, a linear transformation between features and labels as a kind of discriminative clustering is learned to select the discriminative features among different classes. The both clustering outputs should be consistent as much as possible. Besides, weakly-supervised constraints from image-level labels are imposed to restrict the labeling of super pixels. Finally, the non-convex and non-smooth objective function are efficiently optimized using an iterative CCCP procedure. Extensive experiments conducted on MSRC and Label Me datasets demonstrate the encouraging performance of our method in comparison with some state-of-the-arts."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | A random decision forest (RDF) can be used to define another segmentation method: a classifier composed of multiple classifiers that are trained and enhanced by using randomness extensively @cite_26 @cite_46 . The support vector machine (SVM) @cite_4 or a Markov random field (MRF) @cite_42 @cite_21 are further methods used for segmentation, but they are not as popular as the conditional random field (CRF), which is in widespread use in recent work @cite_19 @cite_43 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_4",
"@cite_21",
"@cite_42",
"@cite_19",
"@cite_43",
"@cite_46"
],
"mid": [
"2170325868",
"1565402342",
"1937812750",
"2083620454",
"1915480574",
"2027693089",
"1961270558",
"986585644"
],
"abstract": [
"Recent trends in semantic image segmentation have pushed for holistic scene understanding models that jointly reason about various tasks such as object detection, scene recognition, shape analysis, contextual reasoning. In this work, we are interested in understanding the roles of these different tasks in aiding semantic segmentation. Towards this goal, we \"plug-in\" human subjects for each of the various components in a state-of-the-art conditional random field model (CRF) on the MSRC dataset. Comparisons among various hybrid human-machine CRFs give us indications of how much \"head room\" there is to improve segmentation by focusing research efforts on each of the tasks. One of the interesting findings from our slew of studies was that human classification of isolated super-pixels, while being worse than current machine classifiers, provides a significant boost in performance when plugged into the CRF! Fascinated by this finding, we conducted in depth analysis of the human generated potentials. This inspired a new machine potential which significantly improves state-of-the-art performance on the MRSC dataset.",
"In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3 , which is a 56 relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24 relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.",
"Traditionally, land-cover mapping from remote sensing images is performed by classifying each atomic region in the image in isolation and by enforcing simple smoothing priors via random fields models as two independent steps. In this paper, we propose to model the segmentation problem by a discriminatively trained Conditional Random Field (CRF). To this end, we employ Structured Support Vector Machines (SSVM) to learn the weights of an informative set of appearance descriptors jointly with local class interactions. We propose a principled strategy to learn pairwise potentials encoding local class preferences from sparsely annotated ground truth. We show that this approach outperform standard baselines and more expressive CRF models, improving by 4–6 points the average class accuracy on a challenging dataset involving urban high resolution satellite imagery.",
"This paper proposes a novel object-based Markov random field model (OMRF) for semantic segmentation of remote sensing images. First, the method employs the region size and edge information to build a weighted region adjacency graph (WRAG) for capturing the complicated interactions among objects. Thereafter, aimed at modeling object interactions in the OMRF, the size and edge information are further introduced into the Gibbs joint distribution of the random field as regional penalties. Finally, the semantic segmentation is achieved through a principled probabilistic inference of the OMRF with regional penalties. The proposed method is compared with other MRF-based methods and some state-of-the-art methods. Experiments are conducted on a series of synthetic and real-world images. Segmentation results demonstrate that our method provides better performance (an accuracy improvement about 3 ). Moreover, we further discuss the application of the proposed method for classification.",
"This paper proposes a learning-based approach to scene parsing inspired by the deep Recursive Context Propagation Network (RCPN). RCPN is a deep feed-forward neural network that utilizes the contextual information from the entire image, through bottom-up followed by top-down context propagation via random binary parse trees. This improves the feature representation of every super-pixel in the image for better classification into semantic categories. We analyze RCPN and propose two novel contributions to further improve the model. We first analyze the learning of RCPN parameters and discover the presence of bypass error paths in the computation graph of RCPN that can hinder contextual propagation. We propose to tackle this problem by including the classification loss of the internal nodes of the random parse trees in the original RCPN loss function. Secondly, we use an MRF on the parse tree nodes to model the hierarchical dependency present in the output. Both modifications provide performance boosts over the original RCPN and the new system achieves state-of-the-art performance on Stanford Background, SIFT-Flow and Daimler urban datasets.",
"We present an approach MSIL-CRF that incorporates multiple instance learning (MIL) into conditional random fields (CRFs). It can generalize CRFs to work on training data with uncertain labels by the principle of MIL. In this work, it is applied to saving manual efforts on annotating training data for semantic segmentation. Specifically, we consider the setting in which the training dataset for semantic segmentation is a mixture of a few object segments and an abundant set of objects' bounding boxes. Our goal is to infer the unknown object segments enclosed by the bounding boxes so that they can serve as training data for semantic segmentation. To this end, we generate multiple segment hypotheses for each bounding box with the assumption that at least one hypothesis is close to the ground truth. By treating a bounding box as a bag with its segment hypotheses as structured instances, MSIL-CRF selects the most likely segment hypotheses by leveraging the knowledge derived from both the labeled and uncertain training data. The experimental results on the Pascal VOC segmentation task demonstrate that MSIL-CRF can provide effective alternatives to manually labeled segments for semantic segmentation.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"We consider the task of pixel-wise semantic segmentation given a small set of labeled training images. Among two of the most popular techniques to address this task are Random Forests (RF) and Neural Networks (NN). The main contribution of this work is to explore the relationship between two special forms of these techniques: stacked RFs and deep Convolutional Neural Networks (CNN). We show that there exists a mapping from stacked RF to deep CNN, and an approximate mapping back. This insight gives two major practical benefits: Firstly, deep CNNs can be intelligently constructed and initialized, which is crucial when dealing with a limited amount of training data. Secondly, it can be utilized to create a new stacked RF with improved performance. Furthermore, this mapping yields a new CNN architecture, that is well suited for pixel-wise semantic labeling. We experimentally verify these practical benefits for two different application scenarios in computer vision and biology, where the layout of parts is important: Kinect-based body part labeling from depth images, and somite segmentation in microscopy images of developing zebrafish."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Neural networks are a very popular method for image segmentation, especially with the recent success of convolutional neural networks in the semantic segmentation field. As for many other vision tasks, neural networks have become very useful @cite_31 @cite_20 @cite_37 @cite_6 @cite_25 @cite_0 . | {
"cite_N": [
"@cite_37",
"@cite_6",
"@cite_0",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"2950436315",
"586034241",
"1952506714",
"1910657905",
"1903029394",
"2102605133"
],
"abstract": [
"We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.",
"We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches with much less training images with strong annotations in PASCAL VOC dataset.",
"In this paper, we propose an approach that exploits object segmentation in order to improve the accuracy of object detection. We frame the problem as inference in a Markov Random Field, in which each detection hypothesis scores object appearance as well as contextual information using Convolutional Neural Networks, and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals. This enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections. Our experiments show an improvement of 4.1 in mAP over the R-CNN baseline on PASCAL VOC 2010, and 3.4 over the current state-of-the-art, demonstrating the power of our approach.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well known DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and more efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at this http URL",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Fully convolutional networks (FCNs) have recently attracted considerable research interest. An FCN extends a convolutional network (ConvNet) to accept arbitrary-sized inputs @cite_25 . Over the course of its development, the idea has been applied to 1-dimensional (1D) and 2-dimensional (2D) inputs @cite_33 @cite_32 , and to various tasks such as image restoration, sliding-window detection, depth estimation, boundary prediction, and semantic segmentation. 
In recent years, many approaches have used ConvNets as feature extractors @cite_31 @cite_37 @cite_0 . Some turn a ConvNet into an FCN by discarding the final classifier layer and converting all fully connected layers into convolutions; the resulting network then serves as a front-end module for the vision task at hand @cite_31 @cite_20 @cite_37 @cite_6 @cite_25 @cite_0 . | {
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"2950436315",
"2100921332",
"2166559794",
"586034241",
"1952506714",
"1910657905",
"1903029394",
"2102605133"
],
"abstract": [
"We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.",
"We present a feed-forward network architecture for recognizing an unconstrained handwritten multi-digit string. This is an extension of previous work on recognizing isolated digits. In this architecture a single digit recognizer is replicated over the input. The output layer of the network is coupled to a Viterbi alignment module that chooses the best interpretation of the input. Training errors are propagated through the Viterbi module. The novelty in this procedure is that segmentation is done on the feature maps developed in the Space Displacement Neural Network (SDNN) rather than the input (pixel) space.",
"This paper describes the use of a convolutional neural network to perform address block location on machine-printed mail pieces. Locating the address block is a difficult object recognition problem because there is often a large amount of extraneous printing on a mail piece and because address blocks vary dramatically in size and shape. We used a convolutional locator network with four outputs, each trained to find a different corner of the address block. A simple set of rules was used to generate ABL candidates from the network output. The system performs very well: when allowed five guesses, the network will tightly bound the address delivery information in 98.2 of the cases.",
"We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches with much less training images with strong annotations in PASCAL VOC dataset.",
"In this paper, we propose an approach that exploits object segmentation in order to improve the accuracy of object detection. We frame the problem as inference in a Markov Random Field, in which each detection hypothesis scores object appearance as well as contextual information using Convolutional Neural Networks, and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals. This enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections. Our experiments show an improvement of 4.1 in mAP over the R-CNN baseline on PASCAL VOC 2010, and 3.4 over the current state-of-the-art, demonstrating the power of our approach.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well known DeepLab-LargeFOV, DeconvNet architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and more efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at this http URL",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
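The fully-connected-to-convolution conversion described in the related-work text above can be sketched concretely (an illustrative NumPy example, not from any cited implementation; the function name `conv_valid` is hypothetical): a fully connected layer over a flattened K×K feature map is arithmetically identical to a valid K×K convolution with the same weights, and on a larger input the convolutional form slides to produce a dense score map.

```python
import numpy as np

def conv_valid(x, w):
    """Valid (no-padding) multi-channel 2-D correlation.
    x: (C_in, H, W) feature map, w: (C_out, C_in, K, K) filters."""
    C_out, C_in, K, _ = w.shape
    H, W_ = x.shape[1], x.shape[2]
    out = np.zeros((C_out, H - K + 1, W_ - K + 1))
    for i in range(H - K + 1):
        for j in range(W_ - K + 1):
            patch = x[:, i:i + K, j:j + K]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

rng = np.random.default_rng(1)
C_in, C_out, K = 2, 3, 4
w = rng.normal(size=(C_out, C_in, K, K))

# On a KxK input, the FC layer and the KxK convolution give identical scores.
x = rng.normal(size=(C_in, K, K))
fc_scores = w.reshape(C_out, -1) @ x.reshape(-1)   # fully connected view
conv_scores = conv_valid(x, w)[:, 0, 0]            # convolutional view

# On a larger input, the same weights slide and yield a dense prediction map.
dense_map = conv_valid(rng.normal(size=(C_in, K + 3, K + 3)), w)
```

This weight-sharing equivalence is what lets a classification network be reused as a dense predictor without retraining its final layers.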
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Recently, Yu and Koltun @cite_12 introduced a new convolutional network module that is specifically designed for dense prediction. It uses dilated convolutions to aggregate multi-scale contextual information, and achieves improvements in semantic segmentation over previous methods. Kundu et al. @cite_7 optimized the mapping of pixels into a Euclidean feature space; by applying a dense CRF model to the optimized features, they achieve even better semantic segmentation results than @cite_12 . | {
"cite_N": [
"@cite_7",
"@cite_12"
],
"mid": [
"2461677039",
"2963840672"
],
"abstract": [
"We present an approach to long-range spatio-temporal regularization in semantic video segmentation. Temporal regularization in video is challenging because both the camera and the scene may be in motion. Thus Euclidean distance in the space-time volume is not a good proxy for correspondence. We optimize the mapping of pixels to a Euclidean feature space so as to minimize distances between corresponding points. Structured prediction is performed by a dense CRF that operates on the optimized features. Experimental results demonstrate that the presented approach increases the accuracy and temporal consistency of semantic video segmentation.",
"Abstract: State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy."
]
} |
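As a hedged illustration of the dilated convolutions discussed in the record above (a minimal 1-D NumPy sketch under standard definitions, not the cited implementation): spacing the kernel taps `dilation` samples apart enlarges the receptive field without adding parameters or reducing resolution, which is what enables multi-scale context aggregation.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """1-D dilated convolution (valid, no padding): the taps of w are
    spaced `dilation` samples apart, so a k-tap kernel covers a span of
    dilation * (k - 1) + 1 input samples."""
    k = len(w)
    span = dilation * (k - 1) + 1          # receptive field of this layer
    out_len = len(x) - span + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(8, dtype=float)              # toy 1-D signal 0..7
w = np.array([1.0, 1.0, 1.0])              # 3-tap summing kernel

y1 = dilated_conv1d(x, w, dilation=1)      # receptive field 3
y2 = dilated_conv1d(x, w, dilation=2)      # receptive field 5, same 3 params
```

Stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially in depth, the property exploited by the context module of @cite_12.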
1608.05971 | 2517503862 | This paper presents a novel method to involve both spatial and temporal features for semantic video segmentation. Current work on convolutional neural networks(CNNs) has shown that CNNs provide advanced spatial features supporting a very good performance of solutions for both image and video analysis, especially for the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenting video data. We propose a module based on a long short-term memory (LSTM) architecture of a recurrent neural network for interpreting the temporal characteristics of video frames over time. Our system takes as input frames of a video and produces a correspondingly-sized output; for segmenting the video our method combines the use of three components: First, the regional spatial features of frames are extracted using a CNN; then, using LSTM the temporal features are added; finally, by deconvolving the spatio-temporal features we produce pixel-wise predictions. Our key insight is to build spatio-temporal convolutional networks (spatio-temporal CNNs) that have an end-to-end architecture for semantic video segmentation. We adapted fully some known convolutional network architectures (such as FCN-AlexNet and FCN-VGG16), and dilated convolution into our spatio-temporal CNNs. Our spatio-temporal CNNs achieve state-of-the-art semantic segmentation, as demonstrated for the Camvid and NYUDv2 datasets. | Many approaches introduced in this field, especially deep-CNN-based ones, have not yet used temporal features @cite_8 @cite_41 @cite_45 @cite_43 @cite_38 @cite_28 . These approaches are not end-to-end methods, which is an essential disadvantage when applying them. Some approaches @cite_37 @cite_7 do use deep CNNs in an end-to-end architecture that also exploits spatio-temporal features for semantic labeling. 
However, none of them can adapt the size of the time window dynamically. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_7",
"@cite_8",
"@cite_41",
"@cite_28",
"@cite_43",
"@cite_45"
],
"mid": [
"2029859592",
"2950436315",
"2461677039",
"1994356125",
"",
"1920142129",
"1961270558",
"2024938892"
],
"abstract": [
"The desire of enabling computers to learn semantic concepts from large quantities of Internet videos has motivated increasing interests on semantic video understanding, while video segmentation is important yet challenging for understanding videos. The main difficulty of video segmentation arises from the burden of labeling training samples, making the problem largely unsolved. In this paper, we present a novel nearest neighbor-based label transfer scheme for weakly supervised video segmentation. Whereas previous weakly supervised video segmentation methods have been limited to the two-class case, our proposed scheme focuses on more challenging multiclass video segmentation, which finds a semantically meaningful label for every pixel in a video. Our scheme enjoys several favorable properties when compared with conventional methods. First, a weakly supervised hashing procedure is carried out to handle both metric and semantic similarity. Second, the proposed nearest neighbor-based label transfer algorithm effectively avoids overfitting caused by weakly supervised data. Third, a multi-video graph model is built to encourage smoothness between regions that are spatiotemporally adjacent and similar in appearance. We demonstrate the effectiveness of the proposed scheme by comparing it with several other state-of-the-art weakly supervised segmentation methods on one new Wild8 dataset and two other publicly available datasets.",
"We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos such as captured by robotic platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU--Depth--V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.",
"We present an approach to long-range spatio-temporal regularization in semantic video segmentation. Temporal regularization in video is challenging because both the camera and the scene may be in motion. Thus Euclidean distance in the space-time volume is not a good proxy for correspondence. We optimize the mapping of pixels to a Euclidean feature space so as to minimize distances between corresponding points. Structured prediction is performed by a dense CRF that operates on the optimized features. Experimental results demonstrate that the presented approach increases the accuracy and temporal consistency of semantic video segmentation.",
"Computational and memory costs restrict spectral techniques to rather small graphs, which is a serious limitation especially in video segmentation. In this paper, we propose the use of a reduced graph based on superpixels. In contrast to previous work, the reduced graph is reweighted such that the resulting segmentation is equivalent, under certain assumptions, to that of the full graph. We consider equivalence in terms of the normalized cut and of its spectral clustering relaxation. The proposed method reduces runtime and memory consumption and yields on par results in image and video segmentation. Further, it enables an efficient data representation and update for a new streaming video segmentation approach that also achieves state-of-the-art performance.",
"",
"Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.",
"We address the problem of integrating object reasoning with supervoxel labeling in multiclass semantic video segmentation. To this end, we first propose an object-augmented dense CRF in spatio-temporal domain, which captures long-range dependency between supervoxels, and imposes consistency between object and supervoxel labels. We develop an efficient mean field inference algorithm to jointly infer the supervoxel labels, object activations and their occlusion relations for a moderate number of object hypotheses. To scale up our method, we adopt an active inference strategy to improve the efficiency, which adaptively selects object subgraphs in the object-augmented dense CRF. We formulate the problem as a Markov Decision Process, which learns an approximate optimal policy based on a reward of accuracy improvement and a set of well-designed model and input features. We evaluate our method on three publicly available multiclass video semantic segmentation datasets and demonstrate superior efficiency and accuracy.",
"Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6 on the challenging VSB100 benchmark, while reducing its runtime by 55 , as the learnt graph is much sparser."
]
} |
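The LSTM module that the record above uses for temporal features can be sketched minimally (a single NumPy cell following the standard gate equations; this is an illustrative assumption, not the authors' implementation, and the gate ordering is a convention chosen here): the cell state `c` carries information across frames, which is what allows per-frame CNN features to be aggregated over time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step with concatenated gate weights.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,).
    Gate order assumed here: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate update
    c_new = f * c + i * g          # cell state carries temporal context
    h_new = o * np.tanh(c_new)     # hidden state exposed to later layers
    return h_new, c_new

# Run a toy sequence of per-frame feature vectors through the cell.
rng = np.random.default_rng(0)
D, H, T = 4, 3, 5                  # feature dim, hidden dim, frames
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

In a spatio-temporal CNN, `x` would be the CNN feature vector of frame `t`, and `h` would feed the deconvolution stage that produces pixel-wise predictions.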
1608.05856 | 2604717658 | In this paper, we propose a non-convex formulation to recover the authentic structure from the corrupted real data. Typically, the specific structure is assumed to be low rank, which holds for a wide range of data, such as images and videos. Meanwhile, the corruption is assumed to be sparse. In the literature, such a problem is known as Robust Principal Component Analysis (RPCA), which usually recovers the low rank structure by approximating the rank function with a nuclear norm and penalizing the error by an ℓ1-norm. Although RPCA is a convex formulation and can be solved effectively, the introduced norms are not tight approximations, which may cause the solution to deviate from the authentic one. Therefore, we consider here a non-convex relaxation, consisting of a Schatten-p norm and an ℓq-norm that promote low rank and sparsity respectively. We derive a proximal iteratively reweighted algorithm (PIRA) to solve the problem. Our algorithm is based on an alternating direction method of multipliers, where in each iteration we linearize the underlying objective function that allows us to have a closed form solution. We demonstrate that solutions produced by the linearized approximation always converge and have a tighter approximation than the convex counterpart. Experimental results on benchmarks show encouraging results of our approach. | Since the RPCA model is capable of recovering the low-rank components from grossly corrupted data, and the theoretical conditions that ensure perfect recovery have been analyzed in depth, RPCA and its extensions have been applied to many problems, including background modeling @cite_11 , image alignment @cite_36 and subspace segmentation @cite_5 . Specifically, a patch-based algorithm using low-rank matrix recovery was presented in @cite_39 . 
The problem of aligning correlated images was studied in @cite_25 , decomposing the matrix of corrupted images as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. A truncated nuclear norm regularization for estimating missing values from corrupted images was proposed in @cite_21 . | {
"cite_N": [
"@cite_36",
"@cite_21",
"@cite_39",
"@cite_5",
"@cite_25",
"@cite_11"
],
"mid": [
"2114113040",
"1969698720",
"2109240917",
"",
"2054485004",
"2145962650"
],
"abstract": [
"This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of l1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.",
"Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of matrix by truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-AGPL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.",
"Most existing video denoising algorithms assume a single statistical model of image noise, e.g. additive Gaussian white noise, which often is violated in practice. In this paper, we present a new patch-based video denoising algorithm capable of removing serious mixed noise from the video data. By grouping similar patches in both spatial and temporal domain, we formulate the problem of removing mixed noise as a low-rank matrix completion problem, which leads to a denoising scheme without strong assumptions on the statistical properties of noise. The resulting nuclear norm related minimization problem can be efficiently solved by many recently developed methods. The robustness and effectiveness of our proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in the experiments and our proposed approach compares favorably against some existing video denoising algorithms.",
"",
"In this paper, we propose a robust temporal-spatial decomposition (RTSD) model and discuss its applications in video processing. A video sequence usually possesses high correlations among and within its frames. Fully exploiting the temporal and spatial correlations enables efficient processing and better understanding of the video sequence. Considering that the video sequence typically contains slowly changing background and rapidly changing foreground as well as noise, we propose to decompose the video frames into three parts: the temporal-spatially correlated part, the feature compensation part, and the sparse noise part. Accordingly, the decomposition problem can be formulated as the minimization of a convex function, which consists of a nuclear norm, a total variation (TV)-like norm, and an l1 norm. Since the minimization is nontrivial to handle, we develop a two-stage strategy to solve this decomposition problem, and discuss different alternatives to fulfil each stage of decomposition. The RTSD model treats video frames as a unity from both the temporal and spatial point of view, and demonstrates robustness to noise and certain background variations. Experiments on video denoising and scratch detection applications verify the effectiveness of the proposed RTSD model and the developed algorithms.",
"This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individuallyq We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the e1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces."
]
} |
1608.05856 | 2604717658 | In this paper, we propose a non-convex formulation to recover the authentic structure from the corrupted real data. Typically, the specific structure is assumed to be low rank, which holds for a wide range of data, such as images and videos. Meanwhile, the corruption is assumed to be sparse. In the literature, such a problem is known as Robust Principal Component Analysis (RPCA), which usually recovers the low rank structure by approximating the rank function with a nuclear norm and penalizing the error by an l1-norm. Although RPCA is a convex formulation and can be solved effectively, the introduced norms are not tight approximations, which may cause the solution to deviate from the authentic one. Therefore, we consider here a non-convex relaxation, consisting of a Schatten-p norm and an lq-norm that promote low rank and sparsity respectively. We derive a proximal iteratively reweighted algorithm (PIRA) to solve the problem. Our algorithm is based on an alternating direction method of multipliers, where in each iteration we linearize the underlying objective function that allows us to have a closed form solution. We demonstrate that solutions produced by the linearized approximation always converge and have a tighter approximation than the convex counterpart. Experimental results on benchmarks show encouraging results of our approach. | There are several works aimed at improving the low-rank and sparse matrix recovery. @cite_10 proposed an Accelerated RPCA using random projection. Zhou and Tao @cite_12 developed a fast solver for low-rank and sparse matrix recovery with hard constraints on both @math and @math . To alleviate the challenges raised by coherent data, most recently, the coherent data was recovered by Low-Rank Representation (LRR) @cite_7 . A fast first-order algorithm was developed to solve the SPCP problem @cite_1 . Fazel suggested reformulating the rank optimization problem as a Semi-Definite Programming (SDP) problem @cite_15 . 
An accelerated proximal gradient optimization technique was applied to solve the nuclear norm regularized least squares @cite_14 @cite_32 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_32",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2339666411",
"1997201895",
"2158231959",
"1998635907",
"",
"1970833585",
"2142077116"
],
"abstract": [
"The a‐ne rank minimization problem, which consists of flnding a matrix of minimum rank subject to linear equality constraints, has been proposed in many areas of engineering and science. A speciflc rank minimization problem is the matrix completion problem, in which we wish to recover a (low-rank) data matrix from incomplete samples of its entries. A recent convex relaxation of the rank minimization problem minimizes the nuclear norm instead of the rank of the matrix. Another possible model for the rank minimization problem is the nuclear norm regularized linear least squares problem. This regularized problem is a special case of an unconstrained nonsmooth convex optimization problem, in which the objective function is the sum of a convex smooth function with Lipschitz continuous gradient and a convex function on a set of matrices. In this paper, we propose an accelerated proximal gradient algorithm, which terminates in O(1= p †) iterations with an †-optimal solution, to solve this unconstrained nonsmooth convex optimization problem, and in particular, the nuclear norm regularized linear least squares problem. We report numerical results for solving large-scale randomly generated matrix completion problems. The numerical results suggest that our algorithm is e‐cient and robust in solving large-scale random matrix completion problems. In particular, we are able to solve random matrix completion problems with matrix dimensions up to 10 5 each in less than 10 minutes on a modest PC.",
"In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"The stable principal component pursuit (SPCP) problem is a non-smooth convex optimization problem, the solution of which has been shown both in theory and in practice to enable one to recover the low rank and sparse components of a matrix whose elements have been corrupted by Gaussian noise. In this paper, we show how several fast first-order methods can be applied to this problem very efficiently. Specifically, we show that the subproblems that arise when applying optimal gradient methods of Nesterov, alternating linearization methods and alternating direction augmented Lagrangian methods to the SPCP problem either have closed-form solutions or have solutions that can be obtained with very modest effort. All but one of the methods analyzed require at least one of the non-smooth terms in the objective function to be smoothed and obtain an eps-optimal solution to the SPCP problem in O(1 eps) iterations. The method that works directly with the fully non-smooth objective function, is proved to be convergent under mild conditions on the sequence of parameters it uses. Our preliminary computational tests show that the latter method, although its complexity is not known, is fastest and substantially outperforms existing methods for the SPCP problem. To best of our knowledge, an algorithm for the SPCP problem that has O(1 eps) iteration complexity and has a per iteration complexity equal to that of a singular value decomposition is given for the first time.",
"We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such formulation finds applications in many machine learning tasks including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for solving such class of problems converges as O(1 √k), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1 k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1 k2) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms.",
"",
"Exact recovery from contaminated visual data plays an important role in various tasks. By assuming the observed data matrix as the addition of a low-rank matrix and a sparse matrix, theoretic guarantee exists under mild conditions for exact data recovery. Practically matrix nuclear norm is adopted as a convex surrogate of the non-convex matrix rank function to encourage low-rank property and serves as the major component of recently-proposed Robust Principal Component Analysis (R-PCA). Recent endeavors have focused on enhancing the scalability of R-PCA to large-scale datasets, especially mitigating the computational burden of frequent large-scale Singular Value Decomposition (SVD) inherent with the nuclear norm optimization. In our proposed scheme, the nuclear norm of an auxiliary matrix is minimized instead, which is related to the original low-rank matrix by random projection. By design, the modified optimization entails SVD on matrices of much smaller scale, as compared to the original optimization problem. Theoretic analysis well justifies the proposed scheme, along with greatly reduced optimization complexity. Both qualitative and quantitative studies are provided on various computer vision benchmarks to validate its effectiveness, including facial shadow removal, surveillance background modeling and large-scale image tag transduction. It is also highlighted that the proposed solution can serve as a general principal to accelerate many other nuclear norm oriented problems in numerous tasks.",
"Low-rank and sparse structures have been profoundly studied in matrix completion and compressed sensing. In this paper, we develop \"Go Decomposition\" (GoDec) to efficiently and robustly estimate the low-rank part L and the sparse part S of a matrix X = L + S + G with noise G. GoDec alternatively assigns the low-rank approximation of X - S to L and the sparse approximation of X - L to S. The algorithm can be significantly accelerated by bilateral random projections (BRP). We also propose GoDec for matrix completion as an important variant. We prove that the objective value ||X - L - S||2F converges to a local minimum, while L and S linearly converge to local optimums. Theoretically, we analyze the influence of L, S and G to the asymptotic convergence speeds in order to discover the robustness of GoDec. Empirical studies suggest the efficiency, robustness and effectiveness of GoDec comparing with representative matrix decomposition and completion tools, e.g., Robust PCA and OptSpace."
]
} |
1608.05856 | 2604717658 | In this paper, we propose a non-convex formulation to recover the authentic structure from the corrupted real data. Typically, the specific structure is assumed to be low rank, which holds for a wide range of data, such as images and videos. Meanwhile, the corruption is assumed to be sparse. In the literature, such a problem is known as Robust Principal Component Analysis (RPCA), which usually recovers the low rank structure by approximating the rank function with a nuclear norm and penalizing the error by an l1-norm. Although RPCA is a convex formulation and can be solved effectively, the introduced norms are not tight approximations, which may cause the solution to deviate from the authentic one. Therefore, we consider here a non-convex relaxation, consisting of a Schatten-p norm and an lq-norm that promote low rank and sparsity respectively. We derive a proximal iteratively reweighted algorithm (PIRA) to solve the problem. Our algorithm is based on an alternating direction method of multipliers, where in each iteration we linearize the underlying objective function that allows us to have a closed form solution. We demonstrate that solutions produced by the linearized approximation always converge and have a tighter approximation than the convex counterpart. Experimental results on benchmarks show encouraging results of our approach. | For the @math -norm, many non-convex surrogate functions have been proposed, e.g., @math -norm with @math @cite_30 , and Smoothly Clipped Absolute Deviation (SCAD) @cite_33 . @cite_8 used the Alternate Direction Method (ADM) to solve a similar problem for the non-convex matrix completion problem. Candès @cite_20 proposed an algorithm to solve the reweighted @math minimization problem, which could better recover the @math -norm. The condition of sparse vector recovery has been given in @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_20",
"@cite_8"
],
"mid": [
"2012961725",
"2074682976",
"2107861471",
"2014237985"
],
"abstract": [
"Abstract We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal l q -quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the l 1 -norm. We then introduce a simple numerical scheme to compute solutions with minimal l q -quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the l q -method performs better than other available methods.",
"Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of ...",
"It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained l1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms l1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted l1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the l1 norm of the coefficient sequence as is common, but by reweighting the l1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing.",
"The low-rank matrix completion problem is a fundamental machine learning problem with many important applications. The standard low-rank matrix completion methods relax the rank minimization problem by the trace norm minimization. However, this relaxation may make the solution seriously deviate from the original solution. Meanwhile, most completion methods minimize the squared prediction errors on the observed entries, which is sensitive to outliers. In this paper, we propose a new robust matrix completion method to address these two problems. The joint Schatten @math -norm and @math -norm are used to better approximate the rank minimization problem and enhance the robustness to outliers. The extensive experiments are performed on both synthetic data and real world applications in collaborative filtering and social network link prediction. All empirical results show our new method outperforms the standard matrix completion methods."
]
} |
1608.05859 | 2514713644 | We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend tying the input embedding and this output embedding. We analyze the resulting update rules and show that the tied embedding evolves in a more similar way to the output embedding than to the input embedding in the untied model. We also offer a new method of regularizing the output embedding. Our methods lead to a significant reduction in perplexity, as we are able to show on a variety of neural network language models. Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance. | Neural network language models (NNLMs) assign probabilities to word sequences. Their resurgence was initiated by @cite_5 . Recurrent neural networks were first used for language modeling in @cite_10 and @cite_9 . The first model that implemented language modeling with LSTMs @cite_11 was @cite_19 . Following that, @cite_25 introduced a dropout @cite_27 augmented NNLM. @cite_14 @cite_0 proposed a new dropout method, which is referred to as Bayesian Dropout below, that improves on the results of @cite_25 . | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2212703438",
"1889624880",
"582134693",
"2402268235",
"2183112036",
"2132339004",
"",
"1591801644",
""
],
"abstract": [
"Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.",
"In this paper, we explore different ways to extend a recurrent neural network (RNN) to a RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper; (1) input-to-hidden function, (2) hidden-to-hidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs.",
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",
"Neural networks have become increasingly popular for the task of language modeling. Whereas feed-forward networks only exploit a fixed context length to predict the next word of a sequence, conceptually, standard recurrent neural networks can take into account all of the predecessor words. On the other hand, it is well known that recurrent networks are difficult to train and therefore are unlikely to show the full potential of recurrent models. These problems are addressed by a the Long Short-Term Memory neural network architecture. In this work, we analyze this type of network on an English and a large French language modeling task. Experiments show improvements of about 8 relative in perplexity over standard recurrent neural network LMs. In addition, we gain considerable improvements in WER on top of a state-of-the-art speech recognition system.",
"",
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"",
"We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.",
""
]
} |
1608.05859 | 2514713644 | We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend tying the input embedding and this output embedding. We analyze the resulting update rules and show that the tied embedding evolves in a more similar way to the output embedding than to the input embedding in the untied model. We also offer a new method of regularizing the output embedding. Our methods lead to a significant reduction in perplexity, as we are able to show on a variety of neural network language models. Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance. | The skip-gram word2vec model introduced in @cite_22 @cite_7 learns representations of words. This model learns a representation for each word in its vocabulary, both in an input embedding matrix and in an output embedding matrix. When training is complete, the vectors that are returned are the input embeddings. The output embedding is typically ignored, although @cite_24 @cite_35 use both the output and input embeddings of words in order to compute word similarity. Recently, @cite_12 argued that the output embedding of the word2vec skip-gram model needs to be different than the input embedding. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_24",
"@cite_12"
],
"mid": [
"2553303224",
"1614298861",
"2950133940",
"2260194779",
"2131571251"
],
"abstract": [
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.",
"",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs. We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features.",
"The word2vec software of Tomas Mikolov and colleagues (this https URL ) has gained a lot of traction lately, and provides state-of-the-art word embeddings. The learning models behind the software are described in two research papers. We found the description of the models in these papers to be somewhat cryptic and hard to follow. While the motivations and presentation may be obvious to the neural-networks language-modeling crowd, we had to struggle quite a bit to figure out the rationale behind the equations. This note is an attempt to explain equation (4) (negative sampling) in \"Distributed Representations of Words and Phrases and their Compositionality\" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean."
]
} |
1608.05859 | 2514713644 | We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend tying the input embedding and this output embedding. We analyze the resulting update rules and show that the tied embedding evolves in a more similar way to the output embedding than to the input embedding in the untied model. We also offer a new method of regularizing the output embedding. Our methods lead to a significant reduction in perplexity, as we are able to show on a variety of neural network language models. Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance. | In neural machine translation (NMT) models @cite_29 @cite_8 @cite_13 @cite_33 , the decoder, which generates the translation of the input sentence in the target language, is a language model that is conditioned on both the previous words of the output sentence and on the source sentence. State of the art results in NMT have recently been achieved by systems that segment the source and target words into subword units @cite_1 . One such method @cite_41 is based on the byte pair encoding (BPE) compression algorithm @cite_26 . BPE segments rare words into their more commonly appearing subwords. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_8",
"@cite_41",
"@cite_29",
"@cite_1",
"@cite_13"
],
"mid": [
"46679369",
"2133564696",
"2950635152",
"1816313093",
"1753482797",
"2418388682",
"2949888546"
],
"abstract": [
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively.",
"We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43 lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.",
"We participated in the WMT 2016 shared news translation task by building neural translation systems for four language pairs, each trained in both directions: English Czech, English German, English Romanian and English Russian. Our systems are based on an attentional encoder-decoder, using BPE subword segmentation for open-vocabulary translation with a fixed vocabulary. We experimented with using automatic back-translations of the monolingual News corpus as additional training data, pervasive dropout, and target-bidirectional models. All reported methods give substantial improvements, and we see improvements of 4.3--11.2 BLEU over our baseline systems. In the human evaluation, our systems were the (tied) best constrained system for 7 out of 8 translation directions in which we participated.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
1608.05859 | 2514713644 | We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend tying the input embedding and this output embedding. We analyze the resulting update rules and show that the tied embedding evolves in a more similar way to the output embedding than to the input embedding in the untied model. We also offer a new method of regularizing the output embedding. Our methods lead to a significant reduction in perplexity, as we are able to show on a variety of neural network language models. Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance. | Weight tying was previously used in the log-bilinear model of @cite_23 , but the decision to use it was not explained, and its effect on the model's performance was not tested. Independently and concurrently with our work @cite_4 presented an explanation for weight tying in NNLMs based on @cite_3 . | {
"cite_N": [
"@cite_3",
"@cite_4",
"@cite_23"
],
"mid": [
"1821462560",
"2549416390",
"2131462252"
],
"abstract": [
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models.",
"Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models."
]
} |
1608.05889 | 1612277053 | Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of a feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this observation, we formulate the problem as an online group feature selection. The problem assumes that features are generated individually but there are group structures in the feature stream. To the best of our knowledge, this is the first time that the correlation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we utilize a linear regression model to select an optimal subset. This two-stage procedure continues until there are no more features arriving or some predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods. | Generally, the feature selection methods fall into three classes based on how the label information is used. Most existing methods are supervised, which evaluate the correlation between features and the label variable. 
Due to the difficulty in obtaining labeled data, unsupervised feature selection has attracted increasing attention in recent years @cite_40 . Unsupervised feature selection methods usually select features that preserve the data similarity or manifold structure @cite_50 . Semi-supervised feature selection, which addresses the so-called "small-labeled sample problem", makes use of label information and the manifold structure corresponding to both labeled and unlabeled data @cite_34 . | {
"cite_N": [
"@cite_40",
"@cite_34",
"@cite_50"
],
"mid": [
"2099322651",
"2158933803",
"2009501510"
],
"abstract": [
"In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need for finding the number of clusters in conjunction with feature selection, and the need for normalizing the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Selection using Expectation-Maximization (EM) clustering) and through two different performance criteria for evaluating candidate feature subsets: scatter separability and maximum likelihood. We present proofs on the dimensionality biases of these feature criteria, and present a cross-projection normalization scheme that can be applied to any criterion to ameliorate these biases. Our experiments show the need for feature selection, the need for addressing these two issues, and the effectiveness of our proposed solutions.",
"Feature selection aims to reduce dimensionality for building comprehensible learning models with good generalization performance. Feature selection algorithms are largely studied separately according to the type of learning: supervised or unsupervised. This work exploits intrinsic properties underlying supervised and unsupervised feature selection algorithms, and proposes a unified framework for feature selection based on spectral graph theory. The proposed framework is able to generate families of algorithms for both supervised and unsupervised feature selection. And we show that existing powerful algorithms such as ReliefF (supervised) and Laplacian Score (unsupervised) are special cases of the proposed framework. To the best of our knowledge, this work is the first attempt to unify supervised and unsupervised feature selection, and enable their joint study under a general framework. Experiments demonstrated the efficacy of the novel algorithms derived from the framework.",
"Compared with supervised learning for feature selection, it is much more difficult to select the discriminative features in unsupervised learning due to the lack of label information. Traditional unsupervised feature selection algorithms usually select the features which best preserve the data distribution, e.g., manifold structure, of the whole feature set. Under the assumption that the class label of input data can be predicted by a linear classifier, we incorporate discriminative analysis and l2,1-norm minimization into a joint framework for unsupervised feature selection. Different from existing unsupervised feature selection algorithms, our algorithm selects the most discriminative feature subset from the whole feature set in batch mode. Extensive experiment on different data types demonstrates the effectiveness of our algorithm."
]
} |
1608.05889 | 1612277053 | Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of a feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this observation, we formulate the problem as an online group feature selection. The problem assumes that features are generated individually but there are group structures in the feature stream. To the best of our knowledge, this is the first time that the correlation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we utilize a linear regression model to select an optimal subset. This two-stage procedure continues until there are no more features arriving or some predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods. | The existing feature selection methods can be categorized into embedded, filter and wrapper approaches based on their methodologies @cite_27 @cite_1 @cite_3 @cite_47 @cite_41 . The filter methods evaluate the features by a certain criterion and select features by ranking their evaluation values. 
The correlation criteria proposed for feature selection include mutual information, maximum margin @cite_2 , kernel alignment @cite_19 , and the Hilbert-Schmidt independence criterion @cite_18 . The development of filter methods involves taking multiple criteria into consideration to overcome redundancy. The most representative algorithm is mRMR @cite_35 , which follows the principle of max-dependency, max-relevance and min-redundancy. It aims to find a subset in which the features have large dependency on the target class and low redundancy among each other. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_41",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_47"
],
"mid": [
"2154053567",
"2101267652",
"2158281641",
"2132379769",
"1586444875",
"2147735646",
"2025568499",
"2119479037",
"1949281989"
],
"abstract": [
"Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy.",
"We introduce a framework for filtering features that employs the Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence between the features and the labels. The key idea is that good features should maximise such dependence. Feature selection for various supervised learning problems (including classification and regression) is unified under this framework, and the solutions can be approximated using a backward-elimination algorithm. We demonstrate the usefulness of our method on both artificial and real world datasets.",
"A large family of algorithms for dimensionality reduction end with solving a Trace Ratio problem in the form of arg maxW Tr(WT SPW) Tr(WT SIW)1, which is generally transformed into the corresponding Ratio Trace form arg maxW Tr[ (WTSIW)-1 (WTSPW) ] for obtaining a closed-form but inexact solution. In this work, an efficient iterative procedure is presented to directly solve the Trace Ratio problem. In each step, a Trace Difference problem arg maxW Tr [WT (SP - lambdaSI) W] is solved with lambda being the trace ratio value computed from the previous step. Convergence of the projection matrix W, as well as the global optimum of the trace ratio value lambda, are proven based on point-to-set map theories. In addition, this procedure is further extended for solving trace ratio problems with more general constraint WTCW=I and providing exact solutions for kernel-based subspace learning problems. Extensive experiments on faces and UCI data demonstrate the high convergence speed of the proposed solution, as well as its superiority in classification capability over corresponding solutions to the ratio trace problem.",
"In the literature of feature selection, different criteria have been proposed to evaluate the goodness of features. In our investigation, we notice that a number of existing selection criteria implicitly select features that preserve sample similarity, and can be unified under a common framework. We further point out that any feature selection criteria covered by this framework cannot handle redundant features, a common drawback of these criteria. Motivated by these observations, we propose a new \"Similarity Preserving Feature Selection” framework in an explicit and rigorous way. We show, through theoretical analysis, that the proposed framework not only encompasses many widely used feature selection criteria, but also naturally overcomes their common weakness in handling feature redundancy. In developing this new framework, we begin with a conventional combinatorial optimization formulation for similarity preserving feature selection, then extend it with a sparse multiple-output regression formulation to improve its efficiency and effectiveness. A set of three algorithms are devised to efficiently solve the proposed formulations, each of which has its own advantages in terms of computational complexity and selection performance. As exhibited by our extensive experimental study, the proposed framework achieves superior feature selection performance and attractive properties.",
"In this paper, we examine the advantages and disadvantages of filter and wrapper methods for feature selection and propose a new hybrid algorithm that uses boosting and incorporates some of the features of wrapper methods into a fast filter method for feature selection. Empirical results are reported on six real-world datasets from the UCI repository, showing that our hybrid algorithm is competitive with wrapper methods while being much faster, and scales well to datasets with thousands of features.",
"Text categorization, which consists of automatically assigning documents to a set of categories, usually involves the management of a huge number of features. Most of them are irrelevant and others introduce noise which could mislead the classifiers. Thus, feature reduction is often performed in order to increase the efficiency and effectiveness of the classification. In this paper, we propose to select relevant features by means of a family of linear filtering measures which are simpler than the usual measures applied for this purpose. We carry out experiments over two different corpora and find that the proposed measures perform better than the existing ones.",
"Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.",
"Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods.",
"We study an interesting and challenging problem, online streaming feature selection, in which the size of the feature set is unknown, and not all features are available for learning while leaving the number of observations constant. In this problem, the candidate features arrive one at a time, and the learner's task is to select a \"best so far\" set of features from streaming features. Standard feature selection methods cannot perform well in this scenario. Thus, we present a novel framework based on feature relevance. Under this framework, a promising alternative method, Online Streaming Feature Selection (OSFS), is presented to online select strongly relevant and non-redundant features. In addition to OSFS, a faster Fast-OSFS algorithm is proposed to further improve the selection efficiency. Experimental results show that our algorithms achieve more compactness and better accuracy than existing streaming feature selection algorithms on various datasets."
]
} |
1608.05889 | 1612277053 | Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of a feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this observation, we formulate the problem as an online group feature selection. The problem assumes that features are generated individually but there are group structures in the feature stream. To the best of our knowledge, this is the first time that the correlation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we utilize a linear regression model to select an optimal subset. This two-stage procedure continues until there are no more features arriving or some predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods. | @cite_6 proposed an online algorithm for the group Lasso. The weight vector @math is updated upon the arrival of a new sample. Important features corresponding to large values in @math are selected in a group manner. Thus, the algorithm is suitable for sequential samples, especially for applications with large-scale data. | {
"cite_N": [
"@cite_6"
],
"mid": [
"179182636"
],
"abstract": [
"We develop a novel online learning algorithm for the group lasso in order to efficiently find the important explanatory factors in a grouped manner. Different from traditional batch-mode group lasso algorithms, which suffer from the inefficiency and poor scalability, our proposed algorithm performs in an online mode and scales well: at each iteration one can update the weight vector according to a closed-form solution based on the average of previous subgradients. Therefore, the proposed online algorithm can be very efficient and scalable. This is guaranteed by its low worst-case time complexity and memory cost both in the order of O(d), where d is the number of dimensions. Moreover, in order to achieve more sparsity in both the group level and the individual feature level, we successively extend our online system to efficiently solve a number of variants of sparse group lasso models. We also show that the online system is applicable to other group lasso models, such as the group lasso with overlap and graph lasso. Finally, we demonstrate the merits of our algorithm by experimenting with both synthetic and real-world datasets."
]
} |
1608.05889 | 1612277053 | Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of a feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this observation, we formulate the problem as an online group feature selection. The problem assumes that features are generated individually but there are group structures in the feature stream. To the best of our knowledge, this is the first time that the correlation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we utilize a linear regression model to select an optimal subset. This two-stage procedure continues until there are no more features arriving or some predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods. | Online feature selection assumes that features arrive in streams. It is different from classical online learning, which lets samples flow in dynamically. Thus, at time step @math , there is only one feature descriptor @math of all samples available. 
The goal of online feature selection is to decide whether the feature @math should be accepted upon its arrival. To this end, several related methods have been proposed, including Grafting @cite_38 , Alpha-investing @cite_9 and OSFS (Online Streaming Feature Selection) @cite_32 . | {
"cite_N": [
"@cite_38",
"@cite_9",
"@cite_32"
],
"mid": [
"1887132526",
"",
"2153338628"
],
"abstract": [
"In the standard feature selection problem, we are given a fixed set of candidate features for use in a learning problem, and must select a subset that will be used to train a model that is \"as good as possible\" according to some criterion. In this paper, we present an interesting and useful variant, the online feature selection problem, in which, instead of all features being available from the start, features arrive one at a time. The learner's task is to select a subset of features and return a corresponding model at each time step which is as good as possible given the features seen so far. We argue that existing feature selection methods do not perform well in this scenario, and describe a promising alternative method, based on a stagewise gradient descent technique which we call grafting.",
"",
"We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms."
]
} |
1608.06065 | 2510220948 | In this paper, we examine the benefits of multiple antenna communication in random wireless networks, the topology of which is modeled by stochastic geometry. The setting is that of the Poisson bipolar model introduced in [1], which is a natural model for ad-hoc and device-to-device (D2D) networks. The primary finding is that, with knowledge of channel state information between a receiver and its associated transmitter, by zero-forcing successive interference cancellation, and for appropriate antenna configurations, the ergodic spectral efficiency can be made to scale linearly with both 1) the minimum of the number of transmit and receive antennas, 2) the density of nodes and 3) the path-loss exponent. This linear gain is achieved by using the transmit antennas to send multiple data streams (e.g. through an open-loop transmission method) and by exploiting the receive antennas to cancel interference. Furthermore, when a receiver is able to learn channel state information from a certain number of near interferers, higher scaling gains can be achieved when using a successive interference cancellation method. A major implication of the derived scaling laws is that spatial multiplexing transmission methods are essential for obtaining better and eventually optimal scaling laws in multiple antenna random wireless networks. Simulation results support this analysis. | There has been extensive work on the capacity of MIMO-MANETs. MIMO-MANETs can be modeled as MIMO interference networks in which a finite number of transmitter-receiver pairs communicate by sharing the same spectrum, without transmitter cooperation. @cite_29 studied the capacity of a MIMO-MANET by treating inter-node interference as additional noise at the receiver, and derived the optimal power allocation strategy for MIMO transmission.
For instance, in a certain range of interference-to-noise ratios, it turns out that allocating all power to a single antenna (i.e., using single-stream transmission) is optimal. @cite_26 and @cite_16 extended the result of @cite_29 and demonstrated that the asymptotic spectral efficiency is improved by sending multiple data streams. A common assumption of these studies is that the distances between any two nodes in the network are deterministic @cite_29 or identical @cite_26 , which is unrealistic for modeling MANETs in practice. Consequently, this approach cannot be used to assess which MIMO transmission techniques provide the highest gains in large random MANETs. | {
"cite_N": [
"@cite_29",
"@cite_26",
"@cite_16"
],
"mid": [
"2149787135",
"2051577697",
"1942924892"
],
"abstract": [
"System capacity is considered for a group of interfering users employing single-user detection and multiple transmit and receive antennas for flat Rayleigh-fading channels with independent fading coefficients for each path. The focus is on the case where there is no channel state information at the transmitter, but channel state information is assumed at the receiver. It is shown that the optimum signaling is sometimes different from cases where the users do not interfere with each other. In particular, the optimum signaling will sometimes put all power into a single transmitting antenna, rather than divide power equally between independent streams from the different antennas. If the interference is either sufficiently weak or sufficiently strong, we show that either the optimum interference-free approach, which puts equal power into each antenna, or the approach that puts all power into a single antenna is optimum and we show how to find the regions where each approach is best.",
"We study in this paper the network spectral efficiency of a multiple-input multiple-output (MIMO) ad hoc network with K simultaneous communicating transmitter-receiver pairs. Assuming that each transmitter is equipped with t antennas and each receiver with r antennas and each receiver implements single-user detection, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network spectral efficiency is limited by r nats/s/Hz as K → ∞ and is independent of t and the transmit power. With CSI corresponding to the intended receiver available at the transmitter, we demonstrate that the asymptotic spectral efficiency is at least t + r + 2√(tr) nats/s/Hz. Asymptotically optimum signaling is also derived under the same CSI assumption, i.e., each transmitter knows the channel corresponding to its desired receiver only. Further capacity improvement is possible with stronger CSI assumption; we demonstrate this using a heuristic interference suppression transmit beamforming approach. The conventional orthogonal transmission approach is also analyzed. In particular, we show that with idealized medium access control, the channelized transmission has unbounded asymptotic spectral efficiency under the constant per-user power constraint. The impact of different power constraints on the asymptotic spectral efficiency is also carefully examined. Finally, numerical examples are given that confirm our analysis",
"We compute the capacity of wireless ad hoc networks when all the nodes in the network are endowed with M antennas. The derivation is based on a new communication scheme for wireless ad hoc networks utilizing the concept of cooperative many-to-many communications, as opposed to the traditional approach that emphasizes on one-to-one communications. We show that the upper bound average asymptotic capacity of each cell is 2πP_t M C_cell [1 − exp(−C_cell θ)], for network parameters C_cell ≥ 1, 0 ≤ θ ≤ 1, and transmit power P_t."
]
} |
1608.06002 | 2514985987 | The design of distributed gathering and convergence algorithms for tiny robots has recently received much attention. In particular, it has been shown that convergence problems can even be solved for very weak, oblivious robots: robots which cannot maintain state from one round to the next. The oblivious robot model is hence attractive from a self-stabilization perspective, where state is subject to adversarial manipulation. However, to the best of our knowledge, all existing robot convergence protocols rely on the assumption that robots, despite being "weak", can measure distances. We in this paper initiate the study of convergence protocols for even simpler robots, called monoculus robots: robots which cannot measure distances. In particular, we introduce two natural models which relax the assumptions on the robots' cognitive capabilities: (1) a Locality Detection ( @math ) model in which a robot can only detect whether another robot is closer than a given constant distance or not, (2) an Orthogonal Line Agreement ( @math ) model in which robots only agree on a pair of orthogonal lines (say North-South and West-East, but without knowing which is which). The problem turns out to be non-trivial, and simple median and angle bisection strategies can easily increase the distances among robots (e.g., the area of the enclosing convex hull) over time. Our main contributions are deterministic self-stabilizing convergence algorithms for these two models, together with a complexity analysis. We also show that in some sense, the assumptions made in our models are minimal: by relaxing the assumptions on the monoculus robots further, we run into impossibility results. | The problems of gathering @cite_7 , where all the robots gather at a single point, convergence @cite_2 , where robots come very close to each other, and pattern formation @cite_8 @cite_7 have been studied intensively in the literature. | {
"cite_N": [
"@cite_8",
"@cite_7",
"@cite_2"
],
"mid": [
"2135835260",
"2044484214",
"2563454818"
],
"abstract": [
"The distributed coordination and control of a set of autonomous mobile robots is a problem widely studied in a variety of fields, such as engineering, artificial intelligence, artificial life, robotics. Generally, in these areas the problem is studied mostly from an empirical point of view. In contrast, we aim to understand the fundamental limitations on what a set of autonomous mobile robots can achieve. We describe the current investigations on what autonomous mobile robots can and can not do with respect to some coordination problems.",
"In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction.",
"The common theoretical model adopted in recent studies on algorithms for systems of autonomous mobile robots assumes that the positional input of the robots is obtained by perfectly accurate visual sensors, that robot movements are accurate, and that internal calculations performed by the robots on (real) coordinates are perfectly accurate as well. The current paper concentrates on the effect of weakening this rather strong set of assumptions, and replacing it with the more realistic assumption that the robot sensors, movement and internal calculations may have slight inaccuracies. Specifically, the paper concentrates on the ability of robot systems with inaccurate sensors, movements and calculations to carry out the task of convergence. The paper presents several impossibility results, limiting the inaccuracy allowing convergence. The main positive result is an algorithm for convergence under bounded measurement, movement and calculation errors."
]
} |
1608.06002 | 2514985987 | The design of distributed gathering and convergence algorithms for tiny robots has recently received much attention. In particular, it has been shown that convergence problems can even be solved for very weak, oblivious robots: robots which cannot maintain state from one round to the next. The oblivious robot model is hence attractive from a self-stabilization perspective, where state is subject to adversarial manipulation. However, to the best of our knowledge, all existing robot convergence protocols rely on the assumption that robots, despite being "weak", can measure distances. We in this paper initiate the study of convergence protocols for even simpler robots, called monoculus robots: robots which cannot measure distances. In particular, we introduce two natural models which relax the assumptions on the robots' cognitive capabilities: (1) a Locality Detection ( @math ) model in which a robot can only detect whether another robot is closer than a given constant distance or not, (2) an Orthogonal Line Agreement ( @math ) model in which robots only agree on a pair of orthogonal lines (say North-South and West-East, but without knowing which is which). The problem turns out to be non-trivial, and simple median and angle bisection strategies can easily increase the distances among robots (e.g., the area of the enclosing convex hull) over time. Our main contributions are deterministic self-stabilizing convergence algorithms for these two models, together with a complexity analysis. We also show that in some sense, the assumptions made in our models are minimal: by relaxing the assumptions on the monoculus robots further, we run into impossibility results. | @cite_4 introduced the CORDA or Asynchronous (ASYNC) scheduling model for weak robots. @cite_0 introduced the ATOM or Semi-synchronous (SSYNC) model. In @cite_7 , the impossibility of gathering for @math without assumptions on local coordinate system agreement is proved.
Also, for @math it is impossible to solve gathering without assumptions on either coordinate system agreement or multiplicity detection @cite_5 . Cohen and Peleg @cite_1 proposed a center-of-gravity algorithm for convergence of two robots in ASYNC and any number of robots in SSYNC. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_0",
"@cite_5"
],
"mid": [
"1961973183",
"2044484214",
"1483270512",
"",
"2119517221"
],
"abstract": [
"In this paper we aim at an understanding of the fundamental algorithmic limitations on what a set of autonomous mobile robots can or cannot achieve. We study a hard task for a set of weak robots. The task is for the robots in the plane to form any arbitrary pattern that is given in advance. The robots are weak in several aspects. They are anonymous; they cannot explicitly communicate with each other, but only observe the positions of the others; they cannot remember the past; they operate in a very strong form of asynchronicity. We show that the tasks that such a system of robots can perform depend strongly on their common knowledge about their environment, i.e., the readings of their environment sensors.",
"In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction.",
"Consider a group of N robots aiming to converge towards a single point. The robots cannot communicate, and their only input is obtained by visual sensors. A natural algorithm for the problem is based on requiring each robot to move towards the robots’ center of gravity. The paper proves the correctness of the center-of-gravity algorithm in the semi-synchronous model for any number of robots, and its correctness in the fully asynchronous model for two robots.",
"",
"Given a set of n autonomous mobile robots that can freely move on a two dimensional plane, they are required to gather in a position on the plane not fixed in advance (Gathering Problem). The main research question we address in this paper is: Under which conditions can this task be accomplished by the robots? The studied robots are quite simple: they are anonymous, totally asynchronous, they do not have any memory of past computations, they cannot explicitly communicate between each other. We show that this simple task cannot be in general accomplished by the considered system of robots."
]
} |
1608.06002 | 2514985987 | The design of distributed gathering and convergence algorithms for tiny robots has recently received much attention. In particular, it has been shown that convergence problems can even be solved for very weak, oblivious robots: robots which cannot maintain state from one round to the next. The oblivious robot model is hence attractive from a self-stabilization perspective, where state is subject to adversarial manipulation. However, to the best of our knowledge, all existing robot convergence protocols rely on the assumption that robots, despite being "weak", can measure distances. We in this paper initiate the study of convergence protocols for even simpler robots, called monoculus robots: robots which cannot measure distances. In particular, we introduce two natural models which relax the assumptions on the robots' cognitive capabilities: (1) a Locality Detection ( @math ) model in which a robot can only detect whether another robot is closer than a given constant distance or not, (2) an Orthogonal Line Agreement ( @math ) model in which robots only agree on a pair of orthogonal lines (say North-South and West-East, but without knowing which is which). The problem turns out to be non-trivial, and simple median and angle bisection strategies can easily increase the distances among robots (e.g., the area of the enclosing convex hull) over time. Our main contributions are deterministic self-stabilizing convergence algorithms for these two models, together with a complexity analysis. We also show that in some sense, the assumptions made in our models are minimal: by relaxing the assumptions on the monoculus robots further, we run into impossibility results. | Any kind of pattern formation requires these robots to move to a particular point of the pattern. Since the monoculus robots cannot determine locations, they cannot stop at a particular point. Hence the pattern formation algorithms described in previous works, which require location information as input, are not applicable.
The gathering problem is simply the point formation problem @cite_7 . Hence gathering is also not possible for monoculus robots. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2044484214"
],
"abstract": [
"In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction."
]
} |
1608.05866 | 2512539472 | Most distributed systems require coordination between all components involved. With the steady growth of such systems, the probability of failures increases, which necessitates fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Therefore, AllConcur not only offers significant improvements over existing solutions, but enables novel hitherto unattainable system designs in a variety of fields. | Many existing algorithms and systems can be used to implement atomic broadcast; we discuss here only the most relevant subset. Défago, Schiper, and Urbán provide a general overview of atomic broadcast algorithms @cite_26 . They define a classification based on how total order is established: by the sender, by a sequencer, or by the destinations @cite_23 . AllConcur uses destinations agreement to achieve total order, i.e., agreement on a message set. Yet, unlike other destinations agreement algorithms, AllConcur is entirely decentralized and requires no leader. | {
"cite_N": [
"@cite_26",
"@cite_23"
],
"mid": [
"2130264930",
"2133943294"
],
"abstract": [
"Total order broadcast and multicast (also called atomic broadcast multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order.",
"We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [ 1992]."
]
} |
1608.05866 | 2512539472 | Most distributed systems require coordination between all components involved. With the steady growth of such systems, the probability of failures increases, which necessitates fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Therefore, AllConcur not only offers significant improvements over existing solutions, but enables novel hitherto unattainable system designs in a variety of fields. | Lamport's classic Paxos algorithm @cite_1 @cite_10 is often used to implement atomic broadcast. Several practical systems have been proposed @cite_54 @cite_25 @cite_3 @cite_47 . Also, a series of optimizations has been proposed, such as distributing the load among all servers or out-of-order processing of non-interfering requests @cite_19 @cite_21 @cite_51 . Yet, the commonly employed simple replication scheme is not designed to scale to hundreds of instances. | {
"cite_N": [
"@cite_54",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_51",
"@cite_47",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"2106670435",
"",
"2013409485",
"2067740651",
"1814543774",
"",
"2182688186",
"2051707209"
],
"abstract": [
"",
"Theoretician’s Abstract Consensus has been regarded as the fundamental problem that must be solved to implement a fault-tolerant distributed system. However, only a weaker problem than traditional consensus need be solved. We generalize the consensus problem to include both traditional consensus and this weaker version. A straightforward generalization of the Paxos consensus algorithm implements general consensus. The generalizations of consensus and of the Paxos algorithm require a mathematical detour de force into a type of object called a command-structure set.",
"",
"Spanner is Google’s scalable, multiversion, globally distributed, and synchronously replicated database. It is the first system to distribute data at global scale and support externally-consistent distributed transactions. This article describes how Spanner is structured, its feature set, the rationale underlying various design decisions, and a novel time API that exposes clock uncertainty. This API and its implementation are critical to supporting external consistency and a variety of powerful features: nonblocking reads in the past, lock-free snapshot transactions, and atomic schema changes, across all of Spanner.",
"This paper describes the design and implementation of Egalitarian Paxos (EPaxos), a new distributed consensus algorithm based on Paxos. EPaxos achieves three goals: (1) optimal commit latency in the wide-area when tolerating one and two failures, under realistic conditions; (2) uniform load balancing across all replicas (thus achieving high throughput); and (3) graceful performance degradation when replicas are slow or crash. Egalitarian Paxos is to our knowledge the first protocol to achieve the previously stated goals efficiently---that is, requiring only a simple majority of replicas to be non-faulty, using a number of messages linear in the number of replicas to choose a command, and committing commands after just one communication round (one round trip) in the common case or after at most two rounds in any case. We prove Egalitarian Paxos's properties theoretically and demonstrate its advantages empirically through an implementation running on Amazon EC2.",
"We present a protocol for general state machine replication - a method that provides strong consistency - that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.",
"",
"",
"This paper presents an overview of Paxos for System Builders, a complete specification of the Paxos replication protocol such that system builders can understand it and implement it. We evaluate the performance of a prototype implementation and detail the safety and liveness properties guaranteed by our specification of Paxos."
]
} |
1608.05866 | 2512539472 | Most distributed systems require coordination between all components involved. With the steady growth of such systems, the probability of failures increases, which necessitates fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Therefore, AllConcur not only offers significant improvements over existing solutions, but enables novel hitherto unattainable system designs in a variety of fields. | State machine replication protocols are similar to Paxos but often claim to be simpler to understand and implement. Practical implementations include ZooKeeper @cite_42 , Viewstamped Replication @cite_37 , Raft @cite_41 , Chubby @cite_27 , and DARE @cite_29 , among others. These systems commonly employ a leader-based approach, which makes them fundamentally unscalable. Increasing scalability often comes at the cost of relaxing the consistency model @cite_14 @cite_45 . Moreover, even when scalable strong consistency is provided @cite_9 , these systems aim to increase data reliability, an objective conceptually different from distributed agreement. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_41",
"@cite_29",
"@cite_42",
"@cite_9",
"@cite_27",
"@cite_45"
],
"mid": [
"1549820118",
"2168067900",
"2156580773",
"2019966882",
"192446467",
"2149623556",
"1992479210",
"2153704625"
],
"abstract": [
"This paper presents an updated version of Viewstamped Replication, a replication technique that handles failures in which nodes crash. It describes how client requests are handled, how the group reorganizes when a replica fails, and how a failed replica is able to rejoin the group. The paper also describes a number of important optimizations and presents a protocol for handling reconfigurations that can change both the group membership and the number of failures the group is able to handle.",
"This paper presents ZHT, a zero-hop distributed hash table, which has been tuned for the requirements of high-end computing systems. ZHT aims to be a building block for future distributed systems, such as parallel and distributed file systems, distributed job management systems, and parallel programming systems. The goals of ZHT are delivering high availability, good fault tolerance, high throughput, and low latencies, at extreme scales of millions of nodes. ZHT has some important properties, such as being light-weight, dynamically allowing nodes to join and leave, fault tolerant through replication, persistent, scalable, and supporting unconventional operations such as append (providing lock-free concurrent key-value modifications) in addition to insert/lookup/remove. We have evaluated ZHT's performance under a variety of systems, ranging from a Linux cluster with 512 cores, to an IBM Blue Gene/P supercomputer with 160K cores. Using micro-benchmarks, we scaled ZHT up to 32K cores with latencies of only 1.1ms and 18M operations/sec throughput. This work provides three real systems that have integrated with ZHT, and evaluates them at modest scales. 1) ZHT was used in the FusionFS distributed file system to deliver distributed metadata management at over 60K operations (e.g. file create) per second at 2K-core scales. 2) ZHT was used in the IStore, an information dispersal algorithm enabled distributed object storage system, to manage chunk locations, delivering more than 500 chunks/sec at 32-node scales. 3) ZHT was also used as a building block in MATRIX, a distributed job scheduling system, delivering 5000 jobs/sec throughputs at 2K-core scales. We compared ZHT against other distributed hash tables and key-value stores and found it offers superior performance for the features and portability it supports.",
"Raft is a consensus algorithm for managing a replicated log. It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than Paxos and also provides a better foundation for building practical systems. In order to enhance understandability, Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered. Results from a user study demonstrate that Raft is easier for students to learn than Paxos. Raft also includes a new mechanism for changing the cluster membership, which uses overlapping majorities to guarantee safety.",
"The increasing amount of data that needs to be collected and analyzed requires large-scale datacenter architectures that are naturally more susceptible to faults of single components. One way to offer consistent services on such unreliable systems are replicated state machines (RSMs). Yet, traditional RSM protocols cannot deliver the needed latency and request rates for future large-scale systems. In this paper, we propose a new set of protocols based on Remote Direct Memory Access (RDMA) primitives. To asses these mechanisms, we use a strongly consistent key-value store; the evaluation shows that our simple protocols improve RSM performance by more than an order of magnitude. Furthermore, we show that RDMA introduces various new options, such as log access management. Our protocols enable operators to fully utilize the new capabilities of the quickly growing number of RDMA-capable datacenter networks.",
"In this paper, we describe ZooKeeper, a service for coordinating processes of distributed applications. Since ZooKeeper is part of critical infrastructure, ZooKeeper aims to provide a simple and high performance kernel for building more complex coordination primitives at the client. It incorporates elements from group messaging, shared registers, and distributed lock services in a replicated, centralized service. The interface exposed by Zoo-Keeper has the wait-free aspects of shared registers with an event-driven mechanism similar to cache invalidations of distributed file systems to provide a simple, yet powerful coordination service. The ZooKeeper interface enables a high-performance service implementation. In addition to the wait-free property, ZooKeeper provides a per client guarantee of FIFO execution of requests and linearizability for all requests that change the ZooKeeper state. These design decisions enable the implementation of a high performance processing pipeline with read requests being satisfied by local servers. We show for the target workloads, 2:1 to 100:1 read to write ratio, that ZooKeeper can handle tens to hundreds of thousands of transactions per second. This performance allows ZooKeeper to be used extensively by client applications.",
"Distributed storage systems often trade off strong semantics for improved scalability. This paper describes the design, implementation, and evaluation of Scatter, a scalable and consistent distributed key-value storage system. Scatter adopts the highly decentralized and self-organizing structure of scalable peer-to-peer systems, while preserving linearizable consistency even under adverse circumstances. Our prototype implementation demonstrates that even with very short node lifetimes, it is possible to build a scalable and consistent system with practical performance.",
"We describe our experiences with the Chubby lock service, which is intended to provide coarse-grained locking as well as reliable (though low-volume) storage for a loosely-coupled distributed system. Chubby provides an interface much like a distributed file system with advisory locks, but the design emphasis is on availability and reliability, as opposed to high performance. Many instances of the service have been used for over a year, with several of them each handling a few tens of thousands of clients concurrently. The paper describes the initial design and expected use, compares it with actual use, and explains how the design had to be modified to accommodate the differences.",
"Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an \"always-on\" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use."
]
} |
1608.05866 | 2512539472 | Most distributed systems require coordination between all components involved. With the steady growth of such systems, the probability of failures increases, which necessitates fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Therefore, AllConcur not only offers significant improvements over existing solutions, but enables novel hitherto unattainable system designs in a variety of fields. | Bitcoin @cite_28 offers an alternative solution to the (Byzantine fault-tolerant) atomic broadcast problem: It uses proof-of-work to order the transactions on a distributed ledger. In a nutshell, a server must solve a cryptographic puzzle in order to add a block of transactions to the ledger. Yet, Bitcoin does not guarantee consensus finality @cite_34 ---multiple servers solving the puzzle may lead to a fork (conflict), resulting in branches. Forks are eventually resolved by adding new blocks. Eventually one branch outpaces the others, thereby becoming the ledger all servers agree upon. To avoid frequent forks, Bitcoin controls the expected puzzle solution time to 10 minutes and currently limits the block size to 1MB, resulting in limited performance, i.e., around seven transactions per second. 
To increase performance, Bitcoin-NG @cite_55 uses proof-of-work to elect a leader that can add blocks until a new leader is elected. Yet, conflicts are still possible and consensus finality is not ensured. | {
"cite_N": [
"@cite_28",
"@cite_55",
"@cite_34"
],
"mid": [
"1897250020",
"2964262836",
"2486460265"
],
"abstract": [
"",
"Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off between throughput and latency, which withhold the realization of this potential. This paper presents Bitcoin-NG (Next Generation), a new blockchain protocol designed to scale. Bitcoin-NG is a Byzantine fault tolerant blockchain protocol that is robust to extreme churn and shares the same trust model as Bitcoin. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"Bitcoin cryptocurrency demonstrated the utility of global consensus across thousands of nodes, changing the world of digital transactions forever. In the early days of Bitcoin, the performance of its probabilistic proof-of-work (PoW) based consensus fabric, also known as blockchain, was not a major issue. Bitcoin became a success story, despite its consensus latencies on the order of an hour and the theoretical peak throughput of only up to 7 transactions per second."
]
} |
1608.05288 | 2513129975 | Discrete optimization is a central problem in artificial intelligence. The optimization of the aggregated cost of a network of cost functions arises in a variety of problems including Weighted Constraint Programs (WCSPs), Distributed Constraint Optimization (DCOP), as well as optimization in stochastic variants such as the tasks of finding the most probable explanation (MPE) in belief networks. Inference-based algorithms are powerful techniques for solving discrete optimization problems, which can be used independently or in combination with other techniques. However, their applicability is often limited by their compute intensive nature and their space requirements. This paper proposes the design and implementation of a novel inference-based technique, which exploits modern massively parallel architectures, such as those found in Graphical Processing Units (GPUs), to speed up the resolution of exact and approximated inference-based algorithms for discrete optimization. The paper studies the proposed algorithm in both centralized and distributed optimization contexts. The paper demonstrates that the use of GPUs provides significant advantages in terms of runtime and scalability, achieving up to two orders of magnitude in speedups and showing a considerable reduction in execution time (up to 345 times faster) with respect to a sequential version. | In the distributed constraint optimization context, GPU parallelism has been applied to speed up several DCOP solving techniques. Fioretto @cite_10 proposed a multi-variable agent decomposition strategy to solve DCOPs with complex local subproblems, which makes use of GPUs to implement a search-based and a sampling-based algorithm to speed up the agents' local subproblems resolution. Le @cite_12 studied a GPU accelerated algorithm in the context of stochastic DCOPs---DCOPs where the values of the cost tables are stochastic. 
The authors used SIMT-style parallelism on a DP-based approach, which resulted in a speedup of up to two orders of magnitude. Recently, a combination of GPUs with Markov Chain Monte Carlo (MCMC) sampling algorithms has been proposed in the context of solving DCOPs @cite_24 , where the authors adopted GPUs to accelerate the computation of the normalization constants used in the MCMC sampling process as well as to compute several samples in parallel, resulting in a speedup of up to one order of magnitude. | {
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_12"
],
"mid": [
"2508079243",
"2564357136",
"2398389300"
],
"abstract": [
"The field of Distributed Constraint Optimization (DCOP) has gained momentum in recent years, thanks to its ability to address various applications related to multi-agent coordination. Nevertheless, solving DCOPs is computationally challenging. Thus, in large scale, complex applications, incomplete DCOP algorithms are necessary. Recently, researchers have introduced a promising class of incomplete DCOP algorithms, based on sampling. However, this paradigm requires a multitude of samples to ensure convergence. This paper exploits the property that sampling is amenable to parallelization, and introduces a general framework, called Distributed MCMC (DMCMC), that is based on a dynamic programming procedure and uses Markov Chain Monte Carlo (MCMC) sampling algorithms to solve DCOPs. Additionally, DMCMC harnesses the parallel computing power of Graphical Processing Units (GPUs) to speed-up the sampling process. The experimental results show that DMCMC can find good solutions up to two orders of magnitude faster than other incomplete DCOP algorithms.",
"The application of DCOP models to large problems faces two main limitations: (i) Modeling limitations, as each agent can handle only a single variable of the problem; and (ii) Resolution limitations, as current approaches do not exploit the local problem structure within each agent. This paper proposes a novel Multi-Variable Agent (MVA) DCOP decomposition technique, which: (i) Exploits the co-locality of each agent's variables, allowing us to adopt efficient centralized techniques within each agent; (ii) Enables the use of hierarchical parallel models and proposes the use of GPUs; and (iii) Reduces the amount of computation and communication required in several classes of DCOP algorithms.",
"Distributed Constraint Optimization Problems (DCOPs) have been used to model a number of multi-agent coordination problems. In DCOPs, agents are assumed to have complete information about the utility of their possible actions. However, in many real-world applications, such utilities are stochastic due to the presence of exogenous events that are beyond the direct control of the agents. This paper addresses this issue by extending the standard DCOP model to Expected Regret DCOP (ER-DCOP) for DCOP applications with uncertainty in constraint utilities. Different from other approaches, ER-DCOPs aim at minimizing the overall expected regret of the problem. The paper proposes the ER-DPOP algorithm for solving ER-DCOPs, which is complete and requires a linear number of messages with respect to the number of agents in the problem. We further present two implementations of ER-DPOP---GPU- and ASP-based implementations---that orthogonally exploit the problem structure and present their evaluations on random networks and power network problems."
]
} |
1608.05404 | 2949755313 | In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-the-art performance. | Perhaps closest to our work are methods that aim to recover people tracks by optimizing a global objective function @cite_2 @cite_18 @cite_12 . @cite_18 proposes a continuous formulation that analytically models effects such as mutual occlusions, dynamics and trajectory continuity, but utilizes a simple color appearance model. @cite_2 finds tracks by solving instances of a generalized minimum clique problem, but due to model complexity resorts to a greedy iterative optimization scheme that finds one track at a time, whereas we jointly recover solutions for all tracks. We build on the multicut formulation proposed in @cite_12 and generalize it to large-scale sequences based on the extensions discussed below. | {
"cite_N": [
"@cite_18",
"@cite_12",
"@cite_2"
],
"mid": [
"2083049794",
"2007352603",
"1528063097"
],
"abstract": [
"Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets.",
"Tracking multiple targets in a video, based on a finite set of detection hypotheses, is a persistent problem in computer vision. A common strategy for tracking is to first select hypotheses spatially and then to link these over time while maintaining disjoint path constraints [14, 15, 24]. In crowded scenes multiple hypotheses will often be similar to each other making selection of optimal links an unnecessary hard optimization problem due to the sequential treatment of space and time. Embracing this observation, we propose to link and cluster plausible detections jointly across space and time. Specifically, we state multi-target tracking as a Minimum Cost Subgraph Multicut Problem. Evidence about pairs of detection hypotheses is incorporated whether the detections are in the same frame, neighboring frames or distant frames. This facilitates long-range re-identification and within-frame clustering. Results for published benchmark sequences demonstrate the superiority of this approach.",
"Data association is an essential component of any human tracking system. The majority of current methods, such as bipartite matching, incorporate a limited-temporal-locality of the sequence into the data association problem, which makes them inherently prone to ID-switches and difficulties caused by long-term occlusion, cluttered background, and crowded scenes. We propose an approach to data association which incorporates both motion and appearance in a global manner. Unlike limited-temporal-locality methods which incorporate a few frames into the data association problem, we incorporate the whole temporal span and solve the data association problem for one object at a time, while implicitly incorporating the rest of the objects. In order to achieve this, we utilize Generalized Minimum Clique Graphs to solve the optimization problem of our data association method. Our proposed method yields a better formulated approach to data association which is supported by our superior results. Experiments show the proposed method makes significant improvements in tracking in the diverse sequences of Town Center [1], TUD-crossing [2], TUD-Stadtmitte [2], PETS2009 [3], and a new sequence called Parking Lot compared to the state of the art methods."
]
} |
1608.05477 | 2517797371 | We propose a novel recurrent encoder-decoder network model for real-time video-based face alignment. Our proposed model predicts 2D facial point maps regularized by a regression loss, while uniquely exploiting recurrent learning at both spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporal-variant features, yielding better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as superior results over the state-of-the-art in standard datasets. | Face alignment has advanced over the last decades. Remarkably, regression-based methods @cite_32 @cite_46 @cite_52 @cite_3 @cite_11 @cite_2 @cite_5 @cite_48 @cite_0 @cite_6 @cite_7 significantly boost the generalization performance of face landmark detection, compared to algorithms based on statistical models such as Active Shape Models @cite_8 @cite_24 and Active Appearance Models @cite_38 . A regression-based approach directly regresses landmark locations, with features extracted from face images serving as regressors. Landmark models are learned either in an independent manner, or in a joint fashion @cite_3 . This paper performs landmark detection via both a classification model and a regression model. Different from most of the previous methods, this work deals with face alignment in a video. It jointly optimizes detection output by utilizing multiple observations from the same person. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_8",
"@cite_48",
"@cite_52",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_5",
"@cite_46",
"@cite_11"
],
"mid": [
"2097711399",
"2964014798",
"2073039128",
"1915668717",
"",
"2101866605",
"1990937109",
"2465108587",
"2963479408",
"2102512156",
"2121684305",
"1960706641",
"1976948919",
""
],
"abstract": [
"Active appearance model (AAM) is a powerful generative method for modeling deformable objects. The model decouples the shape and the texture variations of objects, which is followed by an efficient gradient-based model fitting method. Due to the flexible and simple framework, AAM has been widely applied in the fields of computer vision. However, difficulties are met when it is applied to various practical issues, which lead to a lot of prominent improvements to the model. Nevertheless, these difficulties and improvements have not been studied systematically. This motivates us to review the recent advances of AAM. This paper focuses on the improvements in the literature in terms of the problems suffered by AAM in practical applications. Therefore, these algorithms are summarized from three aspects, i.e., efficiency, discrimination, and robustness. Additionally, some applications and implementations of AAM are also enumerated. The main purpose of this paper is to serve as a guide for further research.",
"Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in CV community. However, most algorithms are designed for faces in small to medium poses (below 45), lacking the ability to align faces in large poses up to 90. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view. Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to the three problems in an new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via convolutional neutral network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods.",
"We describe ‘Active Shape Models’ which iteratively adapt to refine estimates of the pose, scale and shape of models of image objects. The method uses flexible models derived from sets of training examples. These models, known as Point Distribution Models, represent objects as sets of labelled points. An initial estimate of the location of the model points in an image is improved by attempting to move each point to a better position nearby. Adjustments to the pose variables and shape parameters are calculated. Limits are placed on the shape parameters ensuring that the example can only deform into shapes conforming to global constraints imposed by the training set. An iterative procedure deforms the model example to find the best fit to the image object. Results of applying the method are described. The technique is shown to be a powerful method for refining estimates of object shape and location.",
"Cascaded regression approaches have been recently shown to achieve state-of-the-art performance for many computer vision tasks. Beyond its connection to boosting, cascaded regression has been interpreted as a learning-based approach to iterative optimization methods like the Newton's method. However, in prior work, the connection to optimization theory is limited only in learning a mapping from image features to problem parameters. In this paper, we consider the problem of facial deformable model fitting using cascaded regression and make the following contributions: (a) We propose regression to learn a sequence of averaged Jacobian and Hessian matrices from data, and from them descent directions in a fashion inspired by Gauss-Newton optimization. (b) We show that the optimization problem in hand has structure and devise a learning strategy for a cascaded regression approach that takes the problem structure into account. By doing so, the proposed method learns and employs a sequence of averaged Jacobians and descent directions in a subspace orthogonal to the facial appearance variation; hence, we call it Project-Out Cascaded Regression (PO-CR). (c) Based on the principles of PO-CR, we built a face alignment system that produces remarkably accurate results on the challenging iBUG data set outperforming previously proposed systems by a large margin. Code for our system is available from http: www.cs.nott.ac.uk ∼yzt .",
"",
"We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameters updates. The experiments, conducted on Multi-PIE, XM2VTS and LFPW database, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.",
"We present a very efficient, highly accurate, \"Explicit Shape Regression\" approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 min for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.",
"Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"Cascade regression framework has been shown to be effective for facial landmark detection. It starts from an initial face shape and gradually predicts the face shape update from the local appearance features to generate the facial landmark locations in the next iteration until convergence. In this paper, we improve upon the cascade regression framework and propose the Constrained Joint Cascade Regression Framework (CJCRF) for simultaneous facial action unit recognition and facial landmark detection, which are two related face analysis tasks, but are seldomly exploited together. In particular, we first learn the relationships among facial action units and face shapes as a constraint. Then, in the proposed constrained joint cascade regression framework, with the help from the constraint, we iteratively update the facial landmark locations and the action unit activation probabilities until convergence. Experimental results demonstrate that the intertwined relationships of facial action units and face shapes boost the performances of both facial action unit recognition and facial landmark detection. The experimental results also demonstrate the effectiveness of the proposed method comparing to the state-of-the-art works.",
"We make some simple extensions to the Active Shape Model of [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using two- instead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.",
"The development of facial databases with an abundance of annotated facial data captured under unconstrained 'in-the-wild' conditions have made discriminative facial deformable models the de facto choice for generic facial landmark localization. Even though very good performance for the facial landmark localization has been shown by many recently proposed discriminative techniques, when it comes to the applications that require excellent accuracy, such as facial behaviour analysis and facial motion capture, the semi-automatic person-specific or even tedious manual tracking is still the preferred choice. One way to construct a person-specific model automatically is through incremental updating of the generic model. This paper deals with the problem of updating a discriminative facial deformable model, a problem that has not been thoroughly studied in the literature. In particular, we study for the first time, to the best of our knowledge, the strategies to update a discriminative model that is trained by a cascade of regressors. We propose very efficient strategies to update the model and we show that is possible to automatically construct robust discriminative person and imaging condition specific models 'in-the-wild' that outperform state-of-the-art generic face alignment strategies.",
"We present a novel face alignment framework based on coarse-to-fine shape searching. Unlike the conventional cascaded regression approaches that start with an initial shape and refine the shape in a cascaded manner, our approach begins with a coarse search over a shape space that contains diverse shapes, and employs the coarse solution to constrain subsequent finer search of shapes. The unique stage-by-stage progressive and adaptive search i) prevents the final solution from being trapped in local optima due to poor initialisation, a common problem encountered by cascaded regression approaches; and ii) improves the robustness in coping with large pose variations. The framework demonstrates real-time performance and state-of-the-art results on various benchmarks including the challenging 300-W dataset.",
"We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability.",
""
]
} |
1608.05594 | 2510753236 | In the graph database literature the term "join" does not refer to an operator used to merge two graphs. In particular, a counterpart of the relational join is not present in existing graph query languages, and consequently no efficient algorithms have been developed for this operator. This paper provides two main contributions. First, we define a binary graph join operator that acts on the vertices as a standard relational join and combines the edges according to a user-defined semantics. Then we propose the "CoGrouped Graph Conjunctive @math -Join" algorithm running over data indexed in secondary memory. Our implementation outperforms the execution of the same operation in Cypher and SPARQL on major existing graph database management systems by at least one order of magnitude, also including indexing and loading time. | GraphLOG The @cite_19 query language subsumes a property graph data structure where no properties are associated with either vertices or edges. The query language is conceived to be visually representable, and hence path queries are represented as graphs, where simple regular expressions can be associated with the edges. The concept of visually representing graph traversal queries involving path regexes was later adopted in @cite_13 , where some algorithms are shown for implementing such a query language in polynomial time. That language does not support some path summarization queries that were introduced in GraphLOG @cite_22 . | {
"cite_N": [
"@cite_19",
"@cite_13",
"@cite_22"
],
"mid": [
"1997520998",
"1752606664",
""
],
"abstract": [
"We present a query language called GraphLog, based on a graph representation of both data and queries. Queries are graph patterns. Edges in queries represent edges or paths in the database. Regular expressions are used to qualify these paths. We characterize the expressive power of the language and show that it is equivalent to stratified linear Datalog, first order logic with transitive closure, and non-deterministic logarithmic space (assuming ordering on the domain). The fact that the latter three classes coincide was not previously known. We show how GraphLog can be extended to incorporate aggregates and path summarization, and describe briefly our current prototype implementation.",
"It is increasingly common to find graphs in which edges are of different types, indicating a variety of relationships. For such graphs we propose a class of reachability queries and a class of graph patterns, in which an edge is specified with a regular expression of a certain form, expressing the connectivity of a data graph via edges of various types. In addition, we define graph pattern matching based on a revised notion of graph simulation. On graphs in emerging applications such as social networks, we show that these queries are capable of finding more sensible information than their traditional counterparts. Better still, their increased expressive power does not come with extra complexity. Indeed, (1) we investigate their containment and minimization problems, and show that these fundamental problems are in quadratic time for reachability queries and are in cubic time for pattern queries. (2) We develop an algorithm for answering reachability queries, in quadratic time as for their traditional counterpart. (3) We provide two cubic-time algorithms for evaluating graph pattern queries, as opposed to the NP-completeness of graph pattern matching via subgraph isomorphism. (4) The effectiveness and efficiency of these algorithms are experimentally verified using real-life data and synthetic data.",
""
]
} |
1608.05594 | 2510753236 | In the graph database literature the term "join" does not refer to an operator used to merge two graphs. In particular, a counterpart of the relational join is not present in existing graph query languages, and consequently no efficient algorithms have been developed for this operator. This paper provides two main contributions. First, we define a binary graph join operator that acts on the vertices as a standard relational join and combines the edges according to a user-defined semantics. Then we propose the "CoGrouped Graph Conjunctive @math -Join" algorithm running over data indexed in secondary memory. Our implementation outperforms the execution of the same operation in Cypher and SPARQL on major existing graph database management systems by at least one order of magnitude, also including indexing and loading time. | NautiLOD The @cite_34 query language was conceived for performing path queries (defined through regular expressions) over RDF graphs with recursion operators (Kleene Star). The same paper shows that queries can be evaluated in polynomial time. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2037516787"
],
"abstract": [
"The Web of Linked Data is a huge graph of distributed and interlinked datasources fueled by structured information. This new environment calls for formal languages and tools to automatize navigation across datasources (nodes in such graph) and enable semantic-aware and Web-scale search mechanisms. In this article we introduce a declarative navigational language for the Web of Linked Data graph called N auti LOD. N auti LOD enables one to specify datasources via the intertwining of navigation and querying capabilities. It also features a mechanism to specify actions (e.g., send notification messages) that obtain their parameters from datasources reached during the navigation. We provide a formalization of the N auti LOD semantics, which captures both nodes and fragments of the Web of Linked Data. We present algorithms to implement such semantics and study their computational complexity. We discuss an implementation of the features of N auti LOD in a tool called swget, which exploits current Web technologies and protocols. We report on the evaluation of swget and its comparison with related work. Finally, we show the usefulness of capturing Web fragments by providing examples in different knowledge domains."
]
} |
1608.05594 | 2510753236 | In the graph database literature the term "join" does not refer to an operator used to merge two graphs. In particular, a counterpart of the relational join is not present in existing graph query languages, and consequently no efficient algorithms have been developed for this operator. This paper provides two main contributions. First, we define a binary graph join operator that acts on the vertices as a standard relational join and combines the edges according to a user-defined semantics. Then we propose the "CoGrouped Graph Conjunctive @math -Join" algorithm running over data indexed in secondary memory. Our implementation outperforms the execution of the same operation in Cypher and SPARQL on major existing graph database management systems by at least one order of magnitude, also including indexing and loading time. | Gremlin Another graph traversal language, Gremlin, has been proved to be Turing complete @cite_11 . However, this is not a desirable feature for a query language, which must guarantee that every query returns an answer and that query evaluation always terminates. Another problem with this query language lies in its semantics: while all the other graph traversal languages return the desired subgraph, Gremlin returns a bag of values (e.g. vertices, values, edges). This peculiarity does not allow the user to take advantage of partial query evaluations and to combine them into a final result. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1893177189"
],
"abstract": [
"Gremlin is a graph traversal machine and language designed, developed, and distributed by the Apache TinkerPop project. Gremlin, as a graph traversal machine, is composed of three interacting components: a graph, a traversal, and a set of traversers. The traversers move about the graph according to the instructions specified in the traversal, where the result of the computation is the ultimate locations of all halted traversers. A Gremlin machine can be executed over any supporting graph computing system such as an OLTP graph database and or an OLAP graph processor. Gremlin, as a graph traversal language, is a functional language implemented in the user's native programming language and is used to define the traversal of a Gremlin machine. This article provides a mathematical description of Gremlin and details its automaton and functional properties. These properties enable Gremlin to naturally support imperative and declarative querying, host language agnosticism, user-defined domain specific languages, an extensible compiler optimizer, single- and multi-machine execution models, hybrid depth- and breadth-first evaluation, as well as the existence of a Universal Gremlin Machine and its respective entailments."
]
} |
1608.05594 | 2510753236 | In the graph database literature the term "join" does not refer to an operator used to merge two graphs. In particular, a counterpart of the relational join is not present in existing graph query languages, and consequently no efficient algorithms have been developed for this operator. This paper provides two main contributions. First, we define a binary graph join operator that acts on the vertices as a standard relational join and combines the edges according to a user-defined semantics. Then we propose the "CoGrouped Graph Conjunctive @math -Join" algorithm running over data indexed in secondary memory. Our implementation outperforms the execution of the same operation in Cypher and SPARQL on major existing graph database management systems by at least one order of magnitude, also including indexing and loading time. | Discrete Mathematics At the time of writing, the only field where graph joins were effectively discussed is Discrete Mathematics. In this field such operations are defined either on finite graphs or on finite graphs with cycles, and are named graph products @cite_2 . As the name suggests, every graph product of two graphs, e.g. @math and @math , produces a graph whose vertex set is defined as @math , while the edge set changes according to the particular graph product definition. Consequently the Kronecker Graph Product @cite_37 is defined as follows: while the Cartesian Graph Product @cite_14 is defined as follows: Please observe that this definition creates a new vertex which is a pair of vertices: hence such an operation is defined differently from the relational algebra's Cartesian product, where the two vertices are merged. As a consequence, such graph products admit commutativity and associativity properties only up to graph isomorphism. Other graph products are described in @cite_2 @cite_26 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_2"
],
"mid": [
"",
"2127749196",
"71943752",
"2341256025"
],
"abstract": [
"",
"We present an algorithm that determines the prime factors of connected graphs with respect to the Cartesian product in linear time and space. This improves a result of [Cartesian graph factorization at logarithmic cost per edge, Comput. Complexity 2 (1992) 331-349], who compute the prime factors in O(mlogn) time, where m denotes the number of vertices of G and n the number of edges. Our algorithm is conceptually simpler. It gains its efficiency by the introduction of edge-labellings.",
"Basic Concepts. Hypercubes. Hamming Graphs. Cartesian Products. Strong and Direct Products. Lexicographic Products. Fast Recognition Algorithms. Invariants. Appendices. Bibliography. Indexes.",
"Handbook of Product Graphs, Second Edition examines the dichotomy between the structure of products and their subgraphs. It also features the design of efficient algorithms that recognize products and their subgraphs and explores the relationship between graph parameters of the product and factors. Extensively revised and expanded, the handbook presents full proofs of many important results as well as up-to-date research and conjectures. Results and Algorithms New to the Second Edition: Cancellation results A quadratic recognition algorithm for partial cubes Results on the strong isometric dimension Computing the Wiener index via canonical isometric embedding Connectivity results A fractional version of Hedetniemis conjecture Results on the independence number of Cartesian powers of vertex-transitive graphs Verification of Vizings conjecture for chordal graphs Results on minimum cycle bases Numerous selected recent results, such as complete minors and nowhere-zero flows The second edition of this classic handbook provides a thorough introduction to the subject and an extensive survey of the field. The first three parts of the book cover graph products in detail. The authors discuss algebraic properties, such as factorization and cancellation, and explore interesting and important classes of subgraphs. The fourth part presents algorithms for the recognition of products and related classes of graphs. The final two parts focus on graph invariants and infinite, directed, and product-like graphs. Sample implementations of selected algorithms and other information are available on the books website, which can be reached via the authors home pages."
]
} |
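To make the difference between the two products in the row above concrete, here is a minimal sketch on directed graphs represented as (vertex set, edge set) pairs. This is an illustrative reconstruction of the standard textbook definitions, not code from the cited works:

```python
from itertools import product

def kronecker_product(g1, g2):
    # Tensor (Kronecker) product: ((u1, u2), (w1, w2)) is an edge
    # iff (u1, w1) is an edge of g1 AND (u2, w2) is an edge of g2.
    (v1, e1), (v2, e2) = g1, g2
    vertices = set(product(v1, v2))
    edges = {((u1, u2), (w1, w2)) for (u1, w1) in e1 for (u2, w2) in e2}
    return vertices, edges

def cartesian_product(g1, g2):
    # Cartesian product: copy g2's edges inside each "column" fixed by a
    # vertex of g1, and g1's edges inside each "row" fixed by a vertex of g2.
    (v1, e1), (v2, e2) = g1, g2
    vertices = set(product(v1, v2))
    edges = ({((u, a), (u, b)) for u in v1 for (a, b) in e2}
             | {((a, u), (b, u)) for u in v2 for (a, b) in e1})
    return vertices, edges

# Both products pair vertices instead of merging them, which is why
# commutativity and associativity hold only up to graph isomorphism.
p2 = ({0, 1}, {(0, 1)})             # a single directed edge 0 -> 1
vk, ek = kronecker_product(p2, p2)  # one edge: ((0, 0), (1, 1))
vc, ec = cartesian_product(p2, p2)  # four edges forming a directed square
```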
1608.05528 | 2516846622 | Recent work has demonstrated that state-of-the-art word embedding models require different context types to produce high-quality representations for different word classes such as adjectives (A), verbs (V), and nouns (N). This paper is concerned with identifying contexts useful for learning A V N-specific representations. We introduce a simple yet effective framework for selecting class-specific context configurations that yield improved representations for each class. We propose an automatic A* style selection algorithm that effectively searches only a fraction of the large configuration space. The results on predicting similarity scores for the A, V, and N subsets of the benchmarking SimLex-999 evaluation set indicate that our method is useful for each class: the improvements are 6 (A), 6 (V), and 5 (N) over the best previously proposed context type for each class. At the same time, the model trains on only 14 (A), 26.2 (V), and 33.6 (N) of all dependency-based contexts, resulting in much shorter training time. | Word representation models typically train on ( word, context ) pairs. Traditionally, most models use bag-of-words (BOW) contexts, which represent a word using its neighbouring words, irrespective of the syntactic or semantic relations between them [inter alia] Collobert:2011jmlr,Mikolov:2013nips,Mnih:2013nips,Pennington:2014emnlp . Several alternative context types have been proposed, motivated by the limitations of BOW contexts, most notably their focus on topical rather than functional similarity (e.g., vs. ). These include dependency contexts @cite_8 @cite_12 , pattern contexts @cite_16 @cite_32 and substitute vectors @cite_29 @cite_28 . | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_32",
"@cite_16",
"@cite_12"
],
"mid": [
"2005181355",
"2294874432",
"1831478036",
"1241017059",
"2140480387",
"2251771443"
],
"abstract": [
"Traditionally, vector-based semantic space models use word co-occurrence counts from large corpora to represent lexical meaning. In this article we present a novel framework for constructing semantic spaces that takes syntactic relations into account. We introduce a formalization for this class of models, which allows linguistic knowledge to guide the construction process. We evaluate our framework on a range of tasks relevant for cognitive science and natural language processing: semantic priming, synonymy detection, and word sense disambiguation. In all cases, our framework obtains results that are comparable or superior to the state of the art.",
"Context representations are a key element in distributional models of word meaning. In contrast to typical representations based on neighboring words, a recently proposed approach suggests to represent a context of a target word by a substitute vector, comprising the potential fillers for the target word slot in that context. In this work we first propose a variant of substitute vectors, which we find particularly suitable for measuring context similarity. Then, we propose a novel model for representing word meaning in context based on this context representation. Our model outperforms state-of-the-art results on lexical substitution tasks in an unsupervised setting.",
"We investigate paradigmatic representations of word context in the domain of unsupervised syntactic category acquisition. Paradigmatic representations of word context are based on potential substitutes of a word in contrast to syntagmatic representations based on properties of neighboring words. We compare a bigram based baseline model with several paradigmatic models and demonstrate significant gains in accuracy. Our best model based on Euclidean co-occurrence embedding combines the paradigmatic context representation with morphological and orthographic features and achieves 80 many-to-one accuracy on a 45-tag 1M word corpus.",
"We present a novel word level vector representation based on symmetric patterns (SPs). For this aim we automatically acquire SPs (e.g., “X and Y”) from a large corpus of plain text, and generate vectors where each coordinate represents the cooccurrence in SPs of the represented word with another word of the vocabulary. Our representation has three advantages over existing alternatives: First, being based on symmetric word relationships, it is highly suitable for word similarity prediction. Particularly, on the SimLex999 word similarity dataset, our model achieves a Spearman’s score of 0.517, compared to 0.462 of the state-of-the-art word2vec model. Interestingly, our model performs exceptionally well on verbs, outperforming stateof-the-art baselines by 20.2‐41.5 . Second, pattern features can be adapted to the needs of a target NLP application. For example, we show that we can easily control whether the embeddings derived from SPs deem antonym pairs (e.g. (big,small)) as similar or dissimilar, an important distinction for tasks such as word classification and sentiment analysis. Finally, we show that a simple combination of the word similarity scores generated by our method and by word2vec results in a superior predictive power over that of each individual model, scoring as high as 0.563 in Spearman’s on SimLex999. This emphasizes the differences between the signals captured by each of the models.",
"Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meaning in terms of undifferentiated bags of words or topical dimensions. This has led some to question their psychological plausibility (Murphy, 2002; Schunn, 1999). We present here a fully automatic method for extracting a structured and comprehensive set of concept descriptions directly from an English part-of-speech-tagged corpus. Concepts are characterized by weighted properties, enriched with concept–property types that approximate classical relations such as hypernymy and function. Our model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures (discovering properties of concepts, grouping properties by property type) but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach.",
"While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by to include arbitrary contexts. In particular, we perform experiments with dependency-based contexts, and show that they produce markedly different embeddings. The dependencybased embeddings are less topical and exhibit more functional similarity than the original skip-gram embeddings."
]
} |
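The contrast between bag-of-words contexts and dependency contexts discussed in the row above can be sketched in a few lines. The parse triples and the `word/relation` context labels follow the style of Levy and Goldberg's dependency-based embeddings, but the sentence and parse here are hand-written simplifications, not output of a real parser:

```python
# Toy dependency parse of "australian scientist discovers star":
# (head, dependent, relation) triples, hand-written for illustration.
parse = [
    ("scientist", "australian", "amod"),
    ("discovers", "scientist", "nsubj"),
    ("discovers", "star", "dobj"),
]

def bow_contexts(tokens, window=2):
    # Bag-of-words contexts: every neighbour inside a symmetric window,
    # irrespective of any syntactic relation between the words.
    pairs = []
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((w, tokens[j]))
    return pairs

def dep_contexts(parse):
    # Dependency contexts: each arc yields a typed context for both
    # endpoints; "-1" marks the inverse direction of the relation.
    pairs = []
    for head, dep, rel in parse:
        pairs.append((head, f"{dep}/{rel}"))
        pairs.append((dep, f"{head}/{rel}-1"))
    return pairs
```

The typed contexts are what makes the resulting embeddings lean towards functional rather than topical similarity: "discovers" is paired with "star/dobj", not merely with the nearby word "star".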
1608.05528 | 2516846622 | Recent work has demonstrated that state-of-the-art word embedding models require different context types to produce high-quality representations for different word classes such as adjectives (A), verbs (V), and nouns (N). This paper is concerned with identifying contexts useful for learning A V N-specific representations. We introduce a simple yet effective framework for selecting class-specific context configurations that yield improved representations for each class. We propose an automatic A* style selection algorithm that effectively searches only a fraction of the large configuration space. The results on predicting similarity scores for the A, V, and N subsets of the benchmarking SimLex-999 evaluation set indicate that our method is useful for each class: the improvements are 6 (A), 6 (V), and 5 (N) over the best previously proposed context type for each class. At the same time, the model trains on only 14 (A), 26.2 (V), and 33.6 (N) of all dependency-based contexts, resulting in much shorter training time. | Previous attempts at specialising word representations for a particular relation (e.g., similarity vs relatedness, antonyms) operate in one of two frameworks: (1) modifying the prior or the regularisation of the original training procedure @cite_3 @cite_11 @cite_35 @cite_37 @cite_17 ; (2) post-processing procedures which use lexical knowledge to refine previously trained word vectors @cite_2 @cite_11 @cite_21 . Our work suggests that the induced representations can be specialised by directly training the word representation model with carefully selected contexts. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_11",
"@cite_21",
"@cite_3",
"@cite_2",
"@cite_17"
],
"mid": [
"2250683455",
"2251507550",
"1814992895",
"",
"2250930514",
"2250539671",
"2251830157"
],
"abstract": [
"In this paper, we propose a general framework to incorporate semantic knowledge into the popular data-driven learning process of word embeddings to improve the quality of them. Under this framework, we represent semantic knowledge as many ordinal ranking inequalities and formulate the learning of semantic word embeddings (SWE) as a constrained optimization problem, where the data-derived objective function is optimized subject to all ordinal knowledge inequality constraints extracted from available knowledge resources such as Thesaurus and WordNet. We have demonstrated that this constrained optimization problem can be efficiently solved by the stochastic gradient descent (SGD) algorithm, even for a large number of inequality constraints. Experimental results on four standard NLP tasks, including word similarity measure, sentence completion, name entity recognition, and the TOEFL synonym selection, have all demonstrated that the quality of learned word vectors can be significantly improved after semantic knowledge is incorporated as inequality constraints during the learning process of word embeddings.",
"We demonstrate the advantage of specializing semantic word embeddings for either similarity or relatedness. We compare two variants of retrofitting and a joint-learning approach, and find that all three yield specialized semantic spaces that capture human intuitions regarding similarity and relatedness better than unspecialized spaces. We also show that using specialized spaces in NLP tasks and applications leads to clear improvements, for document classification and synonym selection, which rely on either similarity or relatedness but not both.",
"The Paraphrase Database (PPDB; , 2013) is an extensive semantic resource, consisting of a list of phrase pairs with (heuristic) confidence estimates. However, it is still unclear how it can best be used, due to the heuristic nature of the confidences and its necessarily incomplete coverage. We propose models to leverage the phrase pairs from the PPDB to build parametric paraphrase models that score paraphrase pairs more accurately than the PPDB’s internal scores while simultaneously improving its coverage. They allow for learning phrase embeddings as well as improved word embeddings. Moreover, we introduce two new, manually annotated datasets to evaluate short-phrase paraphrasing models. Using our paraphrase model trained using PPDB, we achieve state-of-the-art results on standard word and bigram similarity tasks and beat strong baselines on our new short phrase paraphrase tasks.",
"",
"Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"We introduce an extension to the bag-ofwords model for learning words representations that take into account both syntactic and semantic properties within language. This is done by employing an attention model that finds within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag of words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context of the model."
]
} |
1608.05094 | 2517408916 | We consider compressed sensing (CS) using partially coherent sensing matrices. Most of CS theory to date is focused on incoherent sensing, that is, columns from the sensing matrix are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly, but instead small errors in the support and signal are tolerable. In this paper, we focus on @math -tolerant recovery, in which support set reconstructions are considered accurate when their locations match the true locations within @math indices. Despite the abundance of work utilizing incoherent sensing matrices, for @math -tolerant recovery we suggest that coherence is actually . This is especially true for situations with only a few and very noisy measurements as we demonstrate via numerical simulations. As a first step towards the theory of tolerant coherent sensing we introduce the notions of @math -coherence and @math -tolerant recovery. We then provide some theoretical arguments for a greedy algorithm applicable to @math -tolerant recovery of signals with sufficiently spread support. | The literature on OMP-related methods using partially coherent sensing matrices can be summarized as follows. In @cite_15 multiple extensions to existing algorithms were formulated. The authors proved, and showed numerically, that by introducing a band-exclusion method they were able to recover signals in a specific sense: each non-zero of the original signal has a counterpart in the reconstruction, which is, however, allowed to be located within a small distance of the true index. Thus the "tolerance" would be @math . Further, a condition related to the ERC @cite_6 is required, and the signals are assumed to have support that is spread out enough so that coherent columns do not appear in the support indices.
The work @cite_2 also considers spread signals, seeking accurate signal recovery and attempting to overcome coherence in the sampling matrix. | {
"cite_N": [
"@cite_15",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"2116148865",
"2133285942"
],
"abstract": [
"",
"This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.",
"Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals based on randomized dimensionality reduction. To recover a signal from its compressive measurements, standard CS algorithms seek the sparsest signal in some discrete basis or frame that agrees with the measurements. A great many applications feature smooth or modulated signals that are frequency-sparse and can be modeled as a superposition of a small number of sinusoids; for such signals, the discrete Fourier transform (DFT) basis is a natural choice for CS recovery. Unfortunately, such signals are only sparse in the DFT domain when the sinusoid frequencies live precisely at the centers of the DFT bins; when this is not the case, CS recovery performance degrades signicantly. In this paper, we introduce the spectral CS (SCS) recovery framework for arbitrary frequencysparse signals. The key ingredients are an over-sampled DFT frame and a restricted unionof-subspaces signal model that inhibits closely spaced sinusoids. We demonstrate that SCS signicantly outperforms current state-of-the-art CS algorithms based on the DFT while providing provable bounds on the number of measurements required for stable recovery. We also leverage line spectral estimation methods (specically Thomson’s multitaper method"
]
} |
1608.05094 | 2517408916 | We consider compressed sensing (CS) using partially coherent sensing matrices. Most of CS theory to date is focused on incoherent sensing, that is, columns from the sensing matrix are highly uncorrelated. However, sensing systems with naturally occurring correlations arise in many applications, such as signal detection, motion detection and radar. Moreover, in these applications it is often not necessary to know the support of the signal exactly, but instead small errors in the support and signal are tolerable. In this paper, we focus on @math -tolerant recovery, in which support set reconstructions are considered accurate when their locations match the true locations within @math indices. Despite the abundance of work utilizing incoherent sensing matrices, for @math -tolerant recovery we suggest that coherence is actually beneficial. This is especially true for situations with only a few and very noisy measurements as we demonstrate via numerical simulations. As a first step towards the theory of tolerant coherent sensing we introduce the notions of @math -coherence and @math -tolerant recovery. We then provide some theoretical arguments for a greedy algorithm applicable to @math -tolerant recovery of signals with sufficiently spread support. | Along a different line of work, @cite_7 shows that mild coherence in the sensing matrix can be allowed when the signal is modeled as random. In this case, accurate recovery is still possible when the coherence scales like @math . Here again, in this setting the goal is exact recovery and the coherence is something that needs to be overcome, not something that aids in recovery. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2115447612"
],
"abstract": [
"We consider the fundamental problem of estimating the mean of a vector y=Xβ+z, where X is an n×p design matrix in which one can have far more variables than observations, and z is a stochastic error term — the so-called “p>n” setup. When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm. We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases. Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear."
]
} |
1608.05180 | 2952229497 | Object cutout is a fundamental operation for image editing and manipulation, yet it is extremely challenging to automate it in real-world images, which typically contain considerable background clutter. In contrast to existing cutout methods, which are based mainly on low-level image analysis, we propose a more holistic approach, which considers the entire shape of the object of interest by leveraging higher-level image analysis and learnt global shape priors. Specifically, we leverage a deep neural network (DNN) trained for objects of a particular class (chairs) for realizing this mechanism. Given a rectangular image region, the DNN outputs a probability map (P-map) that indicates for each pixel inside the rectangle how likely it is to be contained inside an object from the class of interest. We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask. This amounts to an automatic end-to-end pipeline for category-specific object cutout. We evaluate our approach on segmentation benchmark datasets, and show that it significantly outperforms the state-of-the-art on them. | Image segmentation is the process of partitioning an image into multiple segments of similar appearance. The problem can be formulated as a clustering problem in color space @cite_19 . To incorporate more spatial constraints into the process, the image may be modeled as a graph, converting image segmentation into a graph partition problem. The weights on the graph edges can either be inferred from pixel colors @cite_15 or from sparse user input, as an addition @cite_26 . Algorithms have been proposed for efficiently computing the partition, even when the pixels are densely connected (DenseCRF) @cite_16 . 
Such methods are capable of inferring a sharp segmentation mask from sparse or fuzzy probabilities, and thus are widely used as a post-process for methods that produce segmentation probability maps. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_26",
"@cite_16"
],
"mid": [
"2067191022",
"1999478155",
"2124351162",
"2161236525"
],
"abstract": [
"A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks - discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions.",
"The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.",
"Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy."
]
} |
1608.05180 | 2952229497 | Object cutout is a fundamental operation for image editing and manipulation, yet it is extremely challenging to automate it in real-world images, which typically contain considerable background clutter. In contrast to existing cutout methods, which are based mainly on low-level image analysis, we propose a more holistic approach, which considers the entire shape of the object of interest by leveraging higher-level image analysis and learnt global shape priors. Specifically, we leverage a deep neural network (DNN) trained for objects of a particular class (chairs) for realizing this mechanism. Given a rectangular image region, the DNN outputs a probability map (P-map) that indicates for each pixel inside the rectangle how likely it is to be contained inside an object from the class of interest. We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask. This amounts to an automatic end-to-end pipeline for category-specific object cutout. We evaluate our approach on segmentation benchmark datasets, and show that it significantly outperforms the state-of-the-art on them. | Instead of grouping pixels only by appearance, semantic segmentation forms segments by grouping pixels belonging to the same semantic objects; thus, a single segment might contain heterogeneous appearances. Since such segmentation depends on semantic understanding of the image content, state-of-the-art methods operate by running classification neural networks on patches densely sampled from the image in order to predict the semantic label of their central pixels @cite_28 @cite_27 @cite_33 . Instead, @cite_11 proposed a DeconvNet to directly output a high-resolution semantic segmentation. 
We leverage DeconvNet for solving the more challenging object cutout problem by adapting and training it extensively on objects from a specific class. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_33",
"@cite_11"
],
"mid": [
"1903029394",
"1529410181",
"",
"2952637581"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network."
]
} |
1608.05180 | 2952229497 | Object cutout is a fundamental operation for image editing and manipulation, yet it is extremely challenging to automate it in real-world images, which typically contain considerable background clutter. In contrast to existing cutout methods, which are based mainly on low-level image analysis, we propose a more holistic approach, which considers the entire shape of the object of interest by leveraging higher-level image analysis and learnt global shape priors. Specifically, we leverage a deep neural network (DNN) trained for objects of a particular class (chairs) for realizing this mechanism. Given a rectangular image region, the DNN outputs a probability map (P-map) that indicates for each pixel inside the rectangle how likely it is to be contained inside an object from the class of interest. We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask. This amounts to an automatic end-to-end pipeline for category-specific object cutout. We evaluate our approach on segmentation benchmark datasets, and show that it significantly outperforms the state-of-the-art on them. | Object cutout further pushes semantic segmentation from category-level to instance-level. The additional challenge is that objects with similar appearance may hinder the cutout accuracy for individual instances. The state-of-the-art addresses the object cutout problem by solving it jointly with detection @cite_2 @cite_22 , object number prediction @cite_31 , or by explicitly modeling the occlusion interactions between different instances @cite_8 @cite_13 . Though significant progress has been made recently, the performance on some object categories is still very low. 
In this work, we take advantage of being able to utilize training data synthesized from 3D models @cite_24 , and focus on leveraging rich holistic shape priors for addressing segmentation ambiguities. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_24",
"@cite_2",
"@cite_31",
"@cite_13"
],
"mid": [
"2952302801",
"130423592",
"1591870335",
"",
"2223259665",
"1927486677"
],
"abstract": [
"In this work, we propose a novel Reversible Recursive Instance-level Object Segmentation (R2-IOS) framework to address the challenging instance-level object segmentation task. R2-IOS consists of a reversible proposal refinement sub-network that predicts bounding box offsets for refining the object proposal locations, and an instance-level segmentation sub-network that generates the foreground mask of the dominant object instance in each proposal. By being recursive, R2-IOS iteratively optimizes the two sub-networks during joint training, in which the refined object proposals and improved segmentation predictions are alternately fed into each other to progressively increase the network capabilities. By being reversible, the proposal refinement sub-network adaptively determines an optimal number of refinement iterations required for each proposal during both training and testing. Furthermore, to handle multiple overlapped instances within a proposal, an instance-aware denoising autoencoder is introduced into the segmentation sub-network to distinguish the dominant object from other distracting instances. Extensive experiments on the challenging PASCAL VOC 2012 benchmark well demonstrate the superiority of R2-IOS over other state-of-the-art methods. In particular, the @math over @math classes at @math IoU achieves @math , which significantly outperforms the results of @math by PFN PFN and @math by liu2015multi .",
"A major limitation of existing models for semantic segmentation is the inability to identify individual instances of the same class: when labeling pixels with only semantic classes, a set of pixels with the same label could represent a single object or ten. In this work, we introduce a model to perform both semantic and instance segmentation simultaneously. We introduce a new higher-order loss function that directly minimizes the coverage metric and evaluate a variety of region features, including those from a convolutional network. We apply our model to the NYU Depth V2 dataset, obtaining state of the art results.",
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"",
"Instance-level object segmentation is an important yet under-explored task. The few existing studies are almost all based on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating accurate region proposals itself is quite challenging. In this work, we propose a Proposal-Free Network (PFN ) to address the instance-level object segmentation problem, which outputs the instance numbers of different categories and the pixel-level information on 1) the coordinates of the instance bounding box each pixel belongs to, and 2) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained in an end-to-end way without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate that the proposed PFN solution well beats the state-of-the-arts for instance-level object segmentation. In particular, the @math over 20 classes at 0.5 IoU reaches 58.7 by PFN, significantly higher than 43.8 and 46.3 by the state-of-the-art algorithms, SDS [9] and [16], respectively.",
"We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively."
]
} |
1608.05180 | 2952229497 | Object cutout is a fundamental operation for image editing and manipulation, yet it is extremely challenging to automate it in real-world images, which typically contain considerable background clutter. In contrast to existing cutout methods, which are based mainly on low-level image analysis, we propose a more holistic approach, which considers the entire shape of the object of interest by leveraging higher-level image analysis and learnt global shape priors. Specifically, we leverage a deep neural network (DNN) trained for objects of a particular class (chairs) for realizing this mechanism. Given a rectangular image region, the DNN outputs a probability map (P-map) that indicates for each pixel inside the rectangle how likely it is to be contained inside an object from the class of interest. We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask. This amounts to an automatic end-to-end pipeline for category-specific object cutout. We evaluate our approach on segmentation benchmark datasets, and show that it significantly outperforms the state-of-the-art on them. | Recently, exciting advances have been made in image-based 3D object retrieval and object view estimation @cite_10 @cite_1 @cite_24 . Such efforts are quite related to object cutout, as the retrieved 3D model can be rendered in the estimated view to approximate the object in the image, thus providing a strong prior for cutout. However, we found that the gap between projected proxies and accurate cutout masks cannot be easily bridged. One reason is that there are only a few models in the existing shape databases that match well with real-world objects. 
The inherent mismatch between 3D database and real world objects, plus the introduced retrieval and view estimation errors, render it infeasible to compute object cutout through such an approach, in general cases. | {
"cite_N": [
"@cite_1",
"@cite_10",
"@cite_24"
],
"mid": [
"2083163329",
"2010625607",
"1591870335"
],
"abstract": [
"Both 3D models and 2D images contain a wealth of information about everyday objects in our environment. However, it is difficult to semantically link together these two media forms, even when they feature identical or very similar objects. We propose a joint embedding space populated by both 3D shapes and 2D images of objects, where the distances between embedded entities reflect similarity between the underlying objects. This joint embedding space facilitates comparison between entities of either form, and allows for cross-modality retrieval. We construct the embedding space using 3D shape similarity measure, as 3D shapes are more pure and complete than their appearance in images, leading to more robust distance metrics. We then employ a Convolutional Neural Network (CNN) to \"purify\" images by muting distracting factors. The CNN is trained to map an image to a point in the embedding space, so that it is close to a point attributed to a 3D model of a similar object to the one depicted in the image. This purifying capability of the CNN is accomplished with the help of a large amount of training data consisting of images synthesized from 3D shapes. Our joint embedding allows cross-view image retrieval, image-based shape retrieval, as well as shape-based image retrieval. We evaluate our method on these retrieval tasks and show that it consistently out-performs state-of-the-art methods, and demonstrate the usability of a joint embedding in a number of additional applications.",
"This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.",
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark."
]
} |
1608.05159 | 2512998417 | Most existing detection pipelines treat object proposals independently and predict bounding box locations and classification scores over them separately. However, the important semantic and spatial layout correlations among proposals are often ignored, which are actually useful for more accurate object detection. In this work, we propose a new EM-like group recursive learning approach to iteratively refine object proposals by incorporating such context of surrounding proposals and provide an optimal spatial configuration of object detections. In addition, we propose to incorporate the weakly-supervised object segmentation cues and region-based object detection into a multi-stage architecture in order to fully exploit the learned segmentation features for better object detection in an end-to-end way. The proposed architecture consists of three cascaded networks which respectively learn to perform weakly-supervised object segmentation, object proposal generation and recursive detection refinement. Combining the group recursive learning and the multi-stage architecture provides competitive mAPs of 78.6 and 74.9 on the PASCAL VOC2007 and VOC2012 datasets respectively, which outperforms many well-established baselines [10] [20] significantly. | In recent years, several works have proposed to incorporate segmentation techniques to assist object detection in different ways. For example, Parkhi @cite_0 improved the predicted bounding box with color models from predicted rectangles on cat and dog faces. Dai @cite_16 proposed to use segments extracted for each object detection hypothesis to accurately localize detected objects. Other research has exploited segmentation to generate object detection hypotheses for better localization. Segmentation was adopted as a selective search strategy to generate the best locations for object recognition in @cite_8 . 
Arbelaez @cite_6 proposed a hierarchical segmenter that leverages multiscale information and a grouping algorithm to produce accurate object candidates. Instead of using segmentation for better localizing detections, Fidler @cite_14 took advantage of semantic segmentation results @cite_22 to more accurately score detections. In this work, we propose a unified framework to incorporate semantic segmentation features for both object proposal generation and better scoring and localizing detections. In addition, a group recursive learning strategy is employed to recursively refine the scores and locations of the detections, thus achieving more precise predictions. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_16"
],
"mid": [
"",
"78159342",
"",
"1991367009",
"2039507552",
"2056933870"
],
"abstract": [
"",
"Feature extraction, coding and pooling are important components of many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.",
"",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"Template-based object detectors such as the deformable parts model of [11] achieve state-of-the-art performance for a variety of object categories, but are still outperformed by simpler bag-of-words models for highly flexible objects such as cats and dogs. In these cases we propose to use the template-based model to detect a distinctive part for the class, followed by detecting the rest of the object via segmentation on image specific information learnt from that part. This approach is motivated by two observations: (i) many object classes contain distinctive parts that can be detected very reliably by template-based detectors, whilst the entire object cannot; (ii) many classes (e.g. animals) have fairly homogeneous coloring and texture that can be used to segment the object once a sample is provided in an image. We show quantitatively that our method substantially outperforms whole-body template-based detectors for these highly deformable object categories, and indeed achieves accuracy comparable to the state-of-the-art on the PASCAL VOC competition, which includes other models such as bag-of-words.",
"In this paper, we propose an approach to accurately localize detected objects. The goal is to predict which features pertain to the object and define the object extent with segmentation or bounding box. Our initial detector is a slight modification of the DPM detector by , which often reduces confusion with background and other objects but does not cover the full object. We then describe and evaluate several color models and edge cues for local predictions, and we propose two approaches for localization: learned graph cut segmentation and structural bounding box prediction. Our experiments on the PASCAL VOC 2010 dataset show that our approach leads to accurate pixel assignment and large improvement in bounding box overlap, sometimes leading to large overall improvement in detection accuracy."
]
} |
1608.05046 | 2510719728 | Scientists often run experiments to distinguish competing theories. This requires patience, rigor, and ingenuity - there is often a large space of possible experiments one could run. But we need not comb this space by hand - if we represent our theories as formal models and explicitly declare the space of experiments, we can automate the search for good experiments, looking for those with high expected information gain. Here, we present a general and principled approach to experiment design based on probabilistic programming languages (PPLs). PPLs offer a clean separation between declaring problems and solving them, which means that the scientist can automate experiment design by simply declaring her model and experiment spaces in the PPL without having to worry about the details of calculating information gain. We demonstrate our system in two case studies drawn from cognitive psychology, where we use it to design optimal experiments in the domains of sequence prediction and categorization. We find strong empirical validation that our automatically designed experiments were indeed optimal. We conclude by discussing a number of interesting questions for future research. | The basic intuition behind OED---to find experiments that maximize some expected measure of informativeness---has been independently discovered in a number of fields, including physics @cite_7 , chemistry @cite_2 , biology @cite_12 @cite_4 , psychology @cite_9 , statistics @cite_0 , and machine learning @cite_8 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_12"
],
"mid": [
"",
"2160518205",
"2951249809",
"2068036624",
"",
"1579213377",
"2110690917"
],
"abstract": [
"",
"SUMMARY When designing an experiment, the aim is usually to find the design which minimizes expected post-experimental uncertainties on the model parameters. Classical methods for experimental design are shown to fail in nonlinear problems because they incorporate linearized design criteria. A more fundamental criterion is introduced which, in principle, can be used to design any nonlinear problem. The criterion is entropy-based and depends on the calculation of marginal probability distributions. In turn, this requires the numerical calculation of integrals for which we use Monte Carlo sampling. The choice of discretization in the parameter data space strongly influences the number of samples required. Thus, the only practical limitation for this technique appears to be computational power. A synthetic experiment with an oscillatory, highly nonlinear parameter‐data relationship and a simple seismic amplitude versus offset (AVO) experiment are used to demonstrate the method. Interestingly, in our AVO example, although overly coarse discretizations lead to incorrect evaluation of the entropy, the optimal design remains unchanged.",
"We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that if the observations are noisy, perhaps surprisingly, GBS can perform very poorly. We develop EC2, a novel, greedy active learning algorithm and prove that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing returns property called adaptive submodularity, generalizing the classical notion of submodular set functions to adaptive policies. Our results hold even if the tests have non-uniform cost and their noise is correlated. We also propose EffECXtive, a particularly fast approximation of EC2, and evaluate it on a Bayesian experimental design problem involving human subjects, intended to tease apart competing economic theories of how people make decisions under uncertainty.",
"Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method.",
"",
"The optimal selection of experimental conditions is essential in maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. A general Bayesian framework for optimal experimental design with nonlinear simulation-based models is proposed. The formulation accounts for uncertainty in model parameters, observables, and experimental conditions. Straightforward Monte Carlo evaluation of the objective function - which reflects expected information gain (Kullback-Leibler divergence) from prior to posterior - is intractable when the likelihood is computationally intensive. Instead, polynomial chaos expansions are introduced to capture the dependence of observables on model parameters and on design conditions. Under suitable regularity conditions, these expansions converge exponentially fast. Since both the parameter space and the design space can be high-dimensional, dimension-adaptive sparse quadrature is used to construct the polynomial expansions. Stochastic optimization methods will be used in the future to maximize the expected utility. While this approach is broadly applicable, it is demonstrated on a chemical kinetic system with strong nonlinearities. In particular, the Arrhenius rate parameters in a combustion reaction mechanism are estimated from observations of autoignition. Results show multiple order-of-magnitude speedups in both experimental design and parameter inference.",
"Motivation: Systems biology employs mathematical modelling to further our understanding of biochemical pathways. Since the amount of experimental data on which the models are parameterized is often limited, these models exhibit large uncertainty in both parameters and predictions. Statistical methods can be used to select experiments that will reduce such uncertainty in an optimal manner. However, existing methods for optimal experiment design (OED) rely on assumptions that are inappropriate when data are scarce considering model complexity. Results: We have developed a novel method to perform OED for models that cope with large parameter uncertainty. We employ a Bayesian approach involving importance sampling of the posterior predictive distribution to predict the efficacy of a new measurement at reducing the uncertainty of a selected prediction. We demonstrate the method by applying it to a case where we show that specific combinations of experiments result in more precise predictions. Availability and implementation: Source code is available at: http://bmi.bmt.tue.nl/sysbio/software/pua.html Contact: j.vanlier@tue.nl; N.A.W.v.Riel@tue.nl Supplementary information: Supplementary data are available at Bioinformatics online."
]
} |
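The row above is built around "expected information gain" as the criterion for choosing among candidate experiments. For two competing discrete theories and a discrete outcome space, the expected gain of a design is just the mutual information between the model indicator and the outcome, i.e. the expected KL divergence from prior to posterior. The function below is an illustrative sketch of that quantity (the names and the two-theory example are ours, not from the paper, which works with probabilistic programs rather than explicit likelihood tables):

```python
import math

def expected_information_gain(prior, likelihoods):
    """Mutual information I(Theta; Y) between the model indicator and
    the experiment outcome -- the expected KL from prior to posterior.
    prior: p(theta) over candidate models; likelihoods[t][y] = p(y | theta=t)."""
    n_y = len(likelihoods[0])
    # Marginal outcome distribution p(y) = sum_theta p(theta) p(y | theta).
    marg = [sum(p * lk[y] for p, lk in zip(prior, likelihoods)) for y in range(n_y)]

    def entropy(ps):
        return -sum(p * math.log2(p) for p in ps if p > 0)

    # I(Theta; Y) = H(Y) - E_theta[ H(Y | theta) ].
    return entropy(marg) - sum(p * entropy(lk) for p, lk in zip(prior, likelihoods))

# Two theories; experiment A separates them perfectly, experiment B not at all.
prior = [0.5, 0.5]
print(expected_information_gain(prior, [[1.0, 0.0], [0.0, 1.0]]))  # → 1.0 (bit)
print(expected_information_gain(prior, [[0.5, 0.5], [0.5, 0.5]]))  # → 0.0
```

Searching the declared design space for the experiment maximizing this quantity is exactly the automation the abstract describes.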
1608.04917 | 2512506520 | We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff’s Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. | To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others @cite_17 @cite_14 @cite_16 @cite_0 @cite_8 @cite_23 . 
These studies found that voting behavior is determined to a large extent---and when viewed over time, increasingly so---by affiliation to a political group, as an organizational reflection of the ideological position. The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor @cite_3 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the 'grand coalition' between the two big blocks of Christian Conservatives and Social Democrats as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis @cite_3 .
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_16",
"@cite_17"
],
"mid": [
"2007865189",
"2078812239",
"1885479328",
"2048702049",
"2046500494",
"",
"2165365036"
],
"abstract": [
"This article analyzes coalition formation within the European Parliament (EP) under the cooperation procedure through the analysis of a random sample of 100 roll call votes. The authors find that generally, coalitions form on the basis of ideology, not nationality, although they are able to identify some national groups that occasionally vote against the majority of their party group. More interestingly, they find that the political initiative within the EP belongs to the Left and that the majorities required at different stages affect not only the outcomes of votes but also the coalitions that will form. Finally, a slight variation is found in coalition building depending on the subject matter. On the basis of these findings, the authors suggest an alternative interpretation of the conflicts between the Council and EP based on an ideological conflict about more (EP) or less (Council) regulation, as opposed to more or less integration.",
"The European Parliament has be? come one of the most powerful insti? tutions in the European Union. Mem? bers of the European Parliament (MEPs) can now enact legislation, amend the European Union budget, veto the nominee for the European Union Commission President, and censure the Commission. But, we know little about what determines MEPs' voting behavior. Do they vote according to their personal policy preferences? Do the EP parties force MEPs to toe the party line? And, when national party and EP party preferences conflict, which way do MEPs respond?to the principals who control their election (the national parties) or the principals who control their influence in the EP (the EP par? ties)? The results reported here show that national party policies are the strongest predictors of voting behav? ior in the EP.",
"Introduction 1. Development of the European Parliament 2. Democracy, transaction costs and political parties 3. Ideological not territorial politics 4. Participation 5. Trends in party cohesion 6. Agenda setting and cohesion 7. Who controls the MEPs? 8. Competition and coalition formation 9. Dimensions of politics 10. Investiture and censure of the Santer Commission 11. The takeover directive Conclusion.",
"Members of the European Parliament (MEPs) typically follow one of two career paths, either advancing within the European Parliament itself or returning to higher offices in their home states. We argue that these different ambitions condition legislative behavior. Specifically, MEPs seeking domestic careers defect from group leadership votes more frequently and oppose legislation that expands the purview of supranational institutions. We show how individual, domestic-party, and national-level variables shape the careers available to MEPs and, in turn, their voting choices. To test the argument, we analyze MEPs’ roll-call voting behavior in the 5th session of the EP (1999–2004) using a random effects model that captures idiosyncrasies in voting behavior across both individual MEPs and specific roll-call votes.",
"We examined how voting behavior in the European Parliament changed after the European Union added ten new member-states in 2004. Using roll-call votes, we compared voting behavior in the first half of the Sixth European Parliament (July 2004-December 2006) with voting behavior in the previous Parliament (1999–2004). We looked at party cohesion, coalition formation, and the spatial map of voting by members of the European Parliament. We found stable levels of party cohesion and interparty coalitions that formed mainly around the left-right dimension. Ideological distance between parties was the strongest predictor of coalition preferences. Overall, the enlargement of the European Union in 2004 did not change the way politics works inside the European Parliament. We also looked at the specific case of the controversial Services Directive and found that ideology remained the main predictor of voting behavior, although nationality also played a role.",
"",
". The members of the European Parliament are elected in nationally organized and domestically oriented polls; however, in the Strasbourg Assembly they form transnational Party Groups or Europarties. The Rules of Procedure require such formations for the functioning of the Assembly, but Party Groups are much more than procedure requisites. They assemble elected representatives of national parties which share a consistent similarity in political ideologies and strategies. Party integration is a decisive development in the unification process of the Western European countries and it is expected to come from the Party Groups experience. The paper analyses such an issue by examining roll-call votes. Data include a systematic sample of votes cast during the first and second elected Parliament. The research looks into two fundamental items: (a) Party Group cohesion (an index of agreement is used to measure it); (b) voting line-ups of Party Groups. The aim is to point out the most important political cleavages and issues of the Community political system."
]
} |
1608.04917 | 2512506520 | We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff’s Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. | In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by @cite_8 @cite_36 @cite_23 . 
To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice @cite_17 @cite_36 @cite_25 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by @cite_36 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. | {
"cite_N": [
"@cite_8",
"@cite_36",
"@cite_23",
"@cite_25",
"@cite_17"
],
"mid": [
"2078812239",
"2142389944",
"2046500494",
"1997607583",
"2165365036"
],
"abstract": [
"The European Parliament has be? come one of the most powerful insti? tutions in the European Union. Mem? bers of the European Parliament (MEPs) can now enact legislation, amend the European Union budget, veto the nominee for the European Union Commission President, and censure the Commission. But, we know little about what determines MEPs' voting behavior. Do they vote according to their personal policy preferences? Do the EP parties force MEPs to toe the party line? And, when national party and EP party preferences conflict, which way do MEPs respond?to the principals who control their election (the national parties) or the principals who control their influence in the EP (the EP par? ties)? The results reported here show that national party policies are the strongest predictors of voting behav? ior in the EP.",
"How cohesive are political parties in the European Parliament? What coalitions form and why? The answers to these questions are central for understanding the impact of the European Parliament on European Union policies. These questions are also central in the study of legislative behaviour in general. We collected the total population of roll-call votes in the European Parliament, from the first elections in 1979 to the end of 2001 (over 11,500 votes). The data show growing party cohesion despite growing internal national and ideological diversity within the European party groups. We also find that the distance between parties on the left–right dimension is the strongest predictor of coalition patterns. We conclude that increased power of the European Parliament has meant increased power for the transnational parties, via increased internal party cohesion and inter-party competition.",
"We examined how voting behavior in the European Parliament changed after the European Union added ten new member-states in 2004. Using roll-call votes, we compared voting behavior in the first half of the Sixth European Parliament (July 2004-December 2006) with voting behavior in the previous Parliament (1999–2004). We looked at party cohesion, coalition formation, and the spatial map of voting by members of the European Parliament. We found stable levels of party cohesion and interparty coalitions that formed mainly around the left-right dimension. Ideological distance between parties was the strongest predictor of coalition preferences. Overall, the enlargement of the European Union in 2004 did not change the way politics works inside the European Parliament. We also looked at the specific case of the controversial Services Directive and found that ideology remained the main predictor of voting behavior, although nationality also played a role.",
"",
". The members of the European Parliament are elected in nationally organized and domestically oriented polls; however, in the Strasbourg Assembly they form transnational Party Groups or Europarties. The Rules of Procedure require such formations for the functioning of the Assembly, but Party Groups are much more than procedure requisites. They assemble elected representatives of national parties which share a consistent similarity in political ideologies and strategies. Party integration is a decisive development in the unification process of the Western European countries and it is expected to come from the Party Groups experience. The paper analyses such an issue by examining roll-call votes. Data include a systematic sample of votes cast during the first and second elected Parliament. The research looks into two fundamental items: (a) Party Group cohesion (an index of agreement is used to measure it); (b) voting line-ups of Party Groups. The aim is to point out the most important political cleavages and issues of the Community political system."
]
} |
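The Agreement Index criticized in the row above scores each roll-call vote from the size of the majority voting option within a group. A minimal sketch of the commonly used formulation (attributed to Hix, Noury, and Roland; the function name and example counts are ours) illustrates both the measure and its first drawback, that a purely random three-way split already requires explicit normalization to reach zero and nothing corrects for chance-level co-voting:

```python
def agreement_index(yes, no, abstain):
    """Agreement Index for one group's votes on one roll call:
    1 when all members choose the same option, 0 when the three
    options (Yes / No / Abstain) are chosen equally often."""
    total = yes + no + abstain
    top = max(yes, no, abstain)          # size of the largest voting bloc
    return (top - (total - top) / 2) / total

print(agreement_index(100, 0, 0))   # unanimous group → 1.0
print(agreement_index(10, 10, 10))  # three-way split → 0.0
print(agreement_index(60, 30, 10))  # → 0.4
```

The second drawback noted in the text is also visible here: the formula takes a single group's tallies, so it has no natural two-group (coalition) version without adaptation.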
1608.04917 | 2512506520 | We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff’s Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. | We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's @cite_27 . , is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. 
, is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios @cite_7 . In addition to , we employ Exponential Random Graph Models (ERGM) @cite_4 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. | {
"cite_N": [
"@cite_27",
"@cite_4",
"@cite_7"
],
"mid": [
"2153222072",
"2145402497",
"2278629362"
],
"abstract": [
"History Conceptual Foundations Uses and Kinds of Inference The Logic of Content Analysis Designs Unitizing Sampling Recording Data Languages Constructs for Inference Analytical Techniques The Use of Computers Reliability Validity A Practical Guide",
"We describe some of the capabilities of the ergm package and the statistical theory underlying it. This package contains tools for accomplishing three important, and inter-related, tasks involving exponential-family random graph models (ERGMs): estimation, simulation, and goodness of fit. More precisely, ergm has the capability of approximating a maximum likelihood estimator for an ERGM given a network data set; simulating new network data sets from a fitted ERGM using Markov chain Monte Carlo; and assessing how well a fitted ERGM does at capturing characteristics of a particular network data set.",
"What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered."
]
} |
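The defining property of Alpha stated in the row above, comparing observed agreement to the agreement expected by chance, can be made concrete with a small sketch for nominal data: alpha = 1 - D_o / D_e, where D_o is observed and D_e chance-expected disagreement, both computed from a coincidence matrix. This is an illustrative implementation for the simplest (nominal) case only; the function name and the toy vote data are ours, not from the paper:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of lists; each inner list holds the values that
    different raters (here: MEPs) assigned to one item (here: one vote);
    missing ratings are simply omitted."""
    o = Counter()                       # coincidence matrix o[(c, k)]
    for values in units:
        m = len(values)
        if m < 2:                       # unpairable units are ignored
            continue
        for c, k in permutations(values, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()                     # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())               # total number of pairable values
    d_o = sum(w for (c, k), w in o.items() if c != k)              # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)  # by chance
    return 1.0 - d_o / d_e

# Perfect agreement on two items, a split on a third:
votes = [["for", "for", "for"], ["against", "against"], ["for", "against"]]
print(round(krippendorff_alpha_nominal(votes), 3))  # → 0.5
```

Because the chance term D_e is built from the overall category frequencies, Alpha stays near zero for statistically unrelated raters regardless of how skewed the vote distribution is, which is precisely the correction the Agreement Index lacks.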
1608.04917 | 2512506520 | We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff’s Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. | The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. 
@cite_28 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies @cite_18 reach similar conclusions. @cite_35 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections @cite_33 . @cite_9 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network @cite_21 . Most existing research, as Larsson points out @cite_24 , focuses on the online behavior of leading political figures during election campaigns. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_33",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_24"
],
"mid": [
"2538660910",
"2184009014",
"",
"1769818149",
"2079373549",
"2042402925",
"2312324132"
],
"abstract": [
"The widespread adoption of social media for political communication creates unprecedented opportunities to monitor the opinions of large numbers of politically active individuals in real time. However, without a way to distinguish between users of opposing political alignments, conflicting signals at the individual level may, in the aggregate, obscure partisan differences in opinion that are important to political strategy. In this article we describe several methods for predicting the political alignment of Twitter users based on the content and structure of their political communication in the run-up to the 2010 U.S. midterm elections. Using a data set of 1,000 manually-annotated individuals, we find that a support vector machine (SVM) trained on hash tag metadata outperforms an SVM trained on the full text of users' tweets, yielding predictions of political affiliations with 91% accuracy. Applying latent semantic analysis to the content of users' tweets we identify hidden structure in the data strongly associated with political affiliation, but do not find that topic detection improves prediction performance. All of these content-based methods are outperformed by a classifier based on the segregated community structure of political information diffusion networks (95% accuracy). We conclude with a practical application of this machinery to web-based political advertising, and outline several approaches to public opinion monitoring based on the techniques developed herein.",
"We present a generic approach to real-time monitoring of the Twitter sentiment and show its application to the Bulgarian parliamentary elections in May 2013. Our approach is based on building high quality sentiment classification models from manually annotated tweets. In particular, we have developed a user-friendly annotation platform, a feature selection procedure based on maximizing prediction accuracy, and a binary SVM classifier extended with a neutral zone. We have also considerably improved the language detection in tweets. The evaluation results show that before and after the Bulgarian elections, negative sentiment about political parties prevailed. Both, the volume and the difference between the negative and positive tweets for individual parties closely match the election results. The later result is somehow surprising, but consistent with the prevailing negative sentiment during the elections.",
"",
"Large-scale data from social media have a significant potential to describe complex phenomena in the real world and to anticipate collective behaviors such as information spreading and social trends. One specific case study is the collective attention to the actions of political parties. Not surprisingly, researchers and stakeholders tried to correlate parties' presence on social media with their performances in elections. Despite the many efforts, results are still inconclusive since this kind of data is often very noisy and significant signals could be covered by (largely unknown) statistical fluctuations. In this paper we consider the number of tweets (tweet volume) of a party as a proxy of collective attention to the party, identify the dynamics of the volume, and show that this quantity has some information on the election outcome. We find that the distribution of the tweet volume for each party follows a log-normal distribution with a positive autocorrelation of the volume over short terms, which indicates the volume has large fluctuations of the log-normal distribution yet with a short-term tendency. Furthermore, by measuring the ratio of two consecutive daily tweet volumes, we find that the evolution of the daily volume of a party can be described by means of a geometric Brownian motion (i.e., the logarithm of the volume moves randomly with a trend). Finally, we determine the optimal period of averaging tweet volume for reducing fluctuations and extracting short-term tendencies. We conclude that the tweet volume is a good indicator of parties' success in the elections when considered over an optimal time window. Our study identifies the statistical nature of collective attention to political issues and sheds light on how to model the dynamics of collective attention in social media.",
"Transmitting messages in the most efficient way possible has always been one of politicians’ main concerns during electoral processes. Due to the rapidly growing number of users, online social networks have become ideal platforms for politicians to interact with their potential voters. Exploiting the available potential of these tools to maximize their influence over voters is one of politicians’ current challenges. To step in this direction, we have analyzed the user activity in the online social network Twitter, during the 2011 Spanish Presidential electoral process, and found that such activity is correlated with the election results. We introduce a new measure to study political sentiment in Twitter, which we call the relative support. We have also characterized user behavior by analyzing the structural and dynamical patterns of the complex networks emergent from the mention and retweet networks. Our results suggest that the collective attention is driven by a very small fraction of users. Furthermo...",
"The structure of the social networks in which individuals are embedded influences their political choices and therefore their voting behavior. Nowadays, social media represent a new channel for individuals to communicate, which, together with the availability of the data, makes it possible to analyze the online social network resulting from political conversations. Here, by taking advantage of the recently developed techniques to analyze complex systems, we map the communication patterns resulting from Spanish political conversations. We identify the different existing communities, building networks of communities, and finding that users cluster themselves in politically homogeneous networks. We found that while most of the collective attention was monopolized by politicians, traditional media accounts were still the preferred sources from which to propagate information. Finally, we propose methods to analyze the use of different languages, finding a clear trend from sympathizers of several political parties to overuse or infra-use each language. We conclude that, in the light of a social media analysis perspective, the political conversation is constrained by both ideology and language.",
"Although conceptual efforts have often suggested that the Internet harbors considerable possibilities to revolutionize political participation, empirical studies have often presented rather limited impacts in this regard. Nevertheless, novel online services such as Twitter are still pointed to as having potential to be employed by citizens and politicians alike. Utilizing state-of-the-art data collection methods, this study builds on the suggestions of previous research and gauges the degree to which EU parliamentarians make use of Twitter for so-called permanent campaigning. Specifically, the paper seeks to assess the degree to which Twitter use by European Parliament representatives can be described as being characterized by permanence—a concept related to the professionalization of political campaigns. Thus, by examining these uses outside of election periods, the study provides useful insights into the day-to-day uses of Twitter, contributing to the limited body of work focusing on the everyda..."
]
} |
1608.04917 | 2512506520 | We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff’s Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. | This paper continues our research on communities that MEPs (and their followers) form on Twitter @cite_37 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs receive. 
We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2402596232"
],
"abstract": [
"Analyzing information from social media to uncover underlying real-world phenomena is becoming widespread. The goal of this paper is to evaluate the role of Twitter in identifying communities of influence when the ‘ground truth’ is known. We consider the European Parliament (EP) Twitter users during a period of one year, in which they posted over 560,000 tweets. We represent the influence on Twitter by the number of retweets users get. We construct two networks of influence: (i) core, where both users are the EP members, and (ii) extended, where one user can be outside the EP. We compare the detected communities in both networks to the ‘ground truth’: the political group, country, and language of the EP members. The results show that the core network closely matches the political groups, while the extended network best reflects the country of origin. This provides empirical evidence that the formation of retweet networks and community detection are appropriate tools to reveal real-world relationships, and can be used to uncover hidden properties when the ‘ground truth’ is not known."
]
} |
1608.04917 | 2512506520 | We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff’s Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. | Lazer @cite_1 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. 
Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership @cite_5 , bill co-sponsoring @cite_15 , and roll-call votes @cite_19 . More recently, Dal @cite_2 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. @cite_11 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. | {
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"2131877490",
"",
"1692960356",
"2065914526",
"2103094896",
"2563692779"
],
"abstract": [
"What are the relational dimensions of politics? Does the way that people and organizations are connected to each other matter? Are our opinions affected by the people with whom we talk? Are legislators affected by lobbyists? Is the capacity of social movements to mobilize affected by the structure of societal networks? Powerful evidence in the literature answers each of these questions in the affirmative. However, compared to other paradigmatic foci, political science has invested tiny amounts of capacity in the study of the relevance of networks to political phenomena. Far more attention has been paid to the psychology of how people process information individually as opposed to collectively, and to the role that institutions play in structuring politics as opposed to the relational undergirdings of politics. A review of the flagship journals in political science reveals a dearth of articles on networks. Few, if any, doctoral programs include courses for which the primary focus is network-related ideas, and even the notion of a relational dependence in data is rarely mentioned in discussions of the assumptions embedded in the statistical methods that dominate political science. This gap is arguably the result of the boundaries among social science disciplines that emerged in the 1950s, when social network ideas found their home largely in sociology and anthropology while political science leaned toward statistical methods that assumed away interdependence among observations. Ironically, there is now a wave of interest in networks in political science that has originated partly in sociology and partly in that most distant of disciplines from political science, physics. The objective of this article is to provide an intellectual history of the study of social networks and political networks in particular, as well as the current trajectory of such work.",
"",
"We analyze the network of relations between parliament members according to their voting behavior. In particular, we examine the emergent community structure with respect to political coalitions and government alliances. We rely on tools developed in the Complex Network literature to explore the core of these communities and use their topological features to develop new metrics for party polarization, internal coalition cohesiveness and government strength. As a case study, we focus on the Chamber of Deputies of the Italian Parliament, for which we are able to characterize the heterogeneity of the ruling coalition as well as parties' specific contributions to the stability of the government over time. We find sharp contrast in the political debate which surprisingly does not imply a relevant structure based on established parties. We take a closer look at changes in the community structure after parties split up and their effect on the position of single deputies within communities. Finally, we introduce a way to track the stability of the government coalition over time that is able to discern the contribution of each member along with the impact of its possible defection. While our case study relies on the Italian parliament, whose relevance has come into the international spotlight in the present economic downturn, the methods developed here are entirely general and can therefore be applied to a multitude of other scenarios.",
"Network theory provides a powerful tool for the representation and analysis of complex systems of interacting agents. Here, we investigate the U.S. House of Representatives network of committees and subcommittees, with committees connected according to \"interlocks,\" or common membership. Analysis of this network reveals clearly the strong links between different committees, as well as the intrinsic hierarchical structure within the House as a whole. We show that network theory, combined with the analysis of roll-call votes using singular value decomposition, successfully uncovers political and organizational correlations between committees in the House without the need to incorporate other political information.",
"We study the United States Congress by constructing networks between Members of Congress based on the legislation that they cosponsor. Using the concept of modularity, we identify the community structure of Congressmen, who are connected via sponsorship/cosponsorship of the same legislation. This analysis yields an explicit and conceptually clear measure of political polarization, demonstrating a sharp increase in partisan polarization which preceded and then culminated in the 104th Congress (1995–1996), when Republicans took control of both chambers of Congress. Although polarization has since waned in the U.S. Senate, it remains at historically high levels in the House of Representatives.",
"Members of the European Parliament (MEPs) struggle to connect with European publics. Few European Union (EU) citizens feel connected to their MEPs. Levels of turnout for European Parliament (EP) elections are low, and EU citizens rarely retain EP-related news. For these and other reasons, we might expect MEPs to embrace social media platforms, like Twitter, that facilitate interactivity, spontaneity, personality, and informality. In reality, however, significant variation characterizes the timing and nature of MEPs’ engagement with Twitter. In this article, we document and seek to explain elements of this variation. We examine five dimensions of MEP engagement with Twitter: Do MEPs establish Twitter accounts? Are they early adopters? How frequently do they tweet? And how, exactly, do they use Twitter – do they engage in direct conversations via Twitter's @-reply functionality and/or refer followers to other content via retweeting? We find that MEPs’ approaches to Twitter are conditioned by specifi..."
]
} |
1608.05143 | 2515166224 | We propose a systematic approach for registering cross-source point clouds that come from different kinds of sensors. This task is especially challenging due to the presence of significant missing data, large variations in point density, scale difference, large proportion of noise, and outliers. The robustness of the method is attributed to the extraction of macro and micro structures. Macro structure is the overall structure that maintains similar geometric layout in cross-source point clouds. Micro structure is the element (e.g., local segment) being used to build the macro structure. We use graphs to organize these structures and convert the registration into graph matching. With a novel proposed descriptor, we conduct the graph matching in a discriminative feature space. The graph matching problem is solved by an improved graph matching solution, which considers global geometrical constraints. Robust cross-source registration results are obtained by incorporating the graph matching outcome with RANSAC and ICP refinements. Compared with eight state-of-the-art registration algorithms, the proposed method invariably outperforms them on Pisa Cathedral and other challenging cases. In order to compare quantitatively, we propose two challenging cross-source data sets and conduct comparative experiments on more than 27 cases, and the results show that we obtain much better performance than other methods. The proposed method also shows high accuracy in same-source data sets. | In contrast to these ICP-based methods, when scan pairs start in arbitrary initial poses, registration amounts to solving a global problem: finding the best aligning rigid transform over the 6DOF space of all possible rigid transforms comprised of translations and rotations. 
Since aligning rigid transforms are uniquely determined by three pairs of (non-degenerate) corresponding points, one popular strategy is to invoke RANSAC @cite_39 to find the aligning triplets of point pairs @cite_18 . This approach, however, regularly degrades to its worst-case @math complexity in the number @math of data samples in the presence of partial matching with low overlap. Various alternatives to RANSAC have been proposed to counter the cubic complexity, such as hierarchical representation in the normal space @cite_40 ; super-symmetric tensors to represent the constraints between the tuples @cite_3 ; stochastic non-linear optimization to reduce the distance between scan pairs @cite_46 ; branch-and-bound using pairwise distance invariants @cite_10 ; or evolutionary game theoretic matching @cite_21 @cite_41 . However, these methods are all sensitive to missing data. | {
"cite_N": [
"@cite_18",
"@cite_41",
"@cite_21",
"@cite_3",
"@cite_39",
"@cite_40",
"@cite_46",
"@cite_10"
],
"mid": [
"2066863160",
"2012596828",
"1749228504",
"2085068598",
"2085261163",
"2076032759",
"2064358676",
"2025062188"
],
"abstract": [
"In this paper, we propose a new method, the RANSAC-based DARCES method (data-aligned rigidity-constrained exhaustive search based on random sample consensus), which can solve the partially overlapping 3D registration problem without any initial estimation. For the noiseless case, the basic algorithm of our method can guarantee that the solution it finds is the true one, and its time complexity can be shown to be relatively low. An extra characteristic is that our method can be used even for the case that there are no local features in the 3D data sets.",
"During the last years a wide range of algorithms and devices have been made available to easily acquire range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging and even gaming. For this reason, these problems have been addressed extensively in the literature. Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: We adopt an evolutionary selection algorithm that seeks global agreement among surface points, while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario where model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent techniques at the state of the art.",
"Many successful feature detectors and descriptors exist for 2D intensity images. However, obtaining the same effectiveness in the domain of 3D objects has proven to be a more elusive goal. In fact, the smoothness often found in surfaces and the lack of texture information on the range images produced by conventional 3D scanners hinder both the localization of interesting points and the distinctiveness of their characterization in terms of descriptors. To overcome these limitations several approaches have been suggested, ranging from the simple enlargement of the area over which the descriptors are computed to the reliance on external texture information. In this paper we offer a change in perspective, where a game-theoretic matching technique that exploits global geometric consistency allows to obtain an extremely robust surface registration even when coupled with simple surface features exhibiting very low distinctiveness. In order to assess the performance of the whole approach we compare it with state-of-the-art alignment pipelines. Furthermore, we show that using the novel feature points with well-known alternative non-global matching techniques leads to poorer results.",
"Feature matching is a challenging problem at the heart of numerous computer graphics and computer vision applications. We present the SuperMatching algorithm for finding correspondences between two sets of features. It does so by considering triples or higher order tuples of points, going beyond the pointwise and pairwise approaches typically used. SuperMatching is formulated using a supersymmetric tensor representing an affinity metric that takes into account feature similarity and geometric constraints between features: Feature matching is cast as a higher order graph matching problem. SuperMatching takes advantage of supersymmetry to devise an efficient sampling strategy to estimate the affinity tensor, as well as to store the estimated tensor compactly. Matching is performed by an efficient higher order power iteration approach that takes advantage of this compact representation. Experiments on both synthetic and real data show that SuperMatching provides more accurate feature matching than other state-of-the-art approaches for a wide range of 2D and 3D features, with competitive computational cost.",
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing",
"Point cloud matching is a central problem in Object Modeling with applications in Computer Vision and Computer Graphics. Although the problem is well studied in the case when an initial estimate of the relative pose is known (fine matching), the problem becomes much more difficult when this a priori knowledge is not available (coarse matching). In this paper we introduce a novel technique to speed up coarse matching algorithms for point clouds. This new technique, called Hierarchical Normal Space Sampling (HNSS), extends Normal Space Sampling by grouping points hierarchically according to the distribution of their normal vectors. This hierarchy guides the search for corresponding points while staying free of user intervention. This permits to navigate through the huge search space taking advantage of geometric information and to stop when a sufficiently good initial pose is found. This initial pose can then be used as the starting point for any fine matching algorithm. Hierarchical Normal Space Sampling is adaptable to different searching strategies and shape descriptors. To illustrate HNSS, we present experiments using both synthetic and real data that show the computational complexity of the problem, the computation time reduction obtained by HNSS and the application potentials in combination with ICP.",
"In this paper, we propose a new algorithm for pairwise rigid point set registration with unknown point correspondences. The main properties of our method are noise robustness, outlier resistance and global optimal alignment. The problem of registering two point clouds is converted to a minimization of a nonlinear cost function. We propose a new cost function based on an inverse distance kernel that significantly reduces the impact of noise and outliers. In order to achieve a global optimal registration without the need of any initial alignment, we develop a new stochastic approach for global minimization. It is an adaptive sampling method which uses a generalized BSP tree and allows for minimizing nonlinear scalar fields over complex shaped search spaces like, e.g., the space of rotations. We introduce a new technique for a hierarchical decomposition of the rotation space in disjoint equally sized parts called spherical boxes. Furthermore, a procedure for uniform point sampling from spherical boxes is presented. Tests on a variety of point sets show that the proposed registration method performs very well on noisy, outlier corrupted and incomplete data. For comparison, we report how two state-of-the-art registration algorithms perform on the same data sets.",
"We present an algorithm for the automatic alignment of two 3D shapes (data and model), without any assumptions about their initial positions. The algorithm computes for each surface point a descriptor based on local geometry that is robust to noise. A small number of feature points are automatically picked from the data shape according to the uniqueness of the descriptor value at the point. For each feature point on the data, we use the descriptor values of the model to find potential corresponding points. We then develop a fast branch-and-bound algorithm based on distance matrix comparisons to select the optimal correspondence set and bring the two shapes into a coarse alignment. The result of our alignment algorithm is used as the initialization to ICP (iterative closest point) and its variants for fine registration of the data to the model. Our algorithm can be used for matching shapes that overlap only over parts of their extent, for building models from partial range scans, as well as for simple symmetry detection, and for matching shapes undergoing articulated motion."
]
} |
1608.05143 | 2515166224 | We propose a systematic approach for registering cross-source point clouds that come from different kinds of sensors. This task is especially challenging due to the presence of significant missing data, large variations in point density, scale difference, large proportion of noise, and outliers. The robustness of the method is attributed to the extraction of macro and micro structures. Macro structure is the overall structure that maintains similar geometric layout in cross-source point clouds. Micro structure is the element (e.g., local segment) being used to build the macro structure. We use graphs to organize these structures and convert the registration into graph matching. With a novel proposed descriptor, we conduct the graph matching in a discriminative feature space. The graph matching problem is solved by an improved graph matching solution, which considers global geometrical constraints. Robust cross-source registration results are obtained by incorporating the graph matching outcome with RANSAC and ICP refinements. Compared with eight state-of-the-art registration algorithms, the proposed method invariably outperforms them on Pisa Cathedral and other challenging cases. In order to compare quantitatively, we propose two challenging cross-source data sets and conduct comparative experiments on more than 27 cases, and the results show that we obtain much better performance than other methods. The proposed method also shows high accuracy in same-source data sets. | Following the concept of RANSAC, another kind of method is 4PCS @cite_5 , which uses a randomized alignment approach and the idea of planar congruent sets to compute the optimal global rigid transformation. The 4PCS method is widely used and has been extended to take into account uniform scale variations @cite_36 . However, these methods have a complexity of @math where @math denotes the size of the point clouds and @math is the set of candidate congruent 4-points. 
This becomes a serious limitation when the point clouds are large. To remove the quadratic complexity of the original 4PCS, @cite_33 extends it into a fast variant that requires only linear computation time. This method reports the points or spheres in @math and uses a smart index to quickly find the matching plane among all candidate congruent 4-point planes. One cross-source point cloud registration experiment is reported in @cite_33 . However, because these methods operate at the point level, they have clear limitations: they can easily produce sub-optimal transformations. The varying point density of the cross-source problem degrades the performance of 4PCS-based methods even further. | {
"cite_N": [
"@cite_36",
"@cite_5",
"@cite_33"
],
"mid": [
"2002500614",
"2064499898",
"2034950486"
],
"abstract": [
"The photorealistic acquisition of 3D objects often requires color information from digital photography to be mapped on the acquired geometry, in order to obtain a textured 3D model. This paper presents a novel fully automatic 2D 3D global registration pipeline consisting of several stages that simultaneously register the input image set on the corresponding 3D object. The first stage exploits Structure From Motion (SFM) on the image set in order to generate a sparse point cloud. During the second stage, this point cloud is aligned to the 3D object using an extension of the 4 Point Congruent Set (4PCS) algorithm for the alignment of range maps. The extension accounts for models with different scales and unknown regions of overlap. In the last processing stage a global refinement algorithm based on mutual information optimizes the color projection of the aligned photos on the 3D object, in order to obtain high quality textures. The proposed registration pipeline is general, capable of dealing with small and big objects of any shape, and robust. We present results from six real cases, evaluating the quality of the final colors mapped onto the 3D object. A comparison with a ground truth dataset is also presented.",
"We introduce 4PCS, a fast and robust alignment scheme for 3D point sets that uses wide bases, which are known to be resilient to noise and outliers. The algorithm allows registering raw noisy data, possibly contaminated with outliers, without pre-filtering or denoising the data. Further, the method significantly reduces the number of trials required to establish a reliable registration between the underlying surfaces in the presence of noise, without any assumptions about starting alignment. Our method is based on a novel technique to extract all coplanar 4-points sets from a 3D point set that are approximately congruent, under rigid transformation, to a given set of coplanar 4-points. This extraction procedure runs in roughly O(n2 + k) time, where n is the number of candidate points and k is the number of reported 4-points sets. In practice, when noise level is low and there is sufficient overlap, using local descriptors the time complexity reduces to O(n + k). We also propose an extension to handle similarity and affine transforms. Our technique achieves an order of magnitude asymptotic acceleration compared to common randomized alignment techniques. We demonstrate the robustness of our algorithm on several sets of multiple range scans with varying degree of noise, outliers, and extent of overlap.",
"Data acquisition in large-scale scenes regularly involves accumulating information across multiple scans. A common approach is to locally align scan pairs using Iterative Closest Point (ICP) algorithm (or its variants), but requires static scenes and small motion between scan pairs. This prevents accumulating data across multiple scan sessions and or different acquisition modalities (e.g., stereo, depth scans). Alternatively, one can use a global registration algorithm allowing scans to be in arbitrary initial poses. The state-of-the-art global registration algorithm, 4PCS, however has a quadratic time complexity in the number of data points. This vastly limits its applicability to acquisition of large environments. We present Super 4PCS for global pointcloud registration that is optimal, i.e., runs in linear time (in the number of data points) and is also output sensitive in the complexity of the alignment problem based on the (unknown) overlap across scan pairs. Technically, we map the algorithm as an 'instance problem' and solve it efficiently using a smart indexing data organization. The algorithm is simple, memory-efficient, and fast. We demonstrate that Super 4PCS results in significant speedup over alternative approaches and allows unstructured efficient acquisition of scenes at scales previously not possible. Complete source code and datasets are available for research use at http: geometry.cs.ucl.ac.uk projects 2014 super4PCS ."
]
} |
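The key geometric fact behind 4PCS @cite_5 — that the intersection ratios of a coplanar 4-point base are invariant under rigid transformation, so congruent bases can be shortlisted by comparing two scalars — can be sketched as follows (a minimal illustration, not the actual 4PCS implementation; all names are hypothetical):

```python
import numpy as np

def base_ratios(a, b, c, d):
    """Intersection ratios of segments ab and cd (a coplanar 4-point base).

    Solves a + r1*(b - a) = c + r2*(d - c) in the least-squares sense and
    returns (r1, r2), which are invariant under rigid transformations.
    """
    A = np.column_stack([b - a, -(d - c)])
    r1, r2 = np.linalg.lstsq(A, c - a, rcond=None)[0]
    return float(r1), float(r2)

# A coplanar base in 3D: segment ab crosses segment cd at (1, 0, 0).
a, b = np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])
c, d = np.array([1.0, -1.0, 0.0]), np.array([1.0, 3.0, 0.0])
r = base_ratios(a, b, c, d)

# Apply an arbitrary rigid transform (rotation + translation).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, -1.0, 5.0])
r_T = base_ratios(*(R @ p + t for p in (a, b, c, d)))

assert np.allclose(r, r_T)  # the ratios survive the rigid transform
```

Because the two ratios are preserved by any rigid motion, candidate congruent bases can be filtered by comparing these scalars before any full transformation is estimated, which is what makes the wide-base scheme robust to noise and outliers.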
1608.05143 | 2515166224 | We propose a systematic approach for registering cross-source point clouds that come from different kinds of sensors. This task is especially challenging due to the presence of significant missing data, large variations in point density, scale difference, large proportion of noise, and outliers. The robustness of the method is attributed to the extraction of macro and micro structures. Macro structure is the overall structure that maintains similar geometric layout in cross-source point clouds. Micro structure is the element (e.g., local segment) being used to build the macro structure. We use graph to organize these structures and convert the registration into graph matching. With a novel proposed descriptor, we conduct the graph matching in a discriminative feature space. The graph matching problem is solved by an improved graph matching solution, which considers global geometrical constraints. Robust cross source registration results are obtained by incorporating graph matching outcome with RANSAC and ICP refinements. Compared with eight state-of-the-art registration algorithms, the proposed method invariably outperforms on Pisa Cathedral and other challenging cases. In order to compare quantitatively, we propose two challenging cross-source data sets and conduct comparative experiments on more than 27 cases, and the results show we obtain much better performance than other methods. The proposed method also shows high accuracy in same-source data sets. | One of the mathematical tools typically used for registration is mutual information (MI), which captures the non-linear correlations between the point clouds and the geometric properties of the target surface. The authors in @cite_8 use ICP and MI to build a one-to-one correspondence between a magnetic resonance (MR) surface and a laser-scanned cortical surface; however, this method is highly dependent on initialization and overlap rate.
The work in @cite_28 registers unstructured 3D point clouds by using hierarchical k-means to form a set of codewords and applying a shrinkage estimator to maximize the MI and obtain the final rigid transformation. Cross-correlation of the horizontal cross-section images of the two point clouds is used in @cite_20 to coarsely register the point clouds, and ICP is then used to refine the coarse result. These MI-based methods perform poorly when data is missing, because missing data causes the MI of the two point clouds to differ from the outset. | {
"cite_N": [
"@cite_28",
"@cite_20",
"@cite_8"
],
"mid": [
"2001613026",
"2276426632",
"1514327714"
],
"abstract": [
"This paper reports a novel mutual information (MI) based algorithm for automatic registration of unstructured 3D point clouds comprised of co-registered 3D lidar and camera imagery. The proposed method provides a robust and principled framework for fusing the complementary information obtained from these two different sensing modalities. High-dimensional features are extracted from a training set of textured point clouds (scans) and hierarchical k-means clustering is used to quantize these features into a set of codewords. Using this codebook, any new scan can be represented as a collection of codewords. Under the correct rigid-body transformation aligning two overlapping scans, the MI between the codewords present in the scans is maximized. We apply a James-Stein-type shrinkage estimator to estimate the true MI from the marginal and joint histograms of the codewords extracted from the scans. Experimental results using scans obtained by a vehicle equipped with a 3D laser scanner and an omnidirectional camera are used to validate the robustness of the proposed algorithm over a wide range of initial conditions. We also show that the proposed method works well with 3D data alone.",
"Abstract. Registration of point clouds is a necessary step to obtain a complete overview of scanned objects of interest. The majority of the current registration approaches target the general case where a full range of the registration parameters search space is assumed and searched. It is very common in urban objects scanning to have leveled point clouds with small roll and pitch angles and with also a small height differences. For such scenarios the registration search problem can be handled faster to obtain a coarse registration of two point clouds. In this paper, a fully automatic approach is proposed for registration of approximately leveled point clouds. The proposed approach estimates a coarse registration based on three registration parameters and then conducts a fine registration step using iterative closest point approach. The approach has been tested on three data sets of different areas and the achieved registration results validate the significance of the proposed approach.",
"An inter-modality registration algorithm that uses textured point clouds and mutual information is presented within the context of a new physical-space to image-space registration technique for image-guided neurosurgery. The approach uses a laser range scanner that acquires textured geometric data of the brain surface intraoperatively and registers the data to grayscale encoded surfaces of the brain extracted from gadolinium enhanced MR tomograms. Intra-modality as well as inter-modality registration simulations are presented to evaluate the new framework. The results demonstrate alignment accuracies on the order of the resolution of the scanned surfaces (i.e. submillimetric). In addition, data are presented from laser scanning a brain's surface during surgery. The results reported support this approach as a new means for registration and tracking of the brain surface during surgery."
]
} |
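The MI-based alignment objective described above can be illustrated with a toy joint histogram of codeword co-occurrences (a hedged sketch only; the actual method in @cite_28 uses high-dimensional features and a James-Stein-type shrinkage estimator, neither of which is reproduced here):

```python
import numpy as np

def mutual_information(joint):
    """MI (in nats) computed from a joint codeword co-occurrence histogram."""
    p = joint / joint.sum()                  # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginal of scan A codewords
    py = p.sum(axis=0, keepdims=True)        # marginal of scan B codewords
    nz = p > 0                               # avoid log(0) on empty cells
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Perfectly aligned scans: codewords co-occur on the diagonal -> high MI.
aligned = np.eye(4) * 25.0
# Misaligned scans: co-occurrences spread uniformly -> zero MI.
misaligned = np.full((4, 4), 25.0 / 4.0)

assert mutual_information(aligned) > mutual_information(misaligned)
```

Under the correct transformation the codewords present in the two scans co-occur consistently, so the registration can be posed as maximizing this MI over candidate rigid-body transformations.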
1608.05143 | 2515166224 | We propose a systematic approach for registering cross-source point clouds that come from different kinds of sensors. This task is especially challenging due to the presence of significant missing data, large variations in point density, scale difference, large proportion of noise, and outliers. The robustness of the method is attributed to the extraction of macro and micro structures. Macro structure is the overall structure that maintains similar geometric layout in cross-source point clouds. Micro structure is the element (e.g., local segment) being used to build the macro structure. We use graph to organize these structures and convert the registration into graph matching. With a novel proposed descriptor, we conduct the graph matching in a discriminative feature space. The graph matching problem is solved by an improved graph matching solution, which considers global geometrical constraints. Robust cross source registration results are obtained by incorporating graph matching outcome with RANSAC and ICP refinements. Compared with eight state-of-the-art registration algorithms, the proposed method invariably outperforms on Pisa Cathedral and other challenging cases. In order to compare quantitatively, we propose two challenging cross-source data sets and conduct comparative experiments on more than 27 cases, and the results show we obtain much better performance than other methods. The proposed method also shows high accuracy in same-source data sets. | Another class of transformed methods is the feature-based methods, which extract features from the 3D point clouds and transfer registration from Euclidean space into feature space. Typical 3D feature extraction methods (a tutorial on 3D features is available at http://robotica.unileon.es/index.php/PCL/OpenNI_tutorial_4:_3D_object_recognition_(descriptors)) are FPFH @cite_32 , ESF @cite_7 , Spin Image @cite_30 , and SHOT @cite_35 .
These feature-based methods produce excellent results on same-source point clouds. However, it is very difficult to reliably extract similar features from cross-source point clouds, and these methods always fail in this situation. This is because the extracted features can exhibit large discrepancies across sources and therefore cannot be used for registration. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_32",
"@cite_7"
],
"mid": [
"168966905",
"2160643963",
"2160821342",
"2041469568"
],
"abstract": [
"",
"This paper deals with local 3D descriptors for surface matching. First, we categorize existing methods into two classes: Signatures and Histograms. Then, by discussion and experiments alike, we point out the key issues of uniqueness and repeatability of the local reference frame. Based on these observations, we formulate a novel comprehensive proposal for surface representation, which encompasses a new unique and repeatable local reference frame as well as a new 3D descriptor. The latter lays at the intersection between Signatures and Histograms, so as to possibly achieve a better balance between descriptiveness and robustness. Experiments on publicly available datasets as well as on range scans obtained with Spacetime Stereo provide a thorough validation of our proposal.",
"In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).",
"This work addresses the problem of real-time 3D shape based object class recognition, its scaling to many categories and the reliable perception of categories. A novel shape descriptor for partial point clouds based on shape functions is presented, capable of training on synthetic data and classifying objects from a depth sensor in a single partial view in a fast and robust manner. The classification task is stated as a 3D retrieval task finding the nearest neighbors from synthetically generated views of CAD-models to the sensed point cloud with a Kinect-style depth sensor. The presented shape descriptor shows that the combination of angle, point-distance and area shape functions gives a significant boost in recognition rate against the baseline descriptor and outperforms the state-of-the-art descriptors in our experimental evaluation on a publicly available dataset of real-world objects in table scene contexts with up to 200 categories."
]
} |
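One of the shape functions underlying descriptors such as ESF @cite_7 — a normalized histogram of random pairwise point distances — can be sketched in a few lines (an illustrative toy, not the full ESF descriptor; function names are hypothetical). Because the histogram is normalized, it tolerates the density variation mentioned above, even though in practice cross-source discrepancies still break such descriptors:

```python
import numpy as np

def d2_descriptor(points, r_max, n_pairs=5000, bins=16, seed=0):
    """Normalized histogram of random pairwise point distances: the
    'point-distance' shape function used by descriptors such as ESF."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, r_max))
    return hist / n_pairs

def unit_sphere(n_points, seed):
    """Uniform random sampling of the unit sphere."""
    v = np.random.default_rng(seed).normal(size=(n_points, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# The same surface sampled at very different densities (5000 vs 300
# points) still yields nearly identical descriptors after normalization.
dense = d2_descriptor(unit_sphere(5000, seed=1), r_max=2.0)
sparse = d2_descriptor(unit_sphere(300, seed=2), r_max=2.0)
assert np.abs(dense - sparse).sum() < 0.25
```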
1608.05143 | 2515166224 | We propose a systematic approach for registering cross-source point clouds that come from different kinds of sensors. This task is especially challenging due to the presence of significant missing data, large variations in point density, scale difference, large proportion of noise, and outliers. The robustness of the method is attributed to the extraction of macro and micro structures. Macro structure is the overall structure that maintains similar geometric layout in cross-source point clouds. Micro structure is the element (e.g., local segment) being used to build the macro structure. We use graph to organize these structures and convert the registration into graph matching. With a novel proposed descriptor, we conduct the graph matching in a discriminative feature space. The graph matching problem is solved by an improved graph matching solution, which considers global geometrical constraints. Robust cross source registration results are obtained by incorporating graph matching outcome with RANSAC and ICP refinements. Compared with eight state-of-the-art registration algorithms, the proposed method invariably outperforms on Pisa Cathedral and other challenging cases. In order to compare quantitatively, we propose two challenging cross-source data sets and conduct comparative experiments on more than 27 cases, and the results show we obtain much better performance than other methods. The proposed method also shows high accuracy in same-source data sets. | Torki and Elgammal @cite_42 use local features in images to learn an image-manifold representation. The authors first learn a feature embedding that preserves both the spatial structure of the features and the local appearance similarity. An out-of-sample solution is then used to embed the features from new images.
Similarly, Yuan @cite_31 transforms every point in a point cloud into a shape representation called the Schrodinger distance transform (SDT), casting the point-set matching problem as a shape registration problem. This is achieved by solving a static Schrodinger equation in place of the corresponding static Hamilton-Jacobi equation. The SDT representation is an analytic expression which, following the theoretical physics literature, can be normalized to have unit L2 norm. The outline of this method is: point sets @math SDTs @math minimization of the geodesic distance. | {
"cite_N": [
"@cite_31",
"@cite_42"
],
"mid": [
"2012048508",
"2026164926"
],
"abstract": [
"In this paper, we cast the problem of point cloud matching as a shape matching problem by transforming each of the given point clouds into a shape representation called the Schrodinger distance transform (SDT) representation. This is achieved by solving a static Schrodinger equation instead of the corresponding static Hamilton-Jacobi equation in this setting. The SDT representation is an analytic expression and following the theoretical physics literature, can be normalized to have unit 2 norm -- making it a square-root density, which is identified with a point on a unit Hilbert sphere, whose intrinsic geometry is fully known. The Fisher-Rao metric, a natural metric for the space of densities leads to analytic expressions for the geodesic distance between points on this sphere. In this paper, we use the well known Riemannian framework never before used for point cloud matching, and present a novel matching algorithm. We pose point set matching under rigid and non-rigid transformations in this framework and solve for the transformations using standard nonlinear optimization techniques. Finally, to evaluate the performance of our algorithm -- dubbed SDTM -- we present several synthetic and real data examples along with extensive comparisons to state-of-the-art techniques. The experiments show that our algorithm outperforms state-of the-art point set registration algorithms on many quantitative metrics.",
"Local features have proven very useful for recognition. Manifold learning has proven to be a very powerful tool in data analysis. However, manifold learning application for images are mainly based on holistic vectorized representations of images. The challenging question that we address in this paper is how can we learn image manifolds from a punch of local features in a smooth way that captures the feature similarity and spatial arrangement variability between images. We introduce a novel framework for learning a manifold representation from collections of local features in images. We first show how we can learn a feature embedding representation that preserves both the local appearance similarity as well as the spatial structure of the features. We also show how we can embed features from a new image by introducing a solution for the out-of-sample that is suitable for this context. By solving these two problems and defining a proper distance measure in the feature embedding space, we can reach an image manifold embedding space."
]
} |
1608.04694 | 2509365155 | Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common that the software is used suboptimally. In a typical scenario, accuracy requirements are imposed, and attained through suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and there must be formulas available for error bounds and computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by expert users, the parameters selected by our tool yield reductions in the time-to-solution ranging between 10% and 60%. In other words, for the typical scenario where a fixed number of core-hours are granted and simulations of a fixed number of timesteps are to be run, usage of our tool may allow up to twice as many simulations. While we develop our ideas using LAMMPS as computational framework and use the PPPM method for dispersion as case study, the methodology is general and valid for a range of software tools and methods. | The list of available molecular dynamics suites is also large. Among others, it is worth mentioning GROMACS @cite_7 , NAMD @cite_2 , and CHARMM @cite_5 . While in our case study we consider LAMMPS, the approach is generic and fully portable to any other suite. | {
"cite_N": [
"@cite_5",
"@cite_7",
"@cite_2"
],
"mid": [
"2132262459",
"1981021420",
"2150981663"
],
"abstract": [
"CHARMM (Chemistry at HARvard Molecular Mechanics) is a highly versatile and widely used molecu- lar simulation program. It has been developed over the last three decades with a primary focus on molecules of bio- logical interest, including proteins, peptides, lipids, nucleic acids, carbohydrates, and small molecule ligands, as they occur in solution, crystals, and membrane environments. For the study of such systems, the program provides a large suite of computational tools that include numerous conformational and path sampling methods, free energy estima- tors, molecular minimization, dynamics, and analysis techniques, and model-building capabilities. The CHARMM program is applicable to problems involving a much broader class of many-particle systems. Calculations with CHARMM can be performed using a number of different energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potential energy functions with explicit solvent and various boundary conditions, to implicit solvent and membrane models. The program has been ported to numer- ous platforms in both serial and parallel architectures. This article provides an overview of the program as it exists today with an emphasis on developments since the publication of the original CHARMM article in 1983.",
"Abstract A parallel message-passing implementation of a molecular dynamics (MD) program that is useful for bio(macro)molecules in aqueous environment is described. The software has been developed for a custom-designed 32-processor ring GROMACS (GROningen MAchine for Chemical Simulation) with communication to and from left and right neighbours, but can run on any parallel system onto which a a ring of processors can be mapped and which supports PVM-like block send and receive calls. The GROMACS software consists of a preprocessor, a parallel MD and energy minimization program that can use an arbitrary number of processors (including one), an optional monitor, and several analysis tools. The programs are written in ANSI C and available by ftp (information: gromacs@chem.rug.nl). The functionality is based on the GROMOS (GROningen MOlecular Simulation) package (van Gunsteren and Berendsen, 1987; BIOMOS B.V., Nijenborgh 4, 9747 AG Groningen). Conversion programs between GROMOS and GROMACS formats are included. The MD program can handle rectangular periodic boundary conditions with temperature and pressure scaling. The interactions that can be handled without modification are variable non-bonded pair interactions with Coulomb and Lennard-Jones or Buckingham potentials, using a twin-range cut-off based on charge groups, and fixed bonded interactions of either harmonic or constraint type for bonds and bond angles and either periodic or cosine power series interactions for dihedral angles. Special forces can be added to groups of particles (for non-equilibrium dynamics or for position restraining) or between particles (for distance restraints). The parallelism is based on particle decomposition. Interprocessor communication is largely limited to position and force distribution over the ring once per time step.",
"NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomo- lecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This article, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C and based on Charm parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Finally, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, for example, the Tcl scripting language. The article also provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics sequence analysis software VMD and the grid computing collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu."
]
} |
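The selection idea described in the abstract — pick the cheapest configuration whose error bound meets the imposed accuracy requirement — can be sketched with toy stand-ins for the error-bound and complexity formulas (the functions below are hypothetical illustrations, not the actual PPPM expressions):

```python
import itertools

# Hypothetical stand-ins for a method's error-bound and cost formulas:
# a finer mesh (larger n) and a higher interpolation order p tighten the
# error bound but raise the modeled work per timestep.
def error_bound(n, p):
    return 1.0 / n ** p

def cost(n, p):
    return n ** 3 + 50 * p * n  # mesh work + interpolation work

def select_parameters(tolerance, n_grid, p_grid):
    """Return the cheapest (n, p) whose error bound meets the tolerance."""
    feasible = [(n, p) for n, p in itertools.product(n_grid, p_grid)
                if error_bound(n, p) <= tolerance]
    return min(feasible, key=lambda cfg: cost(*cfg), default=None)

best = select_parameters(1e-4, n_grid=range(2, 65), p_grid=range(1, 8))
```

With real (non-toy) formulas the feasible set is typically searched with the method's actual accuracy and complexity models, but the structure of the problem — constrained minimization of time-to-solution over exposed parameters — is the same.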
1608.04689 | 2508757244 | Explicit high-order feature interactions efficiently capture essential structural knowledge about the data of interest and have been used for constructing generative models. We present a supervised discriminative High-Order Parametric Embedding (HOPE) approach to data visualization and compression. Compared to deep embedding models with complicated deep architectures, HOPE generates more effective high-order feature mapping through an embarrassingly simple shallow model. Furthermore, two approaches to generating a small number of exemplars conveying high-order interactions to represent large-scale data sets are proposed. These exemplars in combination with the feature mapping learned by HOPE effectively capture essential data variations. Moreover, through HOPE, these exemplars are employed to increase the computational efficiency of kNN classification for fast information retrieval by thousands of times. For classification in two-dimensional embedding space on MNIST and USPS datasets, our shallow method HOPE with simple Sigmoid transformations significantly outperforms state-of-the-art supervised deep embedding models based on deep neural networks, and even achieves a historically low test error rate of 0.65% in two-dimensional space on MNIST, which demonstrates the representational efficiency and power of supervised shallow models with high-order feature interactions. | High-order feature interactions have been studied for building more powerful generative models such as Boltzmann Machines and autoencoders @cite_8 @cite_11 @cite_17 @cite_4 @cite_21 . Factorization Machine (FM) @cite_19 and FHIM @cite_1 are similar to the version of HOPE with only linear projection, but they use feature interactions for classification, regression, or feature selection. None of this previous research was conducted in the context of data embedding, visualization, or compression, and it therefore has different objective functions or parametric forms.
In particular, our joint learning approach is completely different from previous methods, and to the best of our knowledge, ours is the first work to successfully model input feature interactions of order higher than two for practical supervised embedding. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_19",
"@cite_11"
],
"mid": [
"1983334819",
"2132381605",
"2161000554",
"2027450747",
"",
"",
"2007726935"
],
"abstract": [
"Learning a generative model of natural images is a useful way of extracting features that capture interesting regularities. Previous work on learning such models has focused on methods in which the latent features are used to determine the mean and variance of each pixel independently, or on methods in which the hidden units determine the covariance matrix of a zero-mean Gaussian distribution. In this work, we propose a probabilistic model that combines these two approaches into a single framework. We represent each image using one set of binary latent features that model the image-specific covariance and a separate set that model the mean. We show that this approach provides a probabilistic framework for the widely used simple-cell complex-cell architecture, it produces very realistic samples of natural images and it extracts features that yield state-of-the-art recognition accuracy on the challenging CIFAR 10 dataset.",
"Methods and systems for training a neural network include pre-training a bi-linear, tensor-based network, separately pre-training an auto-encoder, and training the bi-linear, tensor-based network and auto-encoder jointly. Pre-training the bi-linear, tensor-based network includes calculating high-order interactions between an input and a transformation to determine a preliminary network output and minimizing a loss function to pre-train network parameters. Pre-training the auto-encoder includes calculating high-order interactions of a corrupted real network output, determining an auto-encoder output using high-order interactions of the corrupted real network output, and minimizing a loss function to pre-train auto-encoder parameters.",
"Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model.",
"Identifying interpretable discriminative high-order feature interactions given limited training data in high dimensions is challenging in both machine learning and data mining. In this paper, we propose a factorization based sparse learning framework termed FHIM for identifying high-order feature interactions in linear and logistic regression models, and study several optimization methods for solving them. Unlike previous sparse learning methods, our model FHIM recovers both the main effects and the interaction terms accurately without imposing tree-structured hierarchical constraints. Furthermore, we show that FHIM has oracle properties when extended to generalized linear regression models with pairwise interactions. Experiments on simulated data show that FHIM outperforms the state-of-the-art sparse lear-ning techniques. Further experiments on our experimentally generated data from patient blood samples using a novel SOMAmer (Slow Off-rate Modified Aptamer) technology show that, FHIM performs blood-based cancer diagnosis and bio-marker discovery for Renal Cell Carcinoma much better than other competing methods, and it identifies interpretable block-wise high-order gene interactions predictive of cancer stages of samples. A literature survey shows that the interactions identified by FHIM play important roles in cancer development.",
"",
"",
"Recent work on unsupervised feature learning has shown that learning on polynomial expansions of input patches, such as on pair-wise products of pixel intensities, can improve the performance of feature learners and extend their applicability to spatio-temporal problems, such as human action recognition or learning of image transformations. Learning of such higher order features, however, has been much more difficult than standard dictionary learning, because of the high dimensionality and because standard learning criteria are not applicable. Here, we show how one can cast the problem of learning higher-order features as the problem of learning a parametric family of manifolds. This allows us to apply a variant of a de-noising autoencoder network to learn higher-order features using simple gradient based optimization. Our experiments show that the approach can outperform existing higher-order models, while training and inference are exact, fast, and simple."
]
} |
1608.04484 | 2511341304 | A new successive encoding scheme is proposed to effectively generate a random vector with prescribed joint density that induces a latent Gaussian tree structure. We prove the accuracy of such encoding scheme in terms of vanishing total variation distance between the synthesized and desired statistics. The encoding algorithm relies on the learned structure of tree to use minimal number of common random variables to synthesize the desired density, with compact modeling complexity. We characterize the achievable rate region for the rate tuples of multi-layer latent Gaussian tree, through which the number of bits needed to simulate such Gaussian joint density are determined. The random sources used in our algorithm are the latent variables at the top layer of tree along with Bernoulli sign inputs, which capture the correlation signs between the variables. In latent Gaussian trees the pairwise correlation signs between the variables are intrinsically unrecoverable. Such information is vital since it completely determines the direction in which two variables are associated. Given the derived achievable rate region for synthesis of latent Gaussian trees, we also quantify the amount of information loss due to unrecoverable sign information. It is shown that maximizing the achievable rate-region is equivalent to finding the worst case density for Bernoulli sign inputs where maximum amount of sign information is lost. | There are several works that extend the classical bi-variate synthesis problem in Wyner's study to more general scenarios. In @cite_3 @cite_19 @cite_16 , the authors aim to define the common information of @math dependent random variables, to further address the same question in this setting. A lower bound on such generalized common information is obtained in @cite_4 . Also, the common information for a special case with @math Gaussian variables with homogeneous pairwise correlations is obtained. 
They resort to the same scenario as Wyner @cite_0 did, i.e., considering a single random variable to define such common randomness. Veld and Gastpar @cite_9 characterize this quantity for a more general set of Gaussian vectors with circulant covariance matrices. Also, in @cite_5 the authors completely characterize the common information between two jointly Gaussian vectors, as a function of certain singular values that are related to both the joint and marginal covariance matrices of the two Gaussian random vectors. However, they still divide the random vector into two groups, which makes it similar to Wyner's scenario. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_16"
],
"mid": [
"1662534456",
"2342676922",
"2553864996",
"2141432745",
"2043976992",
"1901718207",
"2244961551"
],
"abstract": [
"We prove that by imposing a conditional mutual independence constraint and a marginalisation constraint, the almost entropic region can be completely characterised by Shannon-type information inequalities. Such a property is applied to obtain an explicit lower bound on the generalised Wyner common information.",
"We study a caching problem that resembles a lossy Gray-Wyner network: A source produces vector samples from a Gaussian distribution, but the user is interested in the samples of only one component. The encoder first sends a cache message without any knowledge of the user's preference. Upon learning her request, a second message is provided in the update phase so as to attain the desired fidelity on that component.",
"Wyner defined the notion of common information of two discrete random variables as the minimum of I(W; X, Y) where W induces conditional independence between X and Y. Its generalization to multiple dependent random variables revealed a surprising monotone property in the number of variables. Motivated by this monotonicity property, this paper explores the application of Wyner's common information to inference problems and its connection with other performance metrics. A central question is that under what conditions Wyner's common information captures the entire information contained in the observations about the inference object under a simple Bayesian model. For infinitely exchangeable random variables, it is shown using the de Finetti-Hewitt-Savage theorem that the common information is asymptotically equal to the information of the inference object. For finite exchangeable random variables, such conclusion is no longer true even for infinitely extendable sequences. However, for some special cases, including both the binary and the Gaussian cases, concrete connection between common information and inference performance metrics can be established even for finite samples.",
"The problem of finding a meaningful measure of the \"common information\" or \"common randomness' of two discrete dependent random variables X,Y is studied. The quantity C(X; Y) is defined as the minimum possible value of I(X, Y; W) where the minimum is taken over all distributions defining an auxiliary random variable W W , a finite set, such that X, Y are conditionally independent given W . The main result of the paper is contained in two theorems which show that C(X; Y) is i) the minimum R_0 such that a sequence of independent copies of (X,Y) can be efficiently encoded into three binary streams W_0, W_1,W_2 with rates R_0,R_1,R_2 , respectively, [ R_i = H(X, Y)] and X recovered from (W_0, W_1) , and Y recovered from (W_0, W_2) , i.e., W_0 is the common stream; ii) the minimum binary rate R of the common input to independent processors that generate an approximation to X,Y .",
"",
"We study secure source-coding with causal disclosure, under the Gaussian distribution. The optimality of Gaussian auxiliary random variables is shown in various scenarios. We explicitly characterize the tradeoff between the rates of communication and secret key. This tradeoff is the result of a mutual information optimization under Markov constraints. As a corollary, we deduce a general formula for Wyner's Common Information in the Gaussian setting.",
"Wyner’s common information was originally defined for a pair of dependent discrete random variables. Its significance is largely reflected in, and also confined to, several existing interpretations in various source coding problems. This paper attempts to expand its practical significance by providing a new operational interpretation. In the context of the Gray–Wyner network, it is established that Wyner’s common information has a new lossy source coding interpretation. Specifically, it is established that, under suitable conditions, Wyner’s common information equals to the smallest common message rate when the total rate is arbitrarily close to the rate distortion function with joint decoding for the Gray–Wyner network. A surprising observation is that such equality holds independent of the values of distortion constraints as long as the distortions are within some distortion region. The new lossy source coding interpretation provides the first meaningful justification for defining Wyner’s common information for continuous random variables and the result can also be extended to that of multiple variables. Examples are given for characterizing the rate distortion region for the Gray–Wyner lossy source coding problem and for identifying conditions under which Wyner’s common information equals that of the smallest common rate. As a by-product, the explicit expression for the common information between a pair of Gaussian random variables is obtained."
]
} |
1608.04484 | 2511341304 | A new successive encoding scheme is proposed to effectively generate a random vector with prescribed joint density that induces a latent Gaussian tree structure. We prove the accuracy of such encoding scheme in terms of vanishing total variation distance between the synthesized and desired statistics. The encoding algorithm relies on the learned structure of tree to use minimal number of common random variables to synthesize the desired density, with compact modeling complexity. We characterize the achievable rate region for the rate tuples of multi-layer latent Gaussian tree, through which the number of bits needed to simulate such Gaussian joint density are determined. The random sources used in our algorithm are the latent variables at the top layer of tree along with Bernoulli sign inputs, which capture the correlation signs between the variables. In latent Gaussian trees the pairwise correlation signs between the variables are intrinsically unrecoverable. Such information is vital since it completely determines the direction in which two variables are associated. Given the derived achievable rate region for synthesis of latent Gaussian trees, we also quantify the amount of information loss due to unrecoverable sign information. It is shown that maximizing the achievable rate-region is equivalent to finding the worst case density for Bernoulli sign inputs where maximum amount of sign information is lost. | Similar to @cite_7 @cite_14 , we also consider multi-variable cases, but unlike those works, we are interested in characterizing the achievable rates to synthesize a special class of Gaussian distributions, namely Gaussian trees. We adopt a specific (but natural) structure for our synthesis scheme to decrease the number of parameters needed to model it. It is worth pointing out that the achievability results given in this paper hold under the assumed structured synthesis framework.
Hence, although by defining optimization problems we show that the proposed method is efficient in terms of both modeling and codebook rates, a converse proof, which would establish the optimality of such a scheme and rate region, is never claimed. | {
"cite_N": [
"@cite_14",
"@cite_7"
],
"mid": [
"2583932153",
"2417541697"
],
"abstract": [
"We study a generalization of Wyner's Common Information toWatanabe's Total Correlation. The first minimizes the description size required for a variable that can make two other random variables conditionally independent. If independence is unattainable, Watanabe's total (conditional) correlation is measure to check just how independent they have become. Following up on earlier work for scalar Gaussians, we discuss the minimization of total correlation for Gaussian vector sources. Using Gaussian auxiliaries, we show one should transform two vectors of length d into d independent pairs, after which a reverse water filling procedure distributes the minimization over all these pairs. Lastly, we show how this minimization of total conditional correlation fits a lossy coding problem by using the Gray-Wyner network as a model for a caching problem.",
"Measuring the relationship between any pair of variables is a rich and active area of research that is central to scientific practice. In contrast, characterizing the common information among any group of variables is typically a theoretical exercise with few practical methods for high-dimensional data. A promising solution would be a multivariate generalization of the famous Wyner common information, but this approach relies on solving an apparently intractable optimization problem. We leverage the recently introduced information sieve decomposition to formulate an incremental version of the common information problem that admits a simple fixed point solution, fast convergence, and complexity that is linear in the number of variables. This scalable approach allows us to demonstrate the usefulness of common information in high-dimensional learning problems. The sieve outperforms standard methods on dimensionality reduction tasks, solves a blind source separation problem that cannot be solved with ICA, and accurately recovers structure in brain imaging data."
]
} |
1608.04493 | 2507318699 | Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of @math and @math respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL | In order to make DNN models portable, a variety of methods have been proposed. @cite_20 analyse the effectiveness of data layout, batching and the usage of Intel fixed-point instructions, achieving a @math speedup on x86 CPUs. @cite_11 explore fast Fourier transforms (FFTs) on GPUs and improve the speed of CNNs by performing convolution calculations in the frequency domain.
"cite_N": [
"@cite_20",
"@cite_11"
],
"mid": [
"587794757",
"1922123711"
],
"abstract": [
"Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model neural network (HMM NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.",
"Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges."
]
} |
1608.04493 | 2507318699 | Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of @math and @math respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL | Vector quantization is a possible way to compress DNNs. @cite_13 explore several such methods and point out the effectiveness of product quantization. HashedNets, proposed by @cite_6 , handles network compression by grouping its parameters into hash buckets. It is trained with a standard backpropagation procedure and should be able to make substantial storage savings. The recently proposed BinaryConnect @cite_1 and Binarized Neural Networks @cite_7 are able to compress DNNs by a factor of @math , while a noticeable accuracy loss is often inevitable.
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_7",
"@cite_6"
],
"mid": [
"2963114950",
"1724438581",
"2319920447",
"2952432176"
],
"abstract": [
"Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"Deep convolutional neural networks (CNN) has become the most promising method for object recognition, repeatedly demonstrating record breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model to be extremely large. This prohibits the usage of deep CNNs on resource limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found in terms of compressing the most storage demanding dense connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1 loss of classification accuracy using the state-of-the-art CNN.",
"We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.",
"As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance."
]
} |
1608.04493 | 2507318699 | Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of @math and @math respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL | This paper follows the idea of network pruning. It starts from the early work of @cite_21 , which makes use of the second derivatives of the loss function to balance training loss and model complexity. As an extension, Hassibi and Stork @cite_12 propose to take non-diagonal elements of the Hessian matrix into consideration, producing compression results with less accuracy loss. In spite of their theoretical appeal, these two methods suffer from high computational complexity when tackling large networks, regardless of the accuracy drop. Very recently, @cite_16 explore magnitude-based pruning in conjunction with retraining, and report promising compression results without accuracy loss. It has also been validated that sparse matrix-vector multiplication can further be accelerated by certain hardware designs, making it more efficient than traditional CPU and GPU calculations @cite_14 .
The drawback of the method in @cite_16 is mostly its potential risk of irretrievable network damage and learning inefficiency. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_12",
"@cite_16"
],
"mid": [
"2285660444",
"2114766824",
"2125389748",
"2963674932"
],
"abstract": [
"State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×104 frames sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy."
]
} |
1608.04493 | 2507318699 | Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of @math and @math respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at this https URL | Our research on network pruning is partly inspired by @cite_16 , not only because it can be very effective to compress DNNs, but also because it makes no assumption on the network structure. In particular, this branch of methods can be naturally combined with many other methods introduced above, to further reduce the network complexity. In fact, @cite_18 have already tested such combinations and obtained excellent results. | {
"cite_N": [
"@cite_18",
"@cite_16"
],
"mid": [
"2119144962",
"2963674932"
],
"abstract": [
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy."
]
} |
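The pruning-with-splicing idea described in the dynamic network surgery row above can be sketched with a binary mask over the weights: connections are pruned when their magnitude becomes small, and spliced back in when a pruned weight grows important again. The two thresholds and the magnitude-based rule below are illustrative assumptions for a minimal sketch, not the paper's actual importance criterion:

```python
import numpy as np

def update_mask(W, mask, t_lo, t_hi):
    """One 'surgery' step: prune weights whose magnitude falls below t_lo,
    and splice back (re-activate) pruned weights whose magnitude has grown
    above t_hi. Weights in between keep their current mask state."""
    mask = mask.copy()
    mag = np.abs(W)
    mask[mag < t_lo] = 0.0   # pruning
    mask[mag > t_hi] = 1.0   # splicing
    return mask

W = np.array([0.05, -0.8, 0.3, -0.02, 1.2])
mask = np.ones_like(W)
mask = update_mask(W, mask, t_lo=0.1, t_hi=0.5)
print(mask)  # -> [0. 1. 1. 0. 1.]
```

Because the mask is re-evaluated on the fly during training, an incorrectly pruned connection (here, any weight that later exceeds the upper threshold) can recover, which is the point of splicing.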
1608.04509 | 2511736818 | The plenoptic camera can capture both angular and spatial information of the rays, enabling 3D reconstruction from a single exposure. The geometry of the recovered scene structure is affected significantly by the calibration of the plenoptic camera. In this paper, we propose a novel unconstrained two-parallel-plane (TPP) model with 7 parameters to describe a 4D light field. By reconstructing scene points from ray-ray association, a 3D projective transformation is deduced to establish the relationship between the scene structure and the TPP parameters. Based on the transformation, we simplify the focused plenoptic camera as a TPP model and calibrate its intrinsic parameters. Our calibration method includes a closed-form solution and a nonlinear optimization by minimizing re-projection error. Experiments on both simulated data and real scene data verify the performance of the calibration on the focused plenoptic camera. | To acquire light fields, various imaging systems have been developed from the traditional camera. @cite_25 presented a camera array to obtain light fields with high spatial and angular resolution. Prior work dealt with the calibration of camera arrays @cite_4 . Unfortunately, applications of camera arrays are limited by their high cost and complex control. In contrast, a micro-lens array (MLA) enables a single camera to record a 4D light field more conveniently and efficiently, though the baseline and spatial resolution are smaller than those of a camera array. Recent work has been devoted to calibrating the intrinsic parameters of plenoptic cameras in two designs @cite_0 @cite_3 , which are quite different according to the image structure of the micro lenses. Moreover, in traditional multi-view geometry, multiple cameras in different poses are defined as a set of unconstrained rays, which is known as the Generalized Camera Model (GCM) @cite_16 . The ambiguity of the reconstructed scene has been discussed in that traditional setting.
For a plenoptic camera, different views of the same scene point are obtained, so the calibration of a plenoptic camera can draw on the theory of traditional multi-view geometry. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_25"
],
"mid": [
"2113642013",
"2133515844",
"",
"2118341165",
"2116361875"
],
"abstract": [
"A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape. In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications which, it turns out, do not depend on having a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration.",
"Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from it, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views",
"",
"We illustrate how to consider a network of cameras as a single generalized camera in a framework proposed by Nayar (2001). We derive the discrete structure from motion equations for generalized cameras, and illustrate the corollaries to epipolar geometry. This formal mechanism allows one to use a network of cameras as if they were a single imaging device, even when they do not share a common center of projection. Furthermore, an analysis of structure from motion algorithms for this imaging model gives constraints on the optimal design of panoramic imaging systems constructed from multiple cameras.",
"The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single center of projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures."
]
} |
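The nonlinear stage of the calibration in the row above minimizes re-projection error. As a sketch of that objective only — using a plain pinhole intrinsics matrix, not the 7-parameter TPP model the paper actually calibrates:

```python
import numpy as np

def reproject(K, X):
    """Project 3D points (given in the camera frame, one per row) with
    intrinsics matrix K, returning pixel coordinates."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]

def reprojection_error(K, X, observed):
    """Mean Euclidean pixel distance between projected and observed points:
    the quantity the nonlinear refinement drives toward zero."""
    return float(np.mean(np.linalg.norm(reproject(K, X) - observed, axis=1)))

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
X = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.0]])
obs = reproject(K, X)                  # noise-free observations
print(reprojection_error(K, X, obs))   # -> 0.0
```

In a real calibration the observations come from detected checkerboard corners and the parameters (here just K) are refined by a nonlinear least-squares solver over this residual.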
1608.04667 | 2510850936 | Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed in the past three decades with varying denoising performances. More recently, having outperformed all conventional methods, deep learning based models have shown great promise. These methods are however limited by the requirement of large training sample sizes and high computational costs. In this paper we show that, using a small sample size, denoising autoencoders constructed using convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost sample size for increased denoising performance. The simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to the human eye. | Although BM3D @cite_22 is considered state-of-the-art in image denoising and is a very well engineered method, @cite_14 showed that a plain multi layer perceptron (MLP) can achieve similar denoising performance. Denoising autoencoders are a recent addition to the image denoising literature. Used as a building block for deep networks, they were introduced by @cite_31 as an extension to classic autoencoders. It was shown that denoising autoencoders can be stacked @cite_25 to form a deep network by feeding the output of one denoising autoencoder to the one below it. @cite_16 proposed image denoising using convolutional neural networks. It was observed that, using a small sample of training images, performance on par with or better than the state-of-the-art based on wavelets and Markov random fields can be achieved. @cite_30 used stacked sparse autoencoders for image denoising and inpainting; it performed on par with K-SVD. @cite_15 experimented with adaptive multi-column deep neural networks for image denoising, built using a combination of stacked sparse autoencoders. This system was shown to be robust to different noise types. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_22",
"@cite_15",
"@cite_31",
"@cite_16",
"@cite_25"
],
"mid": [
"2146337213",
"2037642501",
"2056370875",
"2151503710",
"2025768430",
"2098477387",
"2145094598"
],
"abstract": [
"We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.",
"Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.",
"We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.",
"Stacked sparse denoising autoencoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. To address this limitation, we present the adaptive multi-column stacked sparse denoising autoencoder (AMC-SSDA), a novel technique of combining multiple SSDAs by (1) computing optimal column weights via solving a nonlinear optimization program and (2) training a separate network to predict the optimal weights. We eliminate the need to determine the type of noise, let alone its statistics, at test time and even show that the system can be robust to noise not seen in the training set. We show that state-of-the-art denoising performance can be achieved with a single system on a variety of different noise types. Additionally, we demonstrate the efficacy of AMC-SSDA as a preprocessing (denoising) algorithm by achieving strong classification performance on corrupted MNIST digits.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.",
"We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations."
]
} |
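Denoising results like those surveyed above are conventionally scored with peak signal-to-noise ratio (PSNR) against the clean image. A minimal sketch of the Gaussian corruption step and the metric — the noise level and the constant test image are arbitrary placeholders:

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng):
    """Corrupt an image with values in [0, 1] by additive Gaussian noise,
    clipping the result back into the valid range."""
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def psnr(clean, estimate):
    """Peak signal-to-noise ratio in dB for images in [0, 1]; higher is better."""
    mse = np.mean((clean - estimate) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

rng = np.random.default_rng(0)
img = np.full((32, 32), 0.5)
noisy = add_gaussian_noise(img, sigma=0.1, rng=rng)
print(round(psnr(img, noisy), 1))  # roughly 20 dB for sigma = 0.1
```

A denoiser is then judged by how far it raises the PSNR of its output above that of the noisy input.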
1608.04342 | 2951352790 | We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to 2D single images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state-of-the-art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show our method provides 4D intrinsic decompositions difficult to achieve with previous state-of-the-art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image video decomposition methods on light field images. | Intrinsic decomposition of the shading and albedo components of an image has been a long-standing problem in computer vision and graphics since it was formulated by Barrow and Tenenbaum in the 70s @cite_33 . We review previous intrinsic decomposition algorithms based on their input, and then briefly cover related light field processing. | {
"cite_N": [
"@cite_33"
],
"mid": [
"39428922"
],
"abstract": [
"We suggest that an appropriate role of early visual processing is to describe a scene in terms of intrinsic (vertical) characteristics -- such as range, orientation, reflectance, and incident illumination -- of the surface element visible at each point in the image. Support for this idea comes from three sources: the obvious utility of intrinsic characteristics for higher-level scene analysis; the apparent ability of humans to determine these characteristics, regardless of viewing conditions or familiarity with the scene; and a theoretical argument that such a description is obtainable, by a noncognitive and nonpurposive process, at least, for simple scene domains. The central problem in recovering intrinsic scene characteristics is that the information is confounded in the original light-intensity image: a single intensity value encodes all the characteristics of the corresponding scene point. Recovery depends on exploiting constraints, derived from assumptions about the nature of the scene and the physics of the imaging process."
]
} |
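The intrinsic-image formulation attributed to Barrow and Tenenbaum in the row above factors observed intensity into reflectance (albedo) and shading, I = R · S. In the log domain the product becomes a sum, which is why many decomposition methods work there. A tiny numerical check of that identity, with random placeholder images:

```python
import numpy as np

# Intrinsic image model: I = R * S (pixel-wise), so log I = log R + log S.
rng = np.random.default_rng(1)
R = rng.uniform(0.2, 1.0, (4, 4))   # albedo: material color, piecewise constant in real scenes
S = rng.uniform(0.1, 1.0, (4, 4))   # shading: illumination x geometry, smoothly varying
I = R * S                           # observed image

assert np.allclose(np.log(I), np.log(R) + np.log(S))
```

The decomposition problem is the inverse direction: recover R and S from I alone, which is ill-posed and is why the priors surveyed above are needed.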
1608.04342 | 2951352790 | We present a method to automatically decompose a light field into its intrinsic shading and albedo components. Contrary to previous work targeted to 2D single images and videos, a light field is a 4D structure that captures non-integrated incoming radiance over a discrete angular domain. This higher dimensionality of the problem renders previous state-of-the-art algorithms impractical either due to their cost of processing a single 2D slice, or their inability to enforce proper coherence in additional dimensions. We propose a new decomposition algorithm that jointly optimizes the whole light field data for proper angular coherence. For efficiency, we extend Retinex theory, working on the gradient domain, where new albedo and occlusion terms are introduced. Results show our method provides 4D intrinsic decompositions difficult to achieve with previous state-of-the-art algorithms. We further provide a comprehensive analysis and comparisons with existing intrinsic image video decomposition methods on light field images. | * Single Image. Several works rely on the original Retinex theory @cite_34 to estimate the component. By assuming that shading varies smoothly, either pixel-wise @cite_35 @cite_37 or cluster-based @cite_50 optimization is performed. Clustering strategies have also been used to obtain the component, e.g. assuming a sparse number of reflectances @cite_21 @cite_12 , using a dictionary of learned reflectances from crowd-sourced experiments @cite_36 , or flattening the image to remove shading variations @cite_16 . Alternative methods require user interaction @cite_26 , jointly optimize the shape, albedo and illumination @cite_3 , incorporate priors from data driven statistics @cite_38 , train a Convolutional Neural Network (CNN) with synthetic datasets @cite_27 , or use depth maps acquired with a depth camera to help disambiguate shading from reflectance @cite_0 @cite_46 @cite_32 . 
For a full review of single image methods, we refer the reader to the state-of-the-art @cite_4 . Although some of these algorithms can produce good quality results, they require additional processing for angular coherence, and they do not make use of the implicit information captured by a light field. Our work is based on the Retinex theory, with 2D and 4D scene-based heuristics to classify reflectance gradients. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_26",
"@cite_38",
"@cite_4",
"@cite_36",
"@cite_21",
"@cite_32",
"@cite_16",
"@cite_3",
"@cite_0",
"@cite_27",
"@cite_50",
"@cite_46",
"@cite_34",
"@cite_12"
],
"mid": [
"2116919352",
"2087257250",
"",
"",
"2608400466",
"2076491823",
"",
"2133661850",
"",
"",
"2117751343",
"2951548216",
"2083779601",
"2101856619",
"2164847484",
""
],
"abstract": [
"Interpreting real-world images requires the ability distinguish the different characteristics of the scene that lead to its final appearance. Two of the most important of these characteristics are the shading and reflectance of each point in the scene. We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, given the lighting direction, each image derivative is classified as being caused by shading or a change in the surface's reflectance. The classifiers gather local evidence about the surface's form and color, which is then propagated using the generalized belief propagation algorithm. The propagation step disambiguates areas of the image where the correct classification is not clear from local evidence. We use real-world images to demonstrate results and show how each component of the system affects the results.",
"We propose a method for intrinsic image decomposition based on retinex theory and texture analysis. While most previous methods approach this problem by analyzing local gradient properties, our technique additionally identifies distant pixels with the same reflectance through texture analysis, and uses these nonlocal reflectance constraints to significantly reduce ambiguity in decomposition. We formulate the decomposition problem as the minimization of a quadratic function which incorporates both the retinex constraint and our nonlocal texture constraint. This optimization can be solved in closed form with the standard conjugate gradient algorithm. Extensive experimentation with comparisons to previous techniques validate our method in terms of both decomposition accuracy and runtime efficiency.",
"",
"",
"Intrinsic images are a mid-level representation of an image that decompose the image into reflectance and illumination layers. The reflectance layer captures the color texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and recently in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors they impose on the intrinsic image problem. We introduce a new synthetic ground-truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image-editing applications.",
"Intrinsic image decomposition separates an image into a reflectance layer and a shading layer. Automatic intrinsic image decomposition remains a significant challenge, particularly for real-world scenes. Advances on this longstanding problem have been spurred by public datasets of ground truth data, such as the MIT Intrinsic Images dataset. However, the difficulty of acquiring ground truth data has meant that such datasets cover a small range of materials and objects. In contrast, real-world scenes contain a rich range of shapes and materials, lit by complex illumination. In this paper we introduce Intrinsic Images in the Wild, a large-scale, public dataset for evaluating intrinsic image decompositions of indoor scenes. We create this benchmark through millions of crowdsourced annotations of relative comparisons of material properties at pairs of points in each scene. Crowdsourcing enables a scalable approach to acquiring a large database, and uses the ability of humans to judge material comparisons, despite variations in illumination. Given our database, we develop a dense CRF-based intrinsic image algorithm for images in the wild that outperforms a range of state-of-the-art intrinsic image algorithms. Intrinsic image decomposition remains a challenging problem; we release our code and database publicly to support future research on this problem, available online at http: intrinsic.cs.cornell.edu .",
"",
"We present a technique for estimating intrinsic images from image+depth video, such as that acquired from a Kinect camera. Intrinsic image decomposition in this context has importance in applications like object modeling, in which surface colors need to be recovered without illumination effects. The proposed method is based on two new types of decomposition constraints derived from the multiple viewpoints and reconstructed 3D scene geometry of the video data. The first type provides shading constraints that enforce relationships among the shading components of different surface points according to their similarity in surface orientation. The second type imposes temporal constraints that favor consistency in the intrinsic color of a surface point seen in different video frames, which improves decomposition in cases of view-dependent non-Lambertian reflections. Local and non-local variants of the two constraints are employed in a manner complementary to local and non-local reflectance constraints used in previous works. Together they are formulated within a linear system that allows for efficient optimization. Experimental results demonstrate that each of the new constraints appreciably elevates the quality of intrinsic image estimation, and that they jointly yield decompositions that compare favorably to current techniques.",
"",
"",
"In this paper we extend the “shape, illumination and reflectance from shading” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.",
"We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays a key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities; it produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.",
"Decomposing an input image into its intrinsic shading and reflectance components is a long-standing ill-posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about its reflectance and luminance, we first find clusters of similar reflectance in the image, and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than widely-adopted Retinex-based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset and texture images, along with several applications, proving the versatility of our method. © 2012 Wiley Periodicals, Inc.",
"We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images.",
"Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects",
""
]
} |
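The Retinex gradient-classification heuristic that several of the methods above (and this row's paper) build on can be sketched directly: large log-intensity gradients are attributed to reflectance (albedo edges), small ones to smoothly varying shading. The threshold value and the toy ramp-plus-step image below are assumptions for illustration only:

```python
import numpy as np

def classify_gradients(log_img, thresh=0.1):
    """Classic Retinex heuristic on a log-intensity image: keep gradients
    whose magnitude exceeds `thresh` as reflectance gradients, zero out the
    rest as shading. Returns the (x, y) reflectance-gradient fields."""
    gx = np.diff(log_img, axis=1, append=log_img[:, -1:])
    gy = np.diff(log_img, axis=0, append=log_img[-1:, :])
    rx = np.where(np.abs(gx) > thresh, gx, 0.0)
    ry = np.where(np.abs(gy) > thresh, gy, 0.0)
    return rx, ry

# A sharp step (albedo change) on top of a gentle ramp (shading):
log_img = np.tile(np.linspace(0.0, 0.2, 8), (8, 1))
log_img[:, 4:] += 1.0
rx, ry = classify_gradients(log_img)
# Only the step between columns 3 and 4 survives the threshold;
# the ramp's small gradients are all assigned to shading.
```

A full decomposition would then reintegrate the kept gradients (e.g. by solving a Poisson system) to recover the albedo image; extending the classification with the 4D angular coherence of a light field is what distinguishes the paper above from the single-image methods.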