Dataset schema (column name, type, value-length range):
- aid: string (9 to 15 characters)
- mid: string (7 to 10 characters)
- abstract: string (78 to 2.56k characters)
- related_work: string (92 to 1.77k characters)
- ref_abstract: dict
1702.06728
2589759987
Inspired by the recent advances of image super-resolution using convolutional neural networks (CNNs), we propose a CNN-based block up-sampling scheme for intra frame coding. A block can be down-sampled before being compressed by normal intra coding, and then up-sampled to its original resolution. Different from previous studies on down/up-sampling-based coding, the up-sampling methods in our scheme have been designed by training CNNs instead of being hand-crafted. We explore a new CNN structure for up-sampling, which features deconvolution of feature maps, multi-scale fusion, and residue learning, making the network both compact and efficient. We also design different networks for the up-sampling of luma and chroma components, respectively, where the chroma up-sampling CNN utilizes the luma information to boost its performance. In addition, we design a two-stage up-sampling process, the first stage being within the block-by-block coding loop, and the second stage being performed on the entire frame, so as to refine block boundaries. We also empirically study how to set the coding parameters of down-sampled blocks to pursue frame-level rate-distortion optimization. Our proposed scheme is implemented in the high-efficiency video coding (HEVC) reference software, and a comprehensive set of experiments has been performed to evaluate our methods. Experimental results show that our scheme achieves significant bit savings compared with the HEVC anchor, especially at low bit rates, leading to on average a 5.5% BD-rate reduction on common test sequences and on average a 9.0% BD-rate reduction on ultra-high-definition test sequences.
In this paper, we explore a new five-layer CNN structure for block up-sampling. Key ingredients of previously studied networks, such as residue learning and resolution change embedded in the network, have been adopted in our design. Our network structure is greatly simplified to reduce computational complexity, yet still achieves satisfactory reconstruction quality compared to the state-of-the-art methods @cite_0 @cite_11 .
{ "cite_N": [ "@cite_0", "@cite_11" ], "mid": [ "2951997238", "2505593925" ], "abstract": [ "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.", "One impressive advantage of convolutional neural networks (CNNs) is their ability to automatically learn feature representation from raw pixels, eliminating the need for hand-designed procedures. However, recent methods for single image super-resolution (SR) fail to maintain this advantage. They utilize CNNs in two decoupled steps, i.e., first upsampling the low resolution (LR) image to the high resolution (HR) size with hand-designed techniques (e.g., bicubic interpolation), and then applying CNNs on the upsampled LR image to reconstruct HR results. In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN. As opposed to existing approaches, the proposed method conducts upsampling in the latent feature space with filters that are optimized for the task of image SR. 
In addition, the HR reconstruction is performed in a multi-scale manner to simultaneously incorporate both short- and long-range contextual information, ensuring more accurate restoration of HR images. To facilitate network training, a new training approach is designed, which jointly trains the proposed deep network with a relatively shallow network, leading to faster convergence and superior performance. The proposed method is extensively evaluated on widely adopted data sets and improves on the performance of state-of-the-art methods by a considerable margin. Moreover, in-depth ablation studies are conducted to verify the contribution of different network designs to image SR, providing additional insights for future research." ] }
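The residue-learning idea that recurs in the record above (the network predicts only a correction on top of a cheap interpolation of the low-resolution block) is small enough to sketch directly. This is a minimal NumPy illustration, not the paper's network: `upsample_nn` stands in for the bicubic/deconvolution step, and the residual would in practice come from the trained CNN; all names here are hypothetical.

```python
import numpy as np

def upsample_nn(block: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour up-sampling, standing in for bicubic/deconvolution."""
    return block.repeat(factor, axis=0).repeat(factor, axis=1)

def reconstruct(lr_block: np.ndarray, residual: np.ndarray, factor: int = 2) -> np.ndarray:
    """Residue learning: add a predicted correction to a cheap interpolation
    of the low-resolution block, instead of predicting the full HR block."""
    base = upsample_nn(lr_block, factor)
    return base + residual

lr = np.array([[10.0, 20.0], [30.0, 40.0]])
res = np.zeros((4, 4))          # a real CNN would predict this correction
hr = reconstruct(lr, res)
assert hr.shape == (4, 4)
```

With a zero residual the reconstruction is just the interpolation; learning only the (typically small) correction is what keeps such networks compact.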
1702.07054
2591949924
Cascade is a widely used approach that rejects obvious negative samples at early stages, for learning a better classifier and enabling faster inference. This paper presents the chained cascade network (CC-Net). In CC-Net, the cascaded classifier at a stage is aided by the classification scores from previous stages. Feature chaining is further proposed so that the feature learning for the current cascade stage uses the features of previous stages as prior information. The chained ConvNet features and classifiers of multiple stages are jointly learned in an end-to-end network. In this way, features and classifiers at later stages handle more difficult samples with the help of features and classifiers from previous stages. This yields a consistent boost in detection performance on benchmarks like PASCAL VOC 2007 and ImageNet. Combined with better region proposals, CC-Net leads to a state-of-the-art result of 81.1 mAP on PASCAL VOC 2007.
Deeper ConvNets were found to be effective for image classification and object detection @cite_5 @cite_21 @cite_15 @cite_8 @cite_10 . On the other hand, wide residual networks @cite_22 , inception modules @cite_8 @cite_28 , and multi-region features @cite_4 @cite_27 @cite_7 showed that increasing the width of ConvNets in an effective way led to improvements in image classification accuracy. Our work is complementary to the works above that learn better features; we can use these features to obtain diverse features for the cascade at different stages. In our work, features of the same depth are divided into different cascade stages and communicate by feature chaining. This design, which was not investigated in previous works, improves the ability of features to handle more difficult examples in later cascade stages.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_27", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "1932624639", "2401231614", "", "2950179405", "", "1487583988", "", "2951583185", "1686810756", "2949650786" ], "abstract": [ "We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin.", "Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. 
For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at this https URL", "", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. 
In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. 
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
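The residual reformulation described in the last abstract above, y = F(x) + x, is compact enough to sketch. A hypothetical NumPy toy, assuming a single linear map with ReLU as F (not any paper's actual block): the point is that when F is driven to zero the block reduces to the identity, which is part of why very deep residual stacks are easier to optimize.

```python
import numpy as np

def residual_block(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """y = F(x) + x: the layer learns the residual F, not the full mapping.
    F is a single linear map with ReLU here, purely for illustration."""
    fx = np.maximum(weight @ x, 0.0)  # F(x)
    return fx + x                     # identity shortcut

x = np.array([1.0, -2.0, 3.0])
w = np.zeros((3, 3))                  # with F == 0 the block is the identity
y = residual_block(x, w)
assert np.allclose(y, x)
```

Stacking such blocks keeps an unobstructed identity path through the network, so depth adds refinements rather than replacing the signal.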
1702.06803
2592184081
Obtaining flow-level measurements, similar to those provided by NetFlow/IPFIX, with OpenFlow is challenging as it requires the installation of an entry per flow in the flow tables. This approach does not scale well with the number of concurrent flows in the traffic, as the number of entries in the flow tables is limited and small. Flow monitoring rules may also interfere with forwarding or other rules already present in the switches, which are often defined at different granularities than the flow level. In this paper, we present a transparent and scalable flow-based monitoring solution that is fully compatible with current off-the-shelf OpenFlow switches. As in NetFlow/IPFIX, we aggregate packets into flows directly in the switches and asynchronously send traffic reports to an external collector. In order to reduce the overhead, we implement three different traffic sampling methods depending on the OpenFlow features available in the switch. We developed our complete flow monitoring solution within OpenDaylight and evaluated its accuracy in a testbed with Open vSwitch. Our experimental results using real-world traffic traces show that the proposed sampling methods are accurate and can effectively reduce the resource requirements of flow measurements in OpenFlow.
Since its inception in 2008, OpenFlow @cite_16 has become a dominant protocol for the southbound interface (between the control and data planes) in SDN. It is impossible to foresee whether OpenFlow will ever evolve into a standard measurement technology, but it could potentially be a definitive solution for traffic measurement. It can maintain records with flow statistics and includes an interface that allows retrieving measurements at different aggregation levels, either passively (when a flow entry expires) or actively (by querying the switch for statistics).
{ "cite_N": [ "@cite_16" ], "mid": [ "2147118406" ], "abstract": [ "This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too" ] }
1702.06803
2592184081
Obtaining flow-level measurements, similar to those provided by NetFlow/IPFIX, with OpenFlow is challenging as it requires the installation of an entry per flow in the flow tables. This approach does not scale well with the number of concurrent flows in the traffic, as the number of entries in the flow tables is limited and small. Flow monitoring rules may also interfere with forwarding or other rules already present in the switches, which are often defined at different granularities than the flow level. In this paper, we present a transparent and scalable flow-based monitoring solution that is fully compatible with current off-the-shelf OpenFlow switches. As in NetFlow/IPFIX, we aggregate packets into flows directly in the switches and asynchronously send traffic reports to an external collector. In order to reduce the overhead, we implement three different traffic sampling methods depending on the OpenFlow features available in the switch. We developed our complete flow monitoring solution within OpenDaylight and evaluated its accuracy in a testbed with Open vSwitch. Our experimental results using real-world traffic traces show that the proposed sampling methods are accurate and can effectively reduce the resource requirements of flow measurements in OpenFlow.
In light of the above, we present a monitoring solution which emulates the NetFlow/IPFIX operation with OpenFlow and implements flow sampling. For each sampled flow, we maintain a flow entry in the switch; each entry records the duration (in seconds and nanoseconds) and the packet and byte counts. We use timeouts to define when these records expire and are, therefore, reported to the controller. A similar approach was previously used in @cite_3 to assess the accuracy of measurements and timeouts in some OpenFlow switches. However, their approach is not scalable, as it requires installing an entry in the flow tables for every single flow observed in the traffic, assumes that all rules have been deployed proactively for every flow that will be observed in the network, and does not address the problem of how monitoring rules interfere with the rest of the rules installed in the switch (e.g., forwarding rules). In contrast, we present a complete flow monitoring solution that has the following novel features:
{ "cite_N": [ "@cite_3" ], "mid": [ "2593983841" ], "abstract": [ "Since its initial proposal in 2008, OpenFlow has evolved to become today’s main enabler of Software-Defined Networking. OpenFlow specifies operations for network forwarding devices and a communication protocol between data and control planes. Although not primarily designed as a traffic measurement tool, many works have proposed to use measured data from OpenFlow to support, e.g., traffic engineering or security in OpenFlow-enabled networks. These works, however, generally do not question or address the quality of actual measured data obtained from OpenFlow devices. Therefore, in this paper we assess the quality of measurements in real OpenFlow devices from multiple vendors. We demonstrate that inconsistencies and measurement artifacts can be found due to particularities of different OpenFlow implementations, making it impractical to deploy an OpenFlow measurement-based approach in a network consisting of devices from multiple vendors. In addition, we show that the accuracy of measured packet and byte counts and duration for flows vary among the tested devices, and in some cases counters are not even implemented for the sake of forwarding performance." ] }
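The flow-record mechanism described in the related-work paragraph above (per-flow entries holding packet/byte counters, expired on a timeout and exported to a collector) can be sketched as a toy cache. All names here (`FlowCache`, `packet`, `expire`) are hypothetical; real NetFlow/IPFIX and OpenFlow implementations additionally track hard timeouts, TCP flags, and more.

```python
class FlowCache:
    """Toy NetFlow/IPFIX-style cache: per-flow packet/byte counters plus an
    idle timeout after which the record is 'exported' to a collector."""

    def __init__(self, idle_timeout: float = 15.0):
        self.idle_timeout = idle_timeout
        self.flows = {}  # 5-tuple -> [packets, bytes, first_seen, last_seen]

    def packet(self, key, length, now):
        """Account one packet of `length` bytes to the flow `key`."""
        rec = self.flows.setdefault(key, [0, 0, now, now])
        rec[0] += 1
        rec[1] += length
        rec[3] = now

    def expire(self, now):
        """Return and remove records idle longer than the timeout (the export)."""
        done = {k: r for k, r in self.flows.items() if now - r[3] > self.idle_timeout}
        for k in done:
            del self.flows[k]
        return done

cache = FlowCache(idle_timeout=15.0)
key = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)  # made-up 5-tuple
cache.packet(key, 1500, now=0.0)
cache.packet(key, 500, now=1.0)
exported = cache.expire(now=20.0)
assert exported[key][:2] == [2, 2000]
```

Sampling, as in the paper, would simply decide per packet (or per flow) whether `packet()` is called at all, trading accuracy for fewer table entries.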
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square-matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings scale to large data sets, as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
Conversely, in signal processing, data is often complex-valued @cite_19 , and the complex-valued counterparts of these decompositions are used there. Joint diagonalization is also a much more common tool in signal processing than in machine learning for decomposing sets of (complex) dense square matrices @cite_16 @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_16" ], "mid": [ "2147544992", "", "2142638745" ], "abstract": [ "Comon's (1994) well-known scheme for independent component analysis (ICA) is based on the maximal diagonalization, in a least-squares sense, of a higher-order cumulant tensor. In a previous paper, we proved that for fourth-order cumulants, the computation of an elementary Jacobi rotation is equivalent to the computation of the best rank-1 approximation of a fourth-order tensor. In this paper, we show that for third-order tensors, the computation of an elementary Jacobi rotation is again equivalent to a best rank-1 approximation; however, here, it is a matrix that has to be approximated. This favorable computational load makes it attractive to do \"something third-order-like\" for fourth-order cumulant tensors as well. We show that simultaneous optimal diagonalization of \"third-order tensor slices\" of the fourth-order cumulant is a suitable strategy. This \"simultaneous third-order tensor diagonalization\" approach (STOTD) is similar in spirit to the efficient JADE-algorithm.", "", "Separation of sources consists of recovering a set of signals of which only instantaneous linear mixtures are observed. In many situations, no a priori information on the mixing matrix is available: The linear mixture should be \"blindly\" processed. This typically occurs in narrowband array processing applications when the array manifold is unknown or distorted. This paper introduces a new source separation technique exploiting the time coherence of the source signals. In contrast with other previously reported techniques, the proposed approach relies only on stationary second-order statistics that are based on a joint diagonalization of a set of covariance matrices. Asymptotic performance analysis of this method is carried out; some numerical simulations are provided to illustrate the effectiveness of the proposed method." ] }
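The "Hermitian dot product" mentioned in the abstract above can be written down concretely. A minimal NumPy sketch of a ComplEx-style trilinear score Re(Σ_k w_r[k] · e_s[k] · conj(e_o[k])); the two-dimensional embeddings are made up for illustration. The score is asymmetric in subject and object in general, but becomes symmetric when the relation embedding is purely real.

```python
import numpy as np

def complex_score(e_s: np.ndarray, w_r: np.ndarray, e_o: np.ndarray) -> float:
    """Trilinear Hermitian score Re(sum_k w_r[k] * e_s[k] * conj(e_o[k]))."""
    return float(np.real(np.sum(w_r * e_s * np.conj(e_o))))

# Made-up 2-dimensional embeddings, purely for illustration.
e_s = np.array([1 + 2j, 1j])
e_o = np.array([2 + 0j, 1 - 1j])
w_r = np.array([1 + 1j, 2 - 1j])

# The score is asymmetric in (subject, object) ...
assert complex_score(e_s, w_r, e_o) == -3.0
assert complex_score(e_o, w_r, e_s) == 3.0

# ... but a purely real relation embedding yields a symmetric score.
w_real = np.array([1.0 + 0j, 2.0 + 0j])
assert complex_score(e_s, w_real, e_o) == complex_score(e_o, w_real, e_s)
```

This is why a single composition function can model both symmetric and antisymmetric relations: the real and imaginary parts of the relation embedding control the two behaviours.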
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square-matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings scale to large data sets, as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
Some works on recommender systems use complex numbers as an encoding facility, to merge two real-valued relations, similarity and liking, into one single complex-valued matrix which is then decomposed with complex embeddings @cite_53 @cite_6 . Still, unlike our work, it is not real data that is decomposed in the complex domain.
{ "cite_N": [ "@cite_53", "@cite_6" ], "mid": [ "1997707785", "2060594535" ], "abstract": [ "A typical recommender setting is based on two kinds of relations: similarity between users (or between objects) and the taste of users towards certain objects. In environments such as online dating websites, these two relations are difficult to separate, as the users can be similar to each other, but also have preferences towards other users, i.e., rate other users. In this paper, we present a novel and unified way to model this duality of the relations by using split-complex numbers, a number system related to the complex numbers that is used in mathematics, physics and other fields. We show that this unified representation is capable of modeling both notions of relations between users in a joint expression and apply it for recommending potential partners. In experiments with the Czech dating website Libimseti.cz we show that our modeling approach leads to an improvement over baseline recommendation methods in this scenario.", "Recommendation can be reduced to a sub-problem of link prediction, with specific nodes (users and items) and links (similar relations among users items, and interactions between users and items). However, previous link prediction approaches must be modified to suit recommendation instances because they neglect to distinguish the fundamental relations similar vs. dissimilar and like vs. dislike. Here, we propose a novel and unified way to cope with this deficiency, modeling the relational dualities using complex numbers. Previous works can still be used in this representation. In experiments with the MovieLens dataset and the Android software website AppChina.com, the proposed Complex Representation-based Link Prediction method (CORLP) achieves significant performance in accuracy and coverage compared with state-of-the-art methods. In addition, the results reveal several new findings. 
First, performance is improved, when the user and item degrees are taken into account. Second, the item degree plays a more important role than the user degree in the final recommendation. Given its notable performance, we are preparing to use the method in a commercial setting, AppChina.com, for application recommendation." ] }
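The encoding trick the related-work paragraph above describes, merging two real-valued relations into a single complex-valued matrix, is straightforward to illustrate. A minimal NumPy sketch with made-up toy matrices (the first cited work actually uses split-complex numbers, which have a different multiplication rule; plain complex numbers are used here for simplicity):

```python
import numpy as np

# Hypothetical toy data: user-user "similarity" and "liking" relations.
similarity = np.array([[0.0, 1.0], [1.0, 0.0]])
liking     = np.array([[0.0, 5.0], [2.0, 0.0]])

# Merge the two real-valued relations into one complex-valued matrix ...
C = similarity + 1j * liking

# ... which a complex factorization can then decompose jointly; the
# original relations are recovered as the real and imaginary parts.
assert np.allclose(C.real, similarity)
assert np.allclose(C.imag, liking)
```

The point of contrast made in the paragraph is that here complex numbers are only a container for two real relations, whereas in the paper's approach the real data itself is decomposed in the complex domain.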
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square-matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings scale to large data sets, as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
Many knowledge graphs have recently arisen, pushed by the W3C recommendation to use the resource description framework (RDF) @cite_35 for data representation. Examples of such knowledge graphs include DBpedia @cite_7 , Freebase @cite_8 and the Google Knowledge Vault @cite_39 . Motivating applications of knowledge graph completion include question answering @cite_32 and, more generally, probabilistic querying of knowledge bases @cite_18 @cite_48 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_7", "@cite_8", "@cite_48", "@cite_32", "@cite_39" ], "mid": [ "", "1767833073", "102708294", "2094728533", "1930472418", "2952792693", "2016753842" ], "abstract": [ "", "Over the last few years, RDF has been used as a knowledge representation model in a wide variety of domains. Some domains are full of uncertainty. Thus, it is desired to process and manage probabilistic RDF data. The core operation of queries on an RDF probabilistic database is computing the probability of the result to a query. In this paper, we describe a general framework for supporting SPARQL queries on probabilistic RDF databases. In particular, we consider transitive inference capability for RDF instance data. We show that the find operation for an atomic query with the transitive property can be formalized as the problem of computing path expressions on the transitive relation graph and we also propose an approximate algorithm for computing path expressions efficiently. Finally, we implement and experimentally evaluate our approach.", "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.", "Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. 
Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.", "An increasing amount of data is becoming available in the form of large triple stores, with the Semantic Web's linked open data cloud (LOD) as one of the most prominent examples. Data quality and completeness are key issues in many community-generated data stores, like LOD, which motivates probabilistic and statistical approaches to data representation, reasoning and querying. In this paper we address the issue from the perspective of probabilistic databases, which account for uncertainty in the data via a probability distribution over all database instances. We obtain a highly compressed representation using the recently developed RESCAL approach and demonstrate experimentally that efficient querying can be obtained by exploiting inherent features of RESCAL via sub-query approximations of deterministic views.", "Building computers able to answer questions on any subject is a long standing goal of artificial intelligence. Promising progress has recently been achieved by methods that learn to map questions to logical forms or database queries. Such approaches can be effective but at the cost of either large amounts of human-labeled data or by defining lexicons and grammars tailored by practitioners. In this paper, we instead take the radical approach of learning to map questions to vectorial feature representations. By mapping answers into the same space one can query any knowledge base independent of its schema, without requiring any grammar or lexicon. 
Our method is trained with a new optimization procedure combining stochastic gradient descent followed by a fine-tuning step using the weak supervision provided by blending automatically and collaboratively generated resources. We empirically demonstrate that our model can capture meaningful signals from its noisy supervision leading to major improvements over paralex, the only existing method able to be trained on similar weakly labeled data.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods." ] }
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings are scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
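The Hermitian dot product scoring described in the abstract can be sketched in a few lines. This is an illustrative reconstruction with made-up toy embeddings, not the paper's implementation; it shows how a complex-valued relation embedding scores a triple asymmetrically while a purely real one is forced to be symmetric:

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """Score a triple (s, r, o) as Re(<w_r, e_s, conj(e_o)>),
    the Hermitian dot product described in the abstract."""
    return float(np.real(np.sum(w_r * e_s * np.conj(e_o))))

# Toy complex embeddings (illustrative values only).
e_s = np.array([1 + 1j, 2 + 0j])
e_o = np.array([0 + 1j, 1 + 1j])
w_r = np.array([1 + 1j, 1 - 1j])    # complex relation: can be asymmetric
w_sym = np.array([1 + 0j, 1 + 0j])  # real relation: always symmetric

# Swapping subject and object conjugates the Hermitian product, so only
# the imaginary part of w_r contributes to the asymmetry of the score.
```

With these values, `complex_score(e_s, w_r, e_o)` and `complex_score(e_o, w_r, e_s)` differ, while swapping the entities under `w_sym` leaves the score unchanged.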
First approaches to relational learning relied upon probabilistic graphical models @cite_49 , such as Bayesian networks @cite_30 and Markov logic networks @cite_5 @cite_12 .
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_12", "@cite_49" ], "mid": [ "2126185296", "1977970897", "", "1585529040" ], "abstract": [ "A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with “flat” data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. 
Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.", "", "Handling inherent uncertainty and exploiting compositional structure are fundamental to understanding and designing large-scale systems. Statistical relational learning builds on ideas from probability theory and statistics to address uncertainty while incorporating tools from logic, databases and programming languages to represent structure. In Introduction to Statistical Relational Learning, leading researchers in this emerging area of machine learning describe current formalisms, models, and algorithms that enable effective and robust reasoning about richly structured systems and data. The early chapters provide tutorials for material used in later chapters, offering introductions to representation, inference and learning in graphical models, and logic. The book then describes object-oriented approaches, including probabilistic relational models, relational Markov networks, and probabilistic entity-relationship models as well as logic-based formalisms including Bayesian logic programs, Markov logic, and stochastic logic programs. Later chapters discuss such topics as probabilistic models with unknown objects, relational dependency networks, reinforcement learning in relational domains, and information extraction. By presenting a variety of approaches, the book highlights commonalities and clarifies important differences among proposed approaches and, along the way, identifies important representational and algorithmic issues. Numerous applications are provided throughout.Lise Getoor is Assistant Professor in the Department of Computer Science at the University of Maryland. Ben Taskar is Assistant Professor in the Computer and Information Science Department at the University of Pennsylvania." ] }
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings are scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
With the first embedding models, asymmetry of relations was quickly seen as a problem, and asymmetric extensions of tensor factorizations were studied, mostly by either considering independent embeddings @cite_20 , or representing relations as matrices instead of vectors in the RESCAL model @cite_22 , or both @cite_29 . Direct extensions were based on uni-, bi- and trigram latent factors for triple data @cite_4 , as well as a low-rank relation matrix @cite_21 . propose a two-layer model where subject and object embeddings are first separately combined with the relation embedding, then each intermediate representation is combined into the final score.
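The RESCAL idea of representing a relation as a full matrix rather than a vector can be illustrated with a minimal sketch (toy values, not taken from the cited work). Because the relation matrix need not be symmetric, swapping subject and object can change the score, which is how asymmetric relations are captured:

```python
import numpy as np

def rescal_score(e_s, W_r, e_o):
    """RESCAL-style bilinear score e_s^T W_r e_o.
    An asymmetric W_r yields different scores for (s, r, o) and (o, r, s)."""
    return float(e_s @ W_r @ e_o)

# Toy embeddings and a deliberately asymmetric relation matrix.
e_s = np.array([1.0, 0.0, 2.0])
e_o = np.array([0.0, 1.0, 1.0])
W_r = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])
```

Note the trade-off the surrounding text alludes to: the per-relation parameter count grows quadratically in the embedding dimension, unlike vector-based models.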
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_29", "@cite_21", "@cite_20" ], "mid": [ "614875374", "205829674", "2123228027", "2101802482", "1910578190" ], "abstract": [ "This paper tackles the problem of endogenous link prediction for knowledge base completion. Knowledge bases can be represented as directed graphs whose nodes correspond to entities and edges to relationships. Previous attempts either consist of powerful systems with high capacity to model complex connectivity patterns, which unfortunately usually end up overfitting on rare relationships, or in approaches that trade capacity for simplicity in order to fairly model all relationships, frequent or not. In this paper, we propose TATEC, a happy medium obtained by complementing a high-capacity model with a simpler one, both pre-trained separately and then combined. We present several variants of this model with different kinds of regularization and combination strategies and show that this approach outperforms existing methods on different types of relationships by achieving state-of-the-art results on four benchmarks of the literature.", "Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. 
Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.", "We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us \"understand\" a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.", "Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to breakdown when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. 
We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, a NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations.", "The Semantic Web fosters novel applications targeting a more efficient and satisfying exploitation of the data available on the web, e.g. faceted browsing of linked open data. Large amounts and high diversity of knowledge in the Semantic Web pose the challenging question of appropriate relevance ranking for producing fine-grained and rich descriptions of the available data, e.g. to guide the user along most promising knowledge aspects. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between resources and predicates (i.e. support for link semantics in the linked data model). In this paper, we present TripleRank, a novel approach for faceted authority ranking in the context of RDF knowledge bases. TripleRank captures the additional latent semantics of Semantic Web data by means of statistical methods in order to produce richer descriptions of the available data. We model the Semantic Web by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic links. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to Web authority ranking with HITS. The result are groupings of resources and predicates that characterize their authority and navigational (hub) properties with respect to identified topics. We have applied TripleRank to multiple data sets from the linked open data community and gathered encouraging feedback in a user evaluation where TripleRank results have been exploited in a faceted browsing scenario." ] }
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings are scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
Pairwise interaction models were also considered to improve prediction performance. For example, the Universal Schema approach @cite_34 factorizes a 2D unfolding of the tensor (a matrix of entity pairs vs. relations), while extend this also to other pairs. also consider augmenting the knowledge graph facts by extracting them from textual data, as does . Injecting prior knowledge in the form of Horn clauses into the objective loss of the Universal Schema model has also been considered @cite_52 . enhance the RESCAL model to take into account information about the entity types. For recommender systems (thus with different subject and object sets of entities), proposed a non-commutative extension of the CP decomposition model. More recently, Gaifman models that learn neighborhood embeddings of local structures in the knowledge graph have shown competitive performance @cite_24 .
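The Universal-Schema-style factorization mentioned above treats the (entity pair, relation) matrix as a low-rank product of two factor matrices. A minimal sketch with made-up factors (not the cited model's training procedure, just the scoring side):

```python
import numpy as np

# Low-rank factors: U embeds entity pairs (rows), V embeds relations.
# Values are illustrative only.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
V = np.array([[2.0, 0.0],
              [1.0, 1.0]])

def pair_rel_score(p, r):
    """Predicted compatibility of entity pair p with relation r."""
    return float(U[p] @ V[r])

# All pair-relation scores at once: one matrix product.
scores = U @ V.T
```

In the actual approach, rows cover both surface-form predicates extracted from text and relations from pre-existing databases, which is what lets the model reason over both jointly.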
{ "cite_N": [ "@cite_24", "@cite_34", "@cite_52" ], "mid": [ "2547316200", "1852412531", "2296268288" ], "abstract": [ "We present discriminative Gaifman models, a novel family of relational machine learning models. Gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases (KBs). Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing. Gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations which is a common situation in open-world KBs. We present the core ideas of Gaifman models and apply them to large-scale relational learning problems. We also discuss the ways in which Gaifman models relate to some existing relational machine learning approaches.", "© 2013 Association for Computational Linguistics. Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. 
More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms stateof- the-Art distant supervision.", "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae." ] }
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings are scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
The original multilinear model is symmetric in subject and object for every relation @cite_1 and achieves good performance on the FB15K and WN18 data sets. However, this is likely due to the absence of true negatives in these data sets, as discussed in Section .
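The symmetry limitation can be made concrete with a minimal sketch of the real-valued trilinear score (toy values, purely illustrative): the product commutes in the subject and object embeddings, so every relation is necessarily scored as symmetric.

```python
import numpy as np

def multilinear_score(e_s, w_r, e_o):
    """Trilinear product sum_i w_r[i] * e_s[i] * e_o[i].
    Elementwise multiplication commutes, so swapping e_s and e_o
    can never change the score -- every relation is symmetric."""
    return float(np.sum(w_r * e_s * e_o))

# Toy real-valued embeddings.
e_s = np.array([1.0, 2.0, -1.0])
e_o = np.array([0.5, 1.0, 3.0])
w_r = np.array([2.0, -1.0, 1.0])
```

This is exactly the constraint that complex-valued embeddings relax: replacing the real product with a Hermitian one breaks the commutativity in the entity arguments.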
{ "cite_N": [ "@cite_1" ], "mid": [ "2250635077" ], "abstract": [ "Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (, 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure." ] }
1702.06879
2590699358
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges. State-of-the-art embedding models propose different trade-offs between modeling expressiveness, and time and space complexity. We reconcile both expressiveness and complexity through the use of complex-valued embeddings and explore the link between such complex-valued embeddings and unitary diagonalization. We corroborate our approach theoretically and show that all real square matrices---thus all possible relation adjacency matrices---are the real part of some unitarily diagonalizable matrix. This result opens the door to many other applications of square matrix factorization. Our approach based on complex embeddings is arguably simple, as it only involves a Hermitian dot product, the complex counterpart of the standard dot product between real vectors, whereas other methods resort to more and more complicated composition functions to increase their expressiveness. The proposed complex embeddings are scalable to large data sets as they remain linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.
A recent novel way to handle antisymmetry is the Holographic Embeddings () model by . In this model, the circular correlation is used for combining entity embeddings, measuring the covariance between embeddings at different dimension shifts. This model has been shown to be equivalent to the model @cite_44 @cite_45 .
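The circular correlation at the heart of this model can be computed in the Fourier domain, which is also the bridge to the equivalence results cited above. A small sketch with toy vectors (not taken from the cited papers), checking the FFT-based computation against the naive O(d^2) definition:

```python
import numpy as np

def circular_correlation(a, b):
    """[a (star) b][k] = sum_i a[i] * b[(i + k) % d], via the FFT identity
    F(a star b) = conj(F(a)) * F(b) for real-valued a and b."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(e_s, w_r, e_o):
    """HolE-style score: project the correlation of the entity
    embeddings onto the relation embedding."""
    return float(w_r @ circular_correlation(e_s, e_o))

# Toy vectors and the naive definition for comparison.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 0.0, -1.0])
d = len(a)
naive = np.array([sum(a[i] * b[(i + k) % d] for i in range(d))
                  for k in range(d)])
```

Because correlation (unlike convolution) is not commutative, the combined representation of (s, o) differs from that of (o, s), which is how this construction handles antisymmetric relations.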
{ "cite_N": [ "@cite_44", "@cite_45" ], "mid": [ "2593682006", "2733421109" ], "abstract": [ "We show the equivalence of two state-of-the-art link prediction knowledge graph completion methods: 's holographic embedding and 's complex embedding. We first consider a spectral version of the holographic embedding, exploiting the frequency domain in the Fourier transform for efficient computation. The analysis of the resulting method reveals that it can be viewed as an instance of the complex embedding with certain constraints cast on the initial vectors upon training. Conversely, any complex embedding can be converted to an equivalent holographic embedding.", "Embeddings of knowledge graphs have received significant attention due to their excellent performance for tasks like link prediction and entity resolution. In this short paper, we are providing a comparison of two state-of-the-art knowledge graph embeddings for which their equivalence has recently been established, i.e., ComplEx and HolE [Nickel, Rosasco, and Poggio, 2016; , 2016; Hayashi and Shimbo, 2017]. First, we briefly review both models and discuss how their scoring functions are equivalent. We then analyze the discrepancy of results reported in the original articles, and show experimentally that they are likely due to the use of different loss functions. In further experiments, we evaluate the ability of both models to embed symmetric and antisymmetric patterns. Finally, we discuss advantages and disadvantages of both models and under which conditions one would be preferable to the other." ] }
1702.06298
2952382767
Understanding the influence of a product is crucially important for making informed business decisions. This paper introduces a new type of skyline queries, called uncertain reverse skyline, for measuring the influence of a probabilistic product in uncertain data settings. More specifically, given a dataset of probabilistic products P and a set of customers C, an uncertain reverse skyline of a probabilistic product q retrieves all customers c in C which include q as one of their preferred products. We present efficient pruning ideas and techniques for processing the uncertain reverse skyline query of a probabilistic product using R-Tree data index. We also present an efficient parallel approach to compute the uncertain reverse skyline and influence score of a probabilistic product. Our approach significantly outperforms the baseline approach derived from the existing literature. The efficiency of our approach is demonstrated by conducting extensive experiments with both real and synthetic datasets.
Though there exist many works on parallelizing the standard skyline queries ( @cite_12 , @cite_25 , @cite_17 , @cite_27 , @cite_20 , and @cite_16 for a survey), only a few works are devoted to parallelizing reverse skyline queries. @cite_6 propose an approach for parallelizing both dynamic and reverse skyline queries in MapReduce by devising a novel quad-tree based data index. Later, the authors extend their quad-tree based index in @cite_3 to evaluate probabilistic dynamic and reverse skylines. Recently, @cite_0 propose an improvement of the quad-tree based index of @cite_6 for evaluating the dynamic skyline, as well as monochromatic and bichromatic reverse skylines, in parallel. Here, we propose an efficient approach for parallelizing the computation of the uncertain reverse skyline query result and the influence score of an arbitrary probabilistic product using an R-Tree. Our approach for computing the influence score of a probabilistic product differs significantly from the one proposed in @cite_15 : we compute the dynamic skyline probabilities only of the products that appear in the uncertain dynamic skyline of the customers belonging to the uncertain reverse skyline of the query product, not for all customers in the dataset.
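The reverse skyline semantics underlying these works can be illustrated with a brute-force sketch of the certain (non-probabilistic) case. This is not the paper's R-Tree-based algorithm, only the bare definition that the pruning techniques accelerate: a customer belongs to the reverse skyline of product q when no other product dynamically dominates q with respect to that customer's preferences.

```python
import numpy as np

def dynamically_dominates(p1, p2, c):
    """p1 dynamically dominates p2 w.r.t. customer c when p1 is at least
    as close to c in every dimension and strictly closer in one."""
    d1, d2 = np.abs(p1 - c), np.abs(p2 - c)
    return bool(np.all(d1 <= d2) and np.any(d1 < d2))

def reverse_skyline_customers(q, products, customers):
    """Customers whose dynamic skyline contains product q, i.e. no
    competitor product dynamically dominates q for that customer."""
    return [c for c in customers
            if not any(dynamically_dominates(p, q, c) for p in products)]

# Toy 2D instance: one competitor product and two customers.
q = np.array([1.0, 1.0])
products = [np.array([2.0, 2.0])]
customers = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
influenced = reverse_skyline_customers(q, products, customers)
```

In the uncertain setting of the paper, each product is probabilistic, so dominance becomes a probability and the influence score aggregates over the customers retrieved this way.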
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_0", "@cite_27", "@cite_12", "@cite_15", "@cite_16", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2092850406", "2200550062", "", "2034384507", "2050682716", "2466409142", "", "", "", "2127286557" ], "abstract": [ "The skyline operator and its variants such as dynamic skyline and reverse skyline operators have attracted considerable attention recently due to their broad applications. However, computations of such operators are challenging today since there is an increasing trend of applications to deal with big data. For such data-intensive applications, the MapReduce framework has been widely used recently. In this paper, we propose efficient parallel algorithms for processing the skyline and its variants using MapReduce. We first build histograms to effectively prune out nonskyline (non-reverse skyline) points in advance. We next partition data based on the regions divided by the histograms and compute candidate (reverse) skyline points for each region independently using MapReduce. Finally, we check whether each candidate point is actually a (reverse) skyline point in every region independently. Our performance study confirms the effectiveness and scalability of the proposed algorithms.", "There has been an increased growth in a number of applications that naturally generate large volumes of uncertain data. By the advent of such applications, the support of advanced analysis queries such as the skyline and its variant operators for big uncertain data has become important. In this paper, we propose the effective parallel algorithms using MapReduce to process the probabilistic skyline queries for uncertain data modeled by both discrete and continuous models. We present three filtering methods to identify probabilistic non-skyline objects in advance. 
We next develop a single MapReduce phase algorithm PS-QP-MR by utilizing space partitioning based on a variant of quadtrees to distribute the instances of objects effectively and the enhanced algorithm PS-QPF-MR by applying the three filtering methods additionally. We also propose the workload balancing technique to balance the workload of reduce functions based on the number of machines available. Finally, we present the brute-force algorithms PS-BR-MR and PS-BRF-MR with partitioning randomly and applying the filtering methods. In our experiments, we demonstrate the efficiency and scalability of PS-QPF-MR compared to the other algorithms.", "", "This paper studies the problem of computing the skyline of a vast-sized spatial dataset in SpatialHadoop, an extension of Hadoop that supports spatial operations efficiently. The problem is particularly interesting due to advent of Big Spatial Data that are generated by modern applications run on mobile devices, and also because of the importance of the skyline operator for decision-making and supporting business intelligence. To this end, we present a scalable and efficient framework for skyline query processing that operates on top of SpatialHadoop, and can be parameterized by individual techniques related to filtering of candidate points as well as merging of local skyline sets. Then, we introduce two novel algorithms that follow the pattern of the framework and boost the performance of skyline query processing. Our algorithms employ specific optimizations based on effective filtering and efficient merging, the combination of which is responsible for improved efficiency. We compare our solution against the state-of-the-art skyline algorithm in SpatialHadoop. The results show that our techniques are more efficient and outperform the competitor significantly, especially in the case of large skyline output size.", "During the last decades, data management and storage have become increasingly distributed. 
Advanced query operators, such as skyline queries, are necessary in order to help users to handle the huge amount of available data by identifying a set of interesting data objects. Skyline query processing in highly distributed environments poses inherent challenges and demands and requires non-traditional techniques due to the distribution of content and the lack of global knowledge. This paper surveys this interesting and still evolving research area, so that readers can easily obtain an overview of the state-of-the-art. We outline the objectives and the main principles that any distributed skyline approach has to fulfill, leading to useful guidelines for developing algorithms for distributed skyline processing. We review in detail existing approaches that are applicable for highly distributed environments, clarify the assumptions of each approach, and provide a comparative performance analysis. Moreover, we study the skyline variants each approach supports. Our analysis leads to a taxonomy of existing approaches. Finally, we present interesting research topics on distributed skyline computation that have not yet been explored.", "With the development of the economy, products are significantly enriched, and uncertainty has been their inherent quality. The probabilistic dynamic skyline (PDS) query is a powerful tool for customers to use in selecting products according to their preferences. However, this query suffers several limitations: it requires the specification of a probabilistic threshold, which reports undesirable results and disregards important results; it only focuses on the objects that have large dynamic skyline probabilities; and, additionally, the results are not stable. To address this concern, in this paper, we formulate an uncertain dynamic skyline (UDS) query over a probabilistic product set. Furthermore, we propose effective pruning strategies for the UDS query, and integrate them into effective algorithms. 
In addition, a novel query type, namely the top @math favorite probabilistic products (TFPP) query, is presented. The TFPP query is utilized to select @math products which can meet the needs of a customer set at the maximum level. To tackle the TFPP query, we propose a TFPP algorithm and its efficient parallelization. Extensive experiments with a variety of experimental settings illustrate the efficiency and effectiveness of our proposed algorithms.", "", "", "", "In this paper, we design and analyze parallel algorithms for skyline queries. The skyline of a multidimensional set consists of the points for which no other point exists that is at least as good along every dimension. As a framework for parallel computation, we use both the MP model proposed in Koutris and Suciu (2011), which requires that the data is perfectly load-balanced, and a variation of the model in Afrati and Ullman (2010), the GMP model, which demands weaker load balancing constraints. In addition to load balancing, we want to minimize the number of blocking steps, where all processors must wait and synchronize. We propose a 2-step algorithm in the MP model for any dimension of the dataset, as well a 1-step algorithm for the case of 2 and 3 dimensions. Finally, we present a 1-step algorithm in the GMP model for any number of dimensions and a 1-step algorithm in the MP model for uniform distributions of data points." ] }
1702.06602
2592305621
Metric learning methods for dimensionality reduction in combination with k-Nearest Neighbors (kNN) have been extensively deployed in many classification, data embedding, and information retrieval applications. However, most of these approaches involve pairwise training data comparisons, and thus have quadratic computational complexity with respect to the size of training set, preventing them from scaling to fairly big datasets. Moreover, during testing, comparing test data against all the training data points is also expensive in terms of both computational cost and resources required. Furthermore, previous metrics are either too constrained or too expressive to be well learned. To effectively solve these issues, we present an exemplar-centered supervised shallow parametric data embedding model, using a Maximally Collapsing Metric Learning (MCML) objective. Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing. We also empirically demonstrate, using several benchmark datasets, that for classification in two-dimensional embedding space, our approach not only gains speedup of kNN by hundreds of times, but also outperforms state-of-the-art supervised embedding approaches.
Metric learning methods and their applications have been comprehensively surveyed in @cite_13 @cite_20 . Among them, our proposed method en-HOPE is closely related to the ones that can be used for dimensionality reduction and data visualization, including MCML @cite_12 , NCA @cite_5 , LMNN @cite_8 , nonlinear LMNN @cite_21 , and their deep learning extensions such as dt-MCML @cite_0 , dt-NCA @cite_0 , and DNet-kNN @cite_3 . en-HOPE is also related to neighborhood-modeling dimensionality reduction methods such as LPP @cite_10 , t-SNE @cite_2 , its parametric implementation SNE-encoder @cite_14 and deep parametric implementation pt-SNE @cite_15 . The objective functions of all these related methods have at least quadratic computational complexity with respect to the size of training set due to pairwise training data comparisons required for either loss evaluations or target neighborhood constructions. Our work is also closely related to the RVML method @cite_17 , which suffers scalability issues as MCML does.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_10", "@cite_21", "@cite_3", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_20", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "", "2106053110", "2154872931", "", "", "2143797877", "2187089797", "", "2116516955", "2121949863", "", "2104752854", "" ], "abstract": [ "", "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.", "Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. 
LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.", "", "", "Deep learning has been successfully applied to perform non-linear embedding. In this paper, we present supervised embedding techniques that use a deep network to collapse classes. The network is pre-trained using a stack of RBMs, and finetuned using approaches that try to collapse classes. The finetuning is inspired by ideas from NCA, but it uses a Student t-distribution to model the similarities of data points belonging to the same class in the embedding. We investigate two types of objective functions: deep t-distributed MCML (dt-MCML) and deep t-distributed NCA (dt-NCA). Our experiments on two handwritten digit data sets reveal the strong performance of dt-MCML in supervised parametric data visualization, whereas dt-NCA outperforms alternative techniques when embeddings with more than two or three dimensions are constructed, e.g., to obtain good classification performances. 
Overall, our results demonstrate the advantage of using a deep architecture and a heavy-tailed t-distribution for measuring pairwise similarities in supervised embedding.", "We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.", "", "The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. 
We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.", "The metric learning problem is concerned with learning a distance function tuned to a particular task, and has been shown to be useful when used in conjunction with nearest-neighbor methods and other techniques that rely on distances or similarities. This survey presents an overview of existing research in metric learning, including recent progress on scaling to high-dimensional feature spaces and to data sets with an extremely large number of data points. A goal of the survey is to present as unified as possible a framework under which existing research on metric learning can be cast. The first part of the survey focuses on linear metric learning approaches, mainly concentrating on the class of Mahalanobis distance learning methods. We then discuss nonlinear metric learning approaches, focusing on the connections between the nonlinear and linear approaches. Finally, we discuss extensions of metric learning, as well as applications to a variety of problems in computer vision, text analysis, program analysis, and multimedia. Full text available at: http: dx.doi.org 10.1561 2200000019", "", "We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. 
We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance.", "" ] }
1702.06602
2592305621
Metric learning methods for dimensionality reduction in combination with k-Nearest Neighbors (kNN) have been extensively deployed in many classification, data embedding, and information retrieval applications. However, most of these approaches involve pairwise training data comparisons, and thus have quadratic computational complexity with respect to the size of training set, preventing them from scaling to fairly big datasets. Moreover, during testing, comparing test data against all the training data points is also expensive in terms of both computational cost and resources required. Furthermore, previous metrics are either too constrained or too expressive to be well learned. To effectively solve these issues, we present an exemplar-centered supervised shallow parametric data embedding model, using a Maximally Collapsing Metric Learning (MCML) objective. Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing. We also empirically demonstrate, using several benchmark datasets, that for classification in two-dimensional embedding space, our approach not only gains speedup of kNN by hundreds of times, but also outperforms state-of-the-art supervised embedding approaches.
en-HOPE is closely related to a recent sample compression method called Stochastic Neighbor Compression (SNC) @cite_6 for accelerating kNN classification in a high-dimensional input feature space. SNC learns a set of high-dimensional exemplars by optimizing a modified objective function of NCA. en-HOPE differs from SNC in several aspects: First, their objective functions are different; Second, en-HOPE learns a nonlinear metric based on a shallow model for dimensionality reduction and data visualization, but SNC does not have such capabilities; Third, en-HOPE does not necessarily learn exemplars; instead, they can be precomputed. We will compare en-HOPE to SNC in the experiments to evaluate the compression ability of en-HOPE; however, the focus of en-HOPE is data embedding and visualization, not sample compression in a high-dimensional space.
{ "cite_N": [ "@cite_6" ], "mid": [ "2138457014" ], "abstract": [ "We present Stochastic Neighbor Compression (SNC), an algorithm to compress a dataset for the purpose of k-nearest neighbor (kNN) classification. Given training data, SNC learns a much smaller synthetic data set, that minimizes the stochastic 1-nearest neighbor classification error on the training data. This approach has several appealing properties: due to its small size, the compressed set speeds up kNN testing drastically (up to several orders of magnitude, in our experiments); it makes the kNN classifier substantially more robust to label noise; on 4 of 7 data sets it yields lower test error than kNN on the entire training set, even at compression ratios as low as 2 ; finally, the SNC compression leads to impressive speed ups over kNN even when kNN and SNC are both used with ball-tree data structures, hashing, and LMNN dimensionality reduction--demonstrating that it is complementary to existing state-of-the-art algorithms to speed up kNN classification and leads to substantial further improvements." ] }
1702.06673
2593945573
Cascades on social and information networks have been a tremendously popular subject of study in the past decade, and there is a considerable literature on phenomena such as diffusion mechanisms, virality, cascade prediction, and peer network effects. Against the backdrop of this research, a basic question has received comparatively little attention: how desirable are cascades on a social media platform from the point of view of users? While versions of this question have been considered from the perspective of the producers of cascades, any answer to this question must also take into account the effect of cascades on their audience --- the viewers of the cascade who do not directly participate in generating the content that launched it. In this work, we seek to fill this gap by providing a consumer perspective of information cascades. Users on social and information networks play the dual role of producers and consumers, and our work focuses on how users perceive cascades as consumers. Starting from this perspective, we perform an empirical study of the interaction of Twitter users with retweet cascades. We measure how often users observe retweets in their home timeline, and observe a phenomenon that we term the Impressions Paradox: the share of impressions for cascades of size k decays much more slowly than the frequency of cascades of size k. Thus, the audience for cascades can be quite large even for rare large cascades. We also measure audience engagement with retweet cascades in comparison to non-retweeted or organic content. Our results show that cascades often rival or exceed organic content in engagement received per impression. This result is perhaps surprising in that consumers didn't opt in to see tweets from these authors. Furthermore, although cascading content is widely popular, one would expect it to eventually reach parts of the audience that may not be interested in the content. 
Motivated by the tension in these empirical findings, we posit a simple theoretical model that focuses on the effect of cascades on the audience (rather than the cascade producers). Our results on this model highlight the balance between retweeting as a high-quality content selection mechanism and the role of network users in filtering irrelevant content. In particular, the results suggest that together these two effects enable the audience to consume a high quality stream of content in the presence of cascades.
There has been extensive work on on-line information diffusion. This has included studies of news @cite_13 @cite_7 @cite_8 , recommendations @cite_16 , quotes @cite_20 , hashtags on Twitter @cite_9 @cite_0 @cite_6 @cite_5 @cite_11 , information flow on Twitter @cite_19 and memes on Facebook @cite_21 @cite_4 . Past work has also investigated methodological issues including definitions of virality @cite_3 , the problem of prediction @cite_4 , the trade-off between precision and recall in cascading content @cite_1 , and the role of mathematical epidemic models @cite_10 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_19", "@cite_5", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "1996263819", "2964586292", "1686065478", "2289705509", "2145446394", "90853004", "2153187222", "1806085624", "2178843456", "2114544578", "2112896229", "1551048630", "1994473607", "2152284345", "2953303434", "2233019047" ], "abstract": [ "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "Why are certain pieces of online content (e.g., advertisements, videos, news articles) more viral than others? This article takes a psychological approach to understanding diffusion. 
Using a unique data set of all the New York Times articles published over a three-month period, the authors examine how emotion shapes virality. The results indicate that positive content is more viral than negative content, but the relationship between emotion and social transmission is more complex than valence alone. Virality is partially driven by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low-arousal, or deactivating, emotions (e.g., sadness) is less viral. These results hold even when the authors control for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental re...", "Exposure to news, opinion and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using de-identified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks, and examine the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook’s algorithmically ranked News Feed, and further studied users’ choices to click through to ideologically discordant content. Compared to algorithmic ranking, individuals’ choices about what to consume had a stronger effect limiting exposure to cross-cutting content.", "Information cascades on social networks, such as retweet cascades on Twitter, have been often viewed as an epidemiological process, with the associated notion of virality to capture popular cascades that spread across the network. 
The notion of structural virality or average path length has been posited as a measure of global spread. In this paper, we argue that this simple epidemiological view, though analytically compelling, is not the entire story. We first show empirically that the classical SIR diffusion process on the Twitter graph, even with the best possible distribution of infectiousness parameter, cannot explain the nature of observed retweet cascades on Twitter. More specifically, rather than spreading further from the source as the SIR model would predict, many cascades that have several retweets from direct followers, die out quickly beyond that. We show that our empirical observations can be reconciled if we take interests of users and tweets into account. In particular, we consider a model where users have multi-dimensional interests, and connect to other users based on similarity in interests. Tweets are correspondingly labeled with interests, and propagate only in the subgraph of interested users via the SIR process. In this model, interests can be either narrow or broad, with the narrowest interest corresponding to a star graph on the interested users, with the root being the source of the tweet, and the broadest interest spanning the whole graph. We show that if tweets are generated using such a mix of interests, coupled with a varying infectiousness parameter, then we can qualitatively explain our observation that cascades die out much more quickly than is predicted by the SIR model. In the same breath, this model also explains how cascades can have large size, but low \"structural virality\" or average path length.", "There is a widespread intuitive sense that different kinds of information spread differently on-line, but it has been difficult to evaluate this question quantitatively since it requires a setting where many different kinds of information spread in a shared environment. 
Here we study this issue on Twitter, analyzing the ways in which tokens known as hashtags spread on a network defined by the interactions among Twitter users. We find significant variation in the ways that widely-used hashtags on different topics spread. Our results show that this variation is not attributable simply to differences in \"stickiness,\" the probability of adoption based on one or more exposures, but also to a quantity that could be viewed as a kind of \"persistence\" - the relative extent to which repeated exposures to a hashtag continue to have significant marginal effects. We find that hashtags on politically controversial topics are particularly persistent, with repeated exposures continuing to have unusually large marginal effects on adoption; this provides, to our knowledge, the first large-scale validation of the \"complex contagion\" principle from sociology, which posits that repeated exposures to an idea are particularly crucial when the idea is in some way controversial or contentious. Among other findings, we discover that hashtags representing the natural analogues of Twitter idioms and neologisms are particularly non-persistent, with the effect of multiple exposures decaying rapidly relative to the first exposure. We also study the subgraph structure of the initial adopters for different widely-adopted hashtags, again finding structural differences across topics. We develop simulation-based and generative models to analyze how the adoption dynamics interact with the network structure of the early adopters on which a hashtag spreads.", "When users post photos on Facebook, they have the option of allowing their friends, followers, or anyone at all to subsequently reshare the photo. A portion of the billions of photos posted to Facebook generates cascades of reshares, enabling many additional users to see, like, comment, and reshare the photos. 
In this paper we present characteristics of such cascades in aggregate, finding that a small fraction of photos account for a significant proportion of reshare activity and generate cascades of non-trivial size and depth. We also show that the true influence chains in such cascades can be much deeper than what is visible through direct attribution. To illuminate how large cascades can form, we study the diffusion trees of two widely distributed photos: one posted on President Barack Obama’s page following his reelection victory, and another posted by an individual Facebook user hoping to garner enough likes for a cause. We show that the two cascades, despite achieving comparable total sizes, are markedly different in their time evolution, reshare depth distribution, predictability of subcascade sizes, and the demographics of users who propagate them. The findings suggest not only that cascades can achieve considerable size but that they can do so in distinct ways.", "The diffusion of information on online social and information networks has been a popular topic of study in recent years, but attention has typically focused on speed of dissemination and recall (i.e. the fraction of users getting a piece of information). In this paper, we study the complementary notion of the precision of information diffusion. Our model of information dissemination is \"broadcast-based'', i.e., one where every message (original or forwarded) from a user goes to a fixed set of recipients, often called the user's friends'' or followers'', as in Facebook and Twitter. The precision of the diffusion process is then defined as the fraction of received messages that a user finds interesting. On first glance, it seems that broadcast-based information diffusion is a \"blunt\" targeting mechanism, and must necessarily suffer from low precision. 
Somewhat surprisingly, we present preliminary experimental and analytical evidence to the contrary: it is possible to simultaneously have high precision (i.e., precision bounded below by a constant), high recall, and low diameter! We start by presenting a set of conditions on the structure of user interests, and analytically show the necessity of each of these conditions for obtaining high precision. We also present preliminary experimental evidence from Twitter verifying that these conditions are satisfied. We then prove that the Kronecker-graph based generative model of satisfies these conditions given an appropriate and natural definition of user interests. Further, we show that this model also has high precision, high recall, and low diameter. We finally present preliminary experimental evidence showing Twitter has high precision, validating our conclusion. This is perhaps a first step towards a formal understanding of the immense popularity of online social networks as an information dissemination mechanism.", "People's interests and people's social relationships are intuitively connected, but understanding their interplay and whether they can help predict each other has remained an open question. We examine the interface of two decisive structures forming the backbone of online social media: the graph structure of social networks - who connects with whom - and the set structure of topical affiliations - who is interested in what. In studying this interface, we identify key relationships whereby each of these structures can be understood in terms of the other. The context for our analysis is Twitter, a complex social network of both follower relationships and communication relationships. On Twitter, \"hashtags\" are used to label conversation topics, and we examine hashtag usage alongside these social structures. 
We find that the hashtags that users adopt can predict their social relationships, and also that the social relationships between the initial adopters of a hashtag can predict the future popularity of that hashtag. By studying weighted social relationships, we observe that while strong reciprocated ties are the easiest to predict from hashtag structure, they are also much less useful than weak directed ties for predicting hashtag popularity. Importantly, we show that computationally simple structural determinants can provide remarkable performance in both tasks. While our analyses focus on Twitter, we view our findings as broadly applicable to topical affiliations and social relationships in a host of diverse contexts, including the movies people watch, the brands people like, or the locations people frequent.", "Viral products and ideas are intuitively understood to grow through a person-to-person diffusion process analogous to the spread of an infectious disease; however, until recently it has been prohibitively difficult to directly observe purportedly viral events, and thus to rigorously quantify or characterize their structural properties. Here we propose a formal measure of what we label “structural virality” that interpolates between two conceptual extremes: content that gains its popularity through a single, large broadcast and that which grows through multiple generations with any one individual directly responsible for only a fraction of the total adoption. We use this notion of structural virality to analyze a unique data set of a billion diffusion events on Twitter, including the propagation of news stories, videos, images, and petitions. We find that across all domains and all sizes of events, online diffusion is characterized by surprising structural diversity; that is, popular events regularly grow via both broadcast and viral mechanisms, as well as essentially all conceivable combinations of the two. 
Nevertheless, we find that structural virality is typically low, and remains so independent of size, suggesting that popularity is largely driven by the size of the largest broadcast. Finally, we attempt to replicate these findings with a model of contagion characterized by a low infection rate spreading on a scale-free network. We find that although several of our empirical findings are consistent with such a model, it fails to replicate the observed diversity of structural virality, thereby suggesting new directions for future modeling efforts. This paper was accepted by Lorin Hitt, information systems.", "Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error. Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features.", "We study several longstanding questions in media communications research, in the context of the microblogging service Twitter, regarding the production, flow, and consumption of information. To do so, we exploit a recently introduced feature of Twitter known as \"lists\" to distinguish between elite users - by which we mean celebrities, bloggers, and representatives of media outlets and other formal organizations - and ordinary users. 
Based on this classification, we find a striking concentration of attention on Twitter, in that roughly 50% of URLs consumed are generated by just 20K elite users, where the media produces the most information, but celebrities are the most followed. We also find significant homophily within categories: celebrities listen to celebrities, while bloggers listen to bloggers, etc.; however, bloggers in general rebroadcast more information than the other categories. Next we re-examine the classical \"two-step flow\" theory of communications, finding considerable support for it on Twitter. Third, we find that URLs broadcast by different categories of users or containing different types of content exhibit systematically different lifespans. And finally, we examine the attention paid by the different user categories to different news topics.", "We examine the growth, survival, and context of 256 novel hashtags during the 2012 U.S. presidential debates. Our analysis reveals that the trajectories of hashtag use fall into two distinct classes: \"winners\" that emerge more quickly and are sustained for longer periods of time than other \"also-rans\" hashtags. We propose a \"conversational vibrancy\" framework to capture dynamics of hashtags based on their topicality, interactivity, diversity, and prominence. Statistical analyses of the growth and persistence of hashtags reveal novel relationships between features of this framework and the relative success of hashtags. Specifically, retweets always contribute to faster hashtag adoption, replies extend the life of \"winners\" while having no effect on \"also-rans.\" This is the first study on the lifecycle of hashtag adoption and use in response to purely exogenous shocks. 
We draw on theories of uses and gratification, organizational ecology, and language evolution to discuss these findings and their implications for understanding social influence and collective action in social media more generally.", "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.", "In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. 
Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.", "Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased --- the choice of words and sentence structure --- can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts --- that is, more portable. We also show how the concept of \"memorable language\" can be extended across domains.", "Compounding of natural language units is a very common phenomenon. In this paper, we show, for the first time, that Twitter hashtags, which could be considered as correlates of such linguistic units, undergo compounding. We identify reasons for this compounding and propose a prediction model that can identify with 77.07% accuracy if a pair of hashtags compounding in the near future (i.e., 2 months after compounding) shall become popular. At longer times T = 6, 10 months the accuracies are 77.52% and 79.13%, respectively. This technique has strong implications for trending hashtag recommendation since newly formed hashtag compounds can be recommended early, even before the compounding has taken place. 
Further, humans can predict compounds with an overall accuracy of only 48.7% (treated as baseline). Notably, while humans can discriminate the relatively easier cases, the automatic framework is successful in classifying the relatively harder cases." ] }
1702.06673
2593945573
Cascades on social and information networks have been a tremendously popular subject of study in the past decade, and there is a considerable literature on phenomena such as diffusion mechanisms, virality, cascade prediction, and peer network effects. Against the backdrop of this research, a basic question has received comparatively little attention: how desirable are cascades on a social media platform from the point of view of users? While versions of this question have been considered from the perspective of the producers of cascades, any answer to this question must also take into account the effect of cascades on their audience --- the viewers of the cascade who do not directly participate in generating the content that launched it. In this work, we seek to fill this gap by providing a consumer perspective of information cascades. Users on social and information networks play the dual role of producers and consumers, and our work focuses on how users perceive cascades as consumers. Starting from this perspective, we perform an empirical study of the interaction of Twitter users with retweet cascades. We measure how often users observe retweets in their home timeline, and observe a phenomenon that we term the Impressions Paradox: the share of impressions for cascades of size k decays much more slowly than the frequency of cascades of size k. Thus, the audience for cascades can be quite large even for rare large cascades. We also measure audience engagement with retweet cascades in comparison to non-retweeted or organic content. Our results show that cascades often rival or exceed organic content in engagement received per impression. This result is perhaps surprising in that consumers didn't opt in to see tweets from these authors. Furthermore, although cascading content is widely popular, one would expect it to eventually reach parts of the audience that may not be interested in the content. 
Motivated by the tension in these empirical findings, we posit a simple theoretical model that focuses on the effect of cascades on the audience (rather than the cascade producers). Our results on this model highlight the balance between retweeting as a high-quality content selection mechanism and the role of network users in filtering irrelevant content. In particular, the results suggest that together these two effects enable the audience to consume a high quality stream of content in the presence of cascades.
In addition, it has been shown that only a very small fraction of cascades become viral @cite_3 , but the ones that do become viral cover a large, diverse set of users. In other words, if you are the source of a cascade you have a low chance of creating a viral cascade; but once we switch to the consumer's point of view, we observe that a large fraction of a user's timeline is made up of these diffusing pieces of content. Another theme related to this work has been the observation that a small number of "elite" users produce a substantial fraction of original content on Twitter @cite_19 . As with other studies, this one also focused on active cascade participants, and our work is differentiated by its focus on the cascade audience.
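The Impressions Paradox described above can be illustrated numerically: when cascade sizes follow a heavy-tailed distribution, large cascades are extremely rare, yet their share of total impressions (size times frequency) decays one power of k more slowly than their frequency. The following toy sketch assumes a power-law size distribution; the exponent and the size grid are illustrative assumptions, not values from the paper.

```python
# Toy illustration of the "Impressions Paradox": cascade sizes follow a
# heavy-tailed (power-law) distribution, so large cascades are rare, yet
# their share of total impressions (size x frequency) decays one power
# of k more slowly than their frequency. Exponent and sizes are
# illustrative assumptions.

def freq(k, alpha=2.5):
    """Relative frequency of cascades of size k under a power law."""
    return k ** -alpha

sizes = [1, 10, 100, 1000]
total_f = sum(freq(k) for k in sizes)
total_i = sum(k * freq(k) for k in sizes)

freq_share = {k: freq(k) / total_f for k in sizes}      # how often size k occurs
impr_share = {k: k * freq(k) / total_i for k in sizes}  # audience share of size k
```

Under these assumed numbers, a size-1000 cascade is about 10^7.5 times rarer than a size-1 cascade, but its impressions share is only about 10^4.5 times smaller, so rare large cascades can still occupy a sizeable share of consumers' timelines.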
{ "cite_N": [ "@cite_19", "@cite_3" ], "mid": [ "2112896229", "2178843456" ], "abstract": [ "We study several longstanding questions in media communications research, in the context of the microblogging service Twitter, regarding the production, flow, and consumption of information. To do so, we exploit a recently introduced feature of Twitter known as \"lists\" to distinguish between elite users - by which we mean celebrities, bloggers, and representatives of media outlets and other formal organizations - and ordinary users. Based on this classification, we find a striking concentration of attention on Twitter, in that roughly 50% of URLs consumed are generated by just 20K elite users, where the media produces the most information, but celebrities are the most followed. We also find significant homophily within categories: celebrities listen to celebrities, while bloggers listen to bloggers, etc.; however, bloggers in general rebroadcast more information than the other categories. Next we re-examine the classical \"two-step flow\" theory of communications, finding considerable support for it on Twitter. Third, we find that URLs broadcast by different categories of users or containing different types of content exhibit systematically different lifespans. And finally, we examine the attention paid by the different user categories to different news topics.", "Viral products and ideas are intuitively understood to grow through a person-to-person diffusion process analogous to the spread of an infectious disease; however, until recently it has been prohibitively difficult to directly observe purportedly viral events, and thus to rigorously quantify or characterize their structural properties. 
Here we propose a formal measure of what we label “structural virality” that interpolates between two conceptual extremes: content that gains its popularity through a single, large broadcast and that which grows through multiple generations with any one individual directly responsible for only a fraction of the total adoption. We use this notion of structural virality to analyze a unique data set of a billion diffusion events on Twitter, including the propagation of news stories, videos, images, and petitions. We find that across all domains and all sizes of events, online diffusion is characterized by surprising structural diversity; that is, popular events regularly grow via both broadcast and viral mechanisms, as well as essentially all conceivable combinations of the two. Nevertheless, we find that structural virality is typically low, and remains so independent of size, suggesting that popularity is largely driven by the size of the largest broadcast. Finally, we attempt to replicate these findings with a model of contagion characterized by a low infection rate spreading on a scale-free network. We find that although several of our empirical findings are consistent with such a model, it fails to replicate the observed diversity of structural virality, thereby suggesting new directions for future modeling efforts. This paper was accepted by Lorin Hitt, information systems." ] }
1702.06355
2590174509
Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.
Object detection in videos. Since the introduction of the VID task by the ImageNet challenge, there have been multiple object detection systems for detecting objects in videos. These methods focus on post-processing the class scores produced by static-image detectors to enforce temporal consistency of the scores. Han et al. @cite_32 associated initial detection results into sequences. Weaker class scores along the sequences within the same video were boosted to improve the initial frame-by-frame detection results. Kang et al. @cite_17 generated new tubelet proposals by applying tracking algorithms to static-image bounding box proposals. The class scores along the tubelet were first evaluated by the static-image object detector and then re-scored by a 1D CNN model. The same group @cite_20 also tried a different strategy for tubelet classification and re-scoring. In addition, initial detection boxes were propagated to nearby frames according to optical flows between frames, and the class scores not belonging to the top classes were suppressed to enforce temporal consistency of class scores.
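The score-boosting idea in this line of work can be sketched as follows. The blending rule, function names, and numbers are illustrative assumptions, not the exact re-scoring used in the cited methods.

```python
# Illustrative sketch of temporal re-scoring along a tubelet: weaker
# per-frame class scores are boosted toward a tubelet-level confidence
# (here the mean of the top-k frame scores). The blending rule and all
# numbers are assumptions for illustration, not the cited papers'
# exact method.

def rescore_tubelet(scores, top_k=5, alpha=0.5):
    """scores: per-frame detection scores along one tubelet."""
    ranked = sorted(scores, reverse=True)
    k = min(top_k, len(ranked))
    anchor = sum(ranked[:k]) / k  # tubelet-level confidence
    # Pull each frame's score toward the tubelet-level confidence,
    # never lowering a score that is already strong.
    return [max(s, alpha * s + (1 - alpha) * anchor) for s in scores]

# Frames 2 and 4 are hard to detect in isolation but are boosted by
# the strong detections elsewhere in the same tubelet.
boosted = rescore_tubelet([0.9, 0.2, 0.85, 0.1, 0.8])
```

The key property is that temporal consistency is enforced only upward: confident detections are untouched, while isolated weak scores inherit confidence from the rest of the tubelet.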
{ "cite_N": [ "@cite_32", "@cite_20", "@cite_17" ], "mid": [ "2282391807", "2336589871", "2335901184" ], "abstract": [ "Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation, (ii) object classification, and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed 3rd in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).", "The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks, such as GoogleNet and VGG, novel object detection frameworks, such as R-CNN and its successors, Fast R-CNN, and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos. Temporal and contextual information of videos are not fully investigated and utilized. In this paper, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neural networks. The proposed framework won the newly introduced object-detection-from-video task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015. 
Code is publicly available at https://github.com/myfavouritekk/T-CNN.", "Deep Convolutional Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, the performance has been significantly increased last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (RCNN)). The recently introduced ImageNet [6] task on object detection from video (VID) brings the object detection task into the video domain, in which objects' locations at each frame are required to be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. Code is available at https://github.com/myfavouritekk/vdetlib." ] }
1702.06355
2590174509
Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.
Object localization in videos. There have been works and datasets @cite_3 @cite_25 @cite_8 on object localization in videos. However, they have a simplified problem setting, where each video is assumed to contain only one known or unknown class and requires annotating only one of the objects in each frame.
{ "cite_N": [ "@cite_25", "@cite_3", "@cite_8" ], "mid": [ "95926497", "1952794764", "1973054923" ], "abstract": [ "In this paper, we tackle the problem of performing efficient co-localization in images and videos. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images or videos. Building upon recent state-of-the-art methods, we show how we are able to naturally incorporate temporal terms and constraints for video co-localization into a quadratic programming framework. Furthermore, by leveraging the Frank-Wolfe algorithm (or conditional gradient), we show how our optimization formulations for both images and videos can be reduced to solving a succession of simple integer programs, leading to increased efficiency in both memory and speed. To validate our method, we present experimental results on the PASCAL VOC 2007 dataset for images and the YouTube-Objects dataset for videos, as well as a joint combination of the two.", "Learning a new object class from cluttered training images is very challenging when the location of object instances is unknown. Previous works generally require objects covering a large portion of the images. We present a novel approach that can cope with extensive clutter as well as large scale and appearance variations between object instances. To make this possible we propose a conditional random field that starts from generic knowledge and then progressively adapts to the new class. Our approach simultaneously localizes object instances while learning an appearance model specific for the class. We demonstrate this on the challenging PASCAL VOC 2007 dataset. Furthermore, our method enables to train any state-of-the-art object detector in a weakly supervised fashion, although it would normally require object location annotations.", "Object detectors are typically trained on a large set of still images annotated by bounding-boxes. 
This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in terms of quality from still images taken by a good camera. Thus, we formulate the learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone." ] }
1702.06362
2592236440
Identifying significant location categories visited by mobile users is the key to a variety of applications. This is an extremely challenging task due to the possible deviation between the estimated location coordinate and the actual location, which could be on the order of kilometers. To estimate the actual location category more precisely, we propose a novel tensor factorization framework, through several key observations including the intrinsic correlations between users, to infer the most likely location categories within the location uncertainty circle. In addition, the proposed algorithm can also predict where users are even in the absence of location information. In order to efficiently solve the proposed framework, we propose a parameter-free and scalable optimization algorithm by effectively exploring the sparse and low-rank structure of the tensor. Our empirical studies show that the proposed algorithm is both efficient and effective: it can solve problems with millions of users and billions of location updates, and also provides superior prediction accuracies on real-world location updates and check-in data sets.
Location semantic meaning identification. To go one step further, many location-aware applications also care about the semantic meanings of stay points. To address this problem, a typical idea is to first cluster the stay points to identify regions of interest, and then use a cluster ID to represent stay points belonging to this cluster. Popular clustering approaches in this area include time-based clustering, density-based clustering, and partitioning clustering, as summarized in @cite_33 . In particular, the authors in @cite_28 use a variant of the @math -means algorithm to cluster GPS data for detecting users' significant locations. In addition, a density-based clustering algorithm was applied in @cite_35 to infer individual life patterns from GPS trajectory data. The authors in @cite_6 estimate user similarities in terms of semantic location history using a hierarchical clustering-based approach. The work in @cite_30 identifies home and work locations by first transforming user trajectory records into user-location signatures, and then applying @math -means clustering on these signatures.
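As a concrete illustration of the clustering step described above, the following is a minimal k-means sketch over GPS stay points. The naive initialization, the coordinates, and the choice of k are illustrative assumptions; the cited works use more elaborate time-based and density-based variants.

```python
# Minimal k-means sketch for grouping GPS stay points into significant
# locations (e.g. "home" and "work"). Naive initialization (first k
# points) and the coordinates are illustrative assumptions; the cited
# works use more elaborate variants (time-based, density-based, etc.).
import math

def kmeans(points, k, iters=20):
    centers = list(points[:k])  # naive init; k-means++ would be better
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each stay point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Two tight groups of stay points, standing in for "home" and "work".
stays = [(40.01, -83.01), (40.02, -83.00), (40.00, -83.02),
         (40.50, -82.50), (40.51, -82.49), (40.49, -82.51)]
centers, clusters = kmeans(stays, k=2)
```

Each resulting cluster center then stands in for a significant location, and the cluster ID can replace the raw coordinates of every stay point assigned to it.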
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_28", "@cite_6" ], "mid": [ "2506170039", "2109182426", "2098914134", "", "2075190119" ], "abstract": [ "There is a growing interest in leveraging geo-spatial data to provide location-aware services. With a large amount of collected geo-spatial data, a crucial step is to identify important \"base\" locations (e.g., home or work) and understand users' behavior at these locations. In this paper, we propose an unsupervised collaborative learning approach to identifying home and work locations of individuals from geo-spatial trajectory data. Our approach transforms user trajectory records into intuitive and insightful user-location signatures, clusters these signatures, and then identifies location types based on cluster characteristics. This clustering model can be used to identify base locations for new users. We validate this approach using Open Street Map and Foursquare location tags and obtain an accuracy of 80%.", "The increasing pervasiveness of location-acquisition technologies (GPS, GSM networks, etc.) enables people to conveniently log their location history into spatial-temporal data, thus giving rise to the necessity as well as opportunity to discover valuable knowledge from this type of data. In this paper, we propose the novel notion of individual life pattern, which captures an individual's general life style and regularity. Concretely, we propose the life pattern normal form (the LP-normal form) to formally describe which kind of life regularity can be discovered from location history; then we propose the LP-Mine framework to effectively retrieve life patterns from raw individual GPS data. Our definition of life pattern focuses on significant places of individual life and considers diverse properties to combine the significant places. LP-Mine is comprised of two phases: the modelling phase and the mining phase. 
The modelling phase pre-processes GPS data into an available format as the input of the mining phase. The mining phase applies separate strategies to discover different types of pattern. Finally, we conduct extensive experiments using GPS data collected by volunteers in the real world to verify the effectiveness of the framework.", "The discovery of a person's personally important places involves obtaining the physical locations for a person's places that matter to his daily life and routines. This problem is driven by the requirements from emerging location-aware applications, which allow a user to pose queries and obtain information in reference to places, e.g., \"home\", \"work\" or \"Northwest Health Club\". It is a challenge to map from physical locations to personally meaningful places because GPS tracks are continuous data both spatially and temporally, while most existing data mining techniques expect discrete data. Previous work has explored algorithms to discover personal places from location data. However, they all have limitations. Our work proposes a two-step approach that discretizes continuous GPS data into places and learns important places from the place features. Our approach was validated using real user data and shown to have good accuracy when applied in predicting not only important and frequent places, but also important and not so frequent places.", "", "In this paper, we aim to estimate the similarity between users according to their GPS trajectories. Our approach first models a user's GPS trajectories with a semantic location history (SLH), e.g., shopping malls → restaurants → cinemas. Then, we measure the similarity between different users' SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user's interests beyond low-level geographic positions. 
Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. We evaluate our method based on a real-world GPS dataset collected by 109 users in a period of 1 year. As a result, SLH-MTM outperforms the related works [4]." ] }
1702.06451
2590234360
In this paper, we focus on fully automatic traffic surveillance camera calibration, which we use for speed measurement of passing vehicles. We improve over a recent state-of-the-art camera calibration method for traffic surveillance based on two detected vanishing points. More importantly, we propose a novel automatic scene scale inference method. The method is based on matching bounding boxes of rendered 3D models of vehicles with detected bounding boxes in the image. The proposed method can be used from arbitrary viewpoints, since it has no constraints on camera placement. We evaluate our method on the recent comprehensive dataset for speed measurement, BrnoCompSpeed. Experiments show that our automatic camera calibration method by detection of two vanishing points reduces error by 50% (mean distance ratio error reduced from 0.18 to 0.09) compared to the previous state-of-the-art method. We also show that our scene scale inference method is more precise, outperforming both the state-of-the-art automatic calibration method for speed measurement (error reduction by 86% – from 7.98 km/h to 1.10 km/h) and manual calibration (error reduction by 19% – from 1.35 km/h to 1.10 km/h). We also present qualitative results of the proposed automatic camera calibration method on video sequences obtained from real surveillance cameras in various places, and under different lighting conditions (night, dawn, day).
Several authors deal with alignment of 3D models and vehicles and use this technique for gathering data in the context of traffic surveillance. @cite_12 propose to jointly optimize 3D model fitting and fine-grained classification, and @cite_3 align edges formulated as an Active Shape Model. @cite_10 propose the use of synthetic data to train geometry and viewpoint classifiers for 3D model and 2D image alignment. @cite_0 use detected SIFT features to align 3D vehicle models with the vehicle's observation. They use the alignment mainly to overcome vehicle appearance variation under different viewpoints. However, in our case, as the precise viewpoint on the vehicle is known, such alignment does not have to be performed. Hence, we adopt a simpler and more efficient method based on 2D bounding boxes -- simplifying the procedure considerably without sacrificing accuracy.
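The 2D bounding-box matching idea can be sketched as follows: try candidate scene scales, approximate the rendered 3D model's 2D bounding box at each scale, and keep the scale whose box best overlaps the detected one. The linear box-scaling stand-in for rendering and all numbers are illustrative assumptions, not the paper's actual rendering pipeline.

```python
# Hedged sketch of bounding-box-based scale inference: try candidate
# scene scales, approximate the rendered 3D model's 2D bounding box at
# each scale (here faked by scaling a base box about its center), and
# keep the scale whose box best overlaps the detected one by IoU. The
# linear scaling model and all numbers are illustrative assumptions.

def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def scale_box(box, s):
    """Scale a box about its center by factor s."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    hw, hh = (box[2] - box[0]) * s / 2, (box[3] - box[1]) * s / 2
    return (cx - hw, cy - hh, cx + hw, cy + hh)

detected = (100, 100, 220, 180)     # detector's 2D box in the image
base_render = (110, 110, 190, 170)  # model's box rendered at unit scale
scales = [0.8 + 0.05 * i for i in range(13)]  # candidates 0.8 .. 1.4
best = max(scales, key=lambda s: iou(scale_box(base_render, s), detected))
```

In a real system the "rendered" box would come from projecting the 3D vehicle model through the calibrated camera at each candidate scale; only the box-overlap selection criterion is the point of this sketch.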
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_10", "@cite_12" ], "mid": [ "2153500651", "2029315852", "2138011018", "196211074" ], "abstract": [ "We present a method for recognizing a vehicle's make and model in a video clip taken from an arbitrary viewpoint. This is an improvement over existing methods which require a front view. In addition, we present a Bayesian approach for establishing accurate correspondences in multiple view geometry.", "We present a new approach for recognizing the make and model of a car from a single image. While most previous methods are restricted to fixed or limited viewpoints, our system is able to verify a car's make and model from an arbitrary view. Our model consists of 3D space curves obtained by backprojecting image curves onto silhouette-based visual hulls and then refining them using three-view curve matching. These 3D curves are then matched to 2D image curves using a 3D view-based alignment technique. We present two different methods for estimating the pose of a car, which we then use to initialize the 3D curve matching. Our approach is able to verify the exact make and model of a car over a wide range of viewpoints in cluttered scenes.", "While 3D object representations are being revived in the context of multi-view object class detection and scene understanding, they have not yet attained wide-spread use in fine-grained categorization. State-of-the-art approaches achieve remarkable performance when training data is plentiful, but they are typically tied to flat, 2D representations that model objects as a collection of unconnected views, limiting their ability to generalize across viewpoints. In this paper, we therefore lift two state-of-the-art 2D object representations to 3D, on the level of both local feature appearance and location. In extensive experiments on existing and newly proposed datasets, we show our 3D object representations outperform their state-of-the-art 2D counterparts for fine-grained categorization and demonstrate their efficacy for estimating 3D geometry from images via ultra-wide baseline matching and 3D reconstruction.", "3D object modeling and fine-grained classification are often treated as separate tasks. We propose to optimize 3D model fitting and fine-grained classification jointly. Detailed 3D object representations encode more information (e.g., precise part locations and viewpoint) than traditional 2D-based approaches, and can therefore improve fine-grained classification performance. Meanwhile, the predicted class label can also improve 3D model fitting accuracy, e.g., by providing more detailed class-specific shape models. We evaluate our method on a new fine-grained 3D car dataset (FG3DCar), demonstrating our method outperforms several state-of-the-art approaches. Furthermore, we also conduct a series of analyses to explore the dependence between fine-grained classification performance and 3D models." ] }
1702.06451
2590234360
Abstract In this paper, we focus on fully automatic traffic surveillance camera calibration, which we use for speed measurement of passing vehicles. We improve over a recent state-of-the-art camera calibration method for traffic surveillance based on two detected vanishing points. More importantly, we propose a novel automatic scene scale inference method. The method is based on matching bounding boxes of rendered 3D models of vehicles with detected bounding boxes in the image. The proposed method can be used from arbitrary viewpoints, since it has no constraints on camera placement. We evaluate our method on the recent comprehensive dataset for speed measurement BrnoCompSpeed. Experiments show that our automatic camera calibration method by detection of two vanishing points reduces error by 50% (mean distance ratio error reduced from 0.18 to 0.09) compared to the previous state-of-the-art method. We also show that our scene scale inference method is more precise, outperforming both the state-of-the-art automatic calibration method for speed measurement (error reduction by 86% – 7.98 km/h to 1.10 km/h) and manual calibration (error reduction by 19% – 1.35 km/h to 1.10 km/h). We also present qualitative results of the proposed automatic camera calibration method on video sequences obtained from real surveillance cameras in various places, and under different lighting conditions (night, dawn, day).
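The two-vanishing-point calibration mentioned above builds on a classical constraint: assuming square pixels and a known principal point p, the back-projected rays of two vanishing points u, v of orthogonal world directions must themselves be orthogonal, which fixes the focal length via f^2 = -(u - p) . (v - p). A minimal sketch of that relation (our own illustration, not the paper's code):

```python
import math

def focal_from_vps(u, v, pp):
    """Focal length from two vanishing points of orthogonal world directions,
    assuming square pixels and a known principal point pp:
    (u - pp) . (v - pp) + f^2 = 0."""
    dot = (u[0] - pp[0]) * (v[0] - pp[0]) + (u[1] - pp[1]) * (v[1] - pp[1])
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)

# Synthetic check: both VPs were generated with f = 1000 px and
# principal point (960, 540), so the relation recovers f exactly.
f = focal_from_vps((1960, 540), (-40, 540), (960, 540))
print(round(f))  # 1000
```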
When it comes to camera calibration in general, various approaches exist. The widely used method by @cite_11 uses a calibration checkerboard to obtain intrinsic and extrinsic camera parameters (relative to the checkerboard). @cite_8 use controlled panning or tilting with stereo matching to calibrate the camera. Correspondences of lines and points are used by @cite_4 . @cite_1 focus on automatic camera calibration for tennis videos from detected tennis court lines.
{ "cite_N": [ "@cite_1", "@cite_8", "@cite_4", "@cite_11" ], "mid": [ "2044316123", "1531492214", "2048382330", "2167667767" ], "abstract": [ "This paper presents an original algorithm to automatically acquire accurate camera calibration from broadcast tennis video (BTV) as well as demonstrates two of its many applications. Accurate camera calibration from BTV is challenging because the frame-data of BTV is often heavily distorted and full of errors, resulting in wildly fluctuating camera parameters. To meet this challenge, we propose a frame grouping technique, which is based on the observation that many frames in BTV possess the same camera viewpoint. Leveraging on this fact, our algorithm groups frames according to the camera viewpoints. We then perform a group-wise data analysis to obtain a more stable estimate of the camera parameters. Recognizing the fact that some of these parameters do vary somewhat even if they have similar camera viewpoint, we further employ a Hough-like search to tune such parameters, minimizing the reprojection disparity. This two-tiered process gains stability in the estimates of the camera parameters, and yet ensures good match between the model and the reprojected camera view via the tuning step. To demonstrate the utility of such stable calibration, we apply the camera matrix acquired to two applications: (a) 3D virtual content insertion; and (b) tennis-ball detection and tracking. The experimental results show that our algorithm is able to acquire accurate camera matrix and the two applications have very good performances.", "We propose a novel approach to fine-grained image classification in which instances from different classes share common parts but have wide variation in shape and appearance. We use dog breed identification as a test case to show that extracting corresponding parts improves classification performance. This domain is especially challenging since the appearance of corresponding parts can vary dramatically, e.g., the faces of bulldogs and beagles are very different. To find accurate correspondences, we build exemplar-based geometric and appearance models of dog breeds and their face parts. Part correspondence allows us to extract and compare descriptors in like image locations. Our approach also features a hierarchy of parts (e.g., face and eyes) and breed-specific part localization. We achieve 67% recognition rate on a large real-world dataset including 133 dog breeds and 8,351 images, and experimental results show that accurate part localization significantly increases classification performance compared to state-of-the-art approaches.", "We present a new method for solving the problem of camera pose and calibration from a limited number of correspondences between noisy 2D and 3D features. We show that the probabilistic estimation problem can be expressed as a partially linear problem, where point and line correspondences are mixed using a common formulation. Our Sampling-Solving algorithm enables to robustly estimate the parameters and evaluate the probability distribution of the estimated parameters. It solves the problem of pose estimation with unknown focal length using a minimum of only four correspondences (five if the principal point is also unknown). To our knowledge, this is the first calibration method using so few correspondences of both points and lines. Experimental results on minimal data sets show that the algorithm is very robust to Gaussian noise. Experimental comparisons show that our method is much more stable than existing camera calibration methods for small data sets. Finally, some tests show the potential of global uncertainty estimates on real data sets.", "We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use." ] }
1702.06331
2592591519
Internet of Things (IoT) has accelerated the deployment of millions of sensors at the edge of the network, through Smart City infrastructure and lifestyle devices. Cloud computing platforms are often tasked with handling these large volumes and fast streams of data from the edge. Recently, Fog computing has emerged as a concept for low-latency and resource-rich processing of these observation streams, to complement Edge and Cloud computing. In this paper, we review various dimensions of system architecture, application characteristics and platform abstractions that are manifest in this Edge, Fog and Cloud eco-system. We highlight novel capabilities of the Edge and Fog layers, such as physical and application mobility, privacy sensitivity, and a nascent runtime environment. IoT application case studies based on first-hand experiences across diverse domains drive this categorization. We also highlight the gap between the potential and the reality of Fog computing, and identify challenges that need to be overcome for the solution to be sustainable. Taken together, our article can help platform and application developers bridge the gap that remains in making Fog computing viable.
@cite_30 attempt to define Fog computing, but view it through the narrow prism of network management and connectivity. They also briefly tabulate various features and challenges. Our interest in this paper is from the application, platform and middleware perspective. We further consider the system features and the applications that benefit from or need Fog.
{ "cite_N": [ "@cite_30" ], "mid": [ "2154126105" ], "abstract": [ "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation." ] }
Some early research investigates platform and application models for Fog computing. @cite_10 propose a programming model for composing applications that run across the mobile (Edge), Fog and Cloud layers as a Platform as a Service (PaaS). They offer a multi-way, 3-level tree model in which the computation is rooted in the Cloud, resources are elastically acquired in the Cloud and Fog layers, and communication is possible between Cloud and Fog, or Fog and Edge. A strictly hierarchical model, while simple, limits the flexibility of application composition. Their example applications do not consider a role for the Cloud either, though their APIs support it; this degenerates to a client-server model between the edges and their Fog parent. Further, horizontal interactions among edge devices should be actively exploited as well, rather than only vertical interactions with the Fog layer. Lastly, while mobility of the edge is discussed, the model does not consider that the Fog can be mobile as well.
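The strict 3-level tree implied by this model can be made concrete with a small sketch: a Cloud root, Fog children, and Edge leaves, with messages flowing only along parent-child links. Class and method names here are our own illustration, not Mobile Fog's API.

```python
class Node:
    """One node of a strictly hierarchical Cloud / Fog / Edge tree."""

    def __init__(self, name, parent=None):
        self.name, self.parent, self.children, self.inbox = name, parent, [], []
        if parent is not None:
            parent.children.append(self)

    def send_up(self, msg):
        # Vertical-only communication: an Edge node reaches the Cloud only
        # through its Fog parent -- the client-server degeneration noted above.
        if self.parent is not None:
            self.parent.inbox.append((self.name, msg))

cloud = Node("cloud")
fog = Node("fog-1", parent=cloud)
edge = Node("cam-7", parent=fog)
edge.send_up("detection")   # lands at the Fog, not the Cloud
fog.send_up("aggregate")
print(cloud.inbox)  # [('fog-1', 'aggregate')]
```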
{ "cite_N": [ "@cite_10" ], "mid": [ "2056959789" ], "abstract": [ "The ubiquitous deployment of mobile and sensor devices is creating a new environment, namely the Internet of Things(IoT), that enables a wide range of future Internet applications. In this work, we present Mobile Fog, a high level programming model for the future Internet applications that are geospatially distributed, large-scale, and latency-sensitive. We analyze use cases for the programming model with camera network and connected vehicle applications to show the efficacy of Mobile Fog. We also evaluate application performance through simulation." ] }
The role of virtualization in enabling Cloud computing is discussed in @cite_29, and they see a similar role for it in Fog computing as well. They conceive of a VM, encapsulating all necessary dependencies of an edge application or user, hosted on a Cloudlet within one hop of the edge; the VM moves with the edge user so as to remain at one-hop distance. This virtualization architecture should expose APIs for developers to offload data and processing, synchronize data among replicas, discover Cloudlet resources, and migrate VMs across Cloudlets. However, we take a broader platform view and discuss the possible architectural designs for the Fog.
{ "cite_N": [ "@cite_29" ], "mid": [ "2295874103" ], "abstract": [ "Handoff mechanisms allow mobile users to move across multiple wireless access points while maintaining their voice and or data sessions. A traditional handoff process is concerned with smoothly transferring a mobile device session from its current access point (or cell) to a target access point (or cell). These handoff characteristics are sufficient for voice calls and background data transfers, however nowadays many mobile applications are heavily based on data and processing capabilities from the cloud. Such applications, especially those that require greater interactivity, often demand not only a smooth session transfer, but also the maintenance of quality of service requirements that impact a user's experience. In this context, the Fog Computing paradigm arises to overcome delays encountered when applications need low latency to access data or offload processing to the cloud. Fog computing introduces a distributed cloud layer, composed of cloudlets (i.e., \"small clouds\" with lower computational capacity), between the user and the cloud. Cloudlets allow low latency access to data or processing capabilities, which can be accomplished by offering a VM to the user. An overview of Fog computing is first providing, relating it to general concepts in Cloud-based systems, followed by a general architecture to support virtual machine migration in this emerging paradigm -- discussing both the benefits and challenges associated with such migration." ] }
Other literature examines specific applications that benefit from the Fog layer. @cite_1 discuss the role of Fog computing for IoT, since Edge computing alone is inadequate for many IoT applications; Fog helps coordinate distributed edge devices and uses Cloud resources. They consider sense-and-actuate and stream processing as the two programming models, but we argue there can be more diverse application composition and coordination models. @cite_24 discuss the need for Fog computing in real-time applications such as sensing of gas pipelines, smart agriculture, and control systems inside factories. They consider a hierarchical architecture in which data is analyzed and processed at one level and then sent to the higher level for further aggregation and analysis; other possible architectural designs and types of applications are missing. Similarly, @cite_12 discuss the need for Fog computing in smart city applications, and further examine some of the privacy and security issues that arise in Fog computing. What is lacking in these works is a broader examination of application characteristics, rather than examples that motivate the Fog, which we address.
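The level-wise processing described for the hierarchical architecture can be sketched as a two-stage aggregation, where the Fog reduces raw readings and only compact summaries travel upward. This is illustrative code under our own naming, not an implementation from the cited works.

```python
def fog_summarise(readings):
    """Fog level: reduce raw sensor readings to a compact summary."""
    return {"n": len(readings), "mean": sum(readings) / len(readings)}

def cloud_aggregate(summaries):
    """Cloud level: combine Fog summaries without ever seeing raw data."""
    n = sum(s["n"] for s in summaries)
    mean = sum(s["mean"] * s["n"] for s in summaries) / n
    return {"n": n, "mean": mean}

s1 = fog_summarise([1.0, 2.0, 3.0])  # e.g. one pipeline segment's sensors
s2 = fog_summarise([5.0])
print(cloud_aggregate([s1, s2]))  # {'n': 4, 'mean': 2.75}
```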
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_12" ], "mid": [ "2013317495", "2511933885", "2035203720" ], "abstract": [ "This paper examines some of the most promising and challenging scenarios in IoT, and shows why current compute and storage models confined to data centers will not be able to meet the requirements of many of the applications foreseen for those scenarios. Our analysis is particularly centered on three interrelated requirements: 1) mobility; 2) reliable control and actuation; and 3) scalability, especially, in IoT scenarios that span large geographical areas and require real-time decisions based on data analytics. Based on our analysis, we expose the reasons why Fog Computing is the natural platform for IoT, and discuss the unavoidable interplay of the Fog and the Cloud in the coming years. In the process, we review some of the technologies that will require considerable advances in order to support the applications that the IoT market will demand.", "The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.", "Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. In this article, we elaborate the motivation and advantages of Fog computing, and analyse its applications in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks. We discuss the state-of-the-art of Fog computing and similar work under the same umbrella. Security and privacy issues are further disclosed according to current Fog computing paradigm. As an example, we study a typical attack, man-in-the-middle attack, for the discussion of security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on Fog device." ] }
@cite_4 attempt an effort similar to ours, but limit it to mobile devices rather than edge devices at large. They recognize that the Fog can be static or mobile, similar to entertainment systems in vehicles. They highlight that the Fog, unlike the Cloud, can deliver location-aware content, but this reduces the value of a Fog to that of a CDN, only closer to the edge; moreover, many Cloud services are already capable of offering location-sensitive information using network geolocation or device GPS. Rather, we argue that physical proximity also offers a certain degree of trust by the client of the Fog, and the ability to host rich interactive services, not just content. Their discussion of research problems dwells more on the networking and communication between the Mobile Cloud and Fog, and between Fogs, than on applications and middleware.
{ "cite_N": [ "@cite_4" ], "mid": [ "1486457878" ], "abstract": [ "With smart devices, particular smartphones, becoming our everyday companions, the ubiquitous mobile Internet and computing applications pervade people daily lives. With the surge demand on high-quality mobile services at anywhere, how to address the ubiquitous user demand and accommodate the explosive growth of mobile traffics is the key issue of the next generation mobile networks. The Fog computing is a promising solution towards this goal. Fog computing extends cloud computing by providing virtualized resources and engaged location-based services to the edge of the mobile networks so as to better serve mobile traffics. Therefore, Fog computing is a lubricant of the combination of cloud computing and mobile applications. In this article, we outline the main features of Fog computing and describe its concept, architecture and design goals. Lastly, we discuss some of the future research issues from the networking perspective." ] }
@cite_28 offer a brief survey of Fog computing concepts, in which they include both resource-poor devices (which we refer to as the edge) and resource-rich Cloudlets and Cisco's IOx. They highlight augmented reality, content delivery and mobile data analytics as three motivating applications for reducing latency and bandwidth costs. They do not offer any analysis of Fog computing architecture or platform dimensions. As before, network management using Network Virtualization and SDN appears among the technical issues to tackle. They identify QoS, programming APIs, resource provisioning, security and billing as aspects to address.
{ "cite_N": [ "@cite_28" ], "mid": [ "2045371716" ], "abstract": [ "Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of network, while cloud computing are more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as direction of potential future work, in related techniques that need to be considered in the context of fog computing." ] }
@cite_15 present a survey and discuss a hierarchical Fog-based architecture. We instead discuss several alternative architectural and coordination designs rather than a one-size-fits-all approach. They too consider IoT applications, but fail to present a taxonomy of the application characteristics that can benefit from Fog, as we do.
{ "cite_N": [ "@cite_15" ], "mid": [ "2025725145" ], "abstract": [ "Cloud services to smart things face latency and intermittent connectivity issues. Fog devices are positioned between cloud and smart devices. Their high speed Internet connection to the cloud, and physical proximity to users, enable real time applications and location based services, and mobility support. Cisco promoted fog computing concept in the areas of smart grid, connected vehicles and wireless sensor and actuator networks. This survey article expands this concept to the decentralized smart building control, recognizes cloudlets as special case of fog computing, and relates it to the software defined networks (SDN) scenarios. Our literature review identifies a handful number of articles. Cooperative data scheduling and adaptive traffic light problems in SDN based vehicular networks, and demand response management in macro station and micro-grid based smart grids are discussed. Security, privacy and trust issues, control information overhead and network control policies do not seem to be studied so far within the fog computing concept." ] }
1702.06291
2953085354
One of the major challenges of model-free visual tracking problem has been the difficulty originating from the unpredictable and drastic changes in the appearance of objects we target to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to the changes in the appearance. Despite the success of these methods however, inaccurate and erroneous updates of the appearance model result in a tracker drift. In this paper, we introduce a novel real-time visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the appropriate template for tracking a given frame. The template selection strategy is self-learned by utilizing a simple policy gradient method on numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence map based tracking algorithms. The experiment shows that our tracking algorithm runs in real-time speed of 43 fps and the proposed policy network effectively decides the appropriate template for successful visual tracking.
Recently, there have been approaches that utilize deep representations for the visual tracking task. Convolutional neural networks (CNNs) @cite_46 have shown outstanding performance in a wide range of computer vision applications, including image classification @cite_0 , object detection @cite_25 and much more. Their powerful representation capacity has motivated visual tracking approaches such as @cite_35 @cite_20 @cite_10 @cite_39 @cite_49 . @cite_35 were the first to introduce deep representation learning to the visual tracking problem; they build a stacked denoising autoencoder and utilize its intermediate representation for tracking. In @cite_20 , hierarchical correlation filters learned on the feature maps of the VGG-19 network @cite_40 are efficiently integrated. @cite_49 also utilize feature maps generated from the VGG network to obtain multi-level information. @cite_39 use the structure of the low-level kernels of the VGG-M network @cite_40 , trained on visual tracking datasets, to obtain a multi-domain representation for a robust target appearance model.
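To make the correlation-filter idea behind these trackers concrete, the following is a minimal single-channel MOSSE-style filter solved in the Fourier domain. In the cited work the input would be a CNN feature map rather than a raw patch, and one filter is learned per layer; all names here are our own sketch, not the papers' code.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    """Learn a single-channel correlation filter so that correlating it with
    the training patch yields a Gaussian response peaked at the patch centre;
    lam is a small regulariser on the spectrum."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)  # conjugate filter H*

def respond(patch, H_conj):
    """Correlation response of a search patch under the learned filter."""
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))

rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
resp = respond(patch, train_filter(patch))
peak = tuple(int(i) for i in np.unravel_index(np.argmax(resp), resp.shape))
print(peak)  # (16, 16): the response peaks at the patch centre
```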
{ "cite_N": [ "@cite_35", "@cite_39", "@cite_0", "@cite_40", "@cite_49", "@cite_46", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2118097920", "1857884451", "", "1686810756", "2211629196", "2310919327", "2470456807", "2953106684", "2214352687" ], "abstract": [ "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).", "We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.", "", "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. 
Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. 
The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a largescale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods." ] }
1702.06291
2953085354
One of the major challenges of the model-free visual tracking problem has been the difficulty originating from unpredictable and drastic changes in the appearance of the objects we aim to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to changes in appearance. Despite the success of these methods, however, inaccurate and erroneous updates of the appearance model result in tracker drift. In this paper, we introduce a novel real-time visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the appropriate template for tracking a given frame. The template selection strategy is self-learned by applying a simple policy gradient method to numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence-map-based tracking algorithms. Experiments show that our tracking algorithm runs at a real-time speed of 43 fps and that the proposed policy network effectively decides the appropriate template for successful visual tracking.
Based on deep representations, two-flow Siamese networks have shown outstanding performance on the stereo matching problem in @cite_51 and the patch-based image matching problem in @cite_4 . Accordingly, approaches that solve the visual tracking problem as a patch matching problem have emerged in @cite_14 @cite_36 @cite_12 @cite_5 . @cite_14 and @cite_5 train Siamese networks on videos to learn a patch similarity matching function that shares an invariant representation. @cite_36 and @cite_12 further expand this notion and propose a more end-to-end approach to similarity matching, where a Siamese architecture localizes an exemplar patch inside a search image using shared convolutional layers. In particular, @cite_36 proposes a fully-convolutional architecture that adopts a cross-correlation layer to obtain invariance to spatial translations inside the search image, significantly lowering the complexity of the training process.
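The cross-correlation layer of @cite_36 can be sketched as sliding the exemplar's feature map over the larger search region's feature map and summing per-channel dot products at each offset. The shapes and the brute-force loop below are illustrative assumptions; the actual tracker computes this as a convolution on GPU.

```python
import numpy as np

def xcorr_response(exemplar, search):
    """Dense similarity map between an exemplar feature map (C, hz, wz)
    and a larger search feature map (C, hx, wx), SiamFC-style."""
    c, hz, wz = exemplar.shape
    _, hx, wx = search.shape
    out = np.zeros((hx - hz + 1, wx - wz + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product between the exemplar and this window of the search map.
            out[i, j] = np.sum(search[:, i:i + hz, j:j + wz] * exemplar)
    return out
```

The argmax of the response map gives the displacement of the target inside the search region, which is how this family of trackers localizes the exemplar patch.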
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_36", "@cite_5", "@cite_51", "@cite_12" ], "mid": [ "2952558221", "1929856797", "2951584184", "2340000481", "", "2424629859" ], "abstract": [ "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed Match-Net, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. 
To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available.", "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object's appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.", "Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. 
Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps.", "", "The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independent approach with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (one is an object image patch, the other is a larger searching image patch), then outputs a response map which predicts how likely and where the object would appear in the search patch. Unlike the object-specific approaches, the YCNN is actually trained to measure the similarity between the two image patches. Thus, this model will not be limited to any specific object. 
Furthermore, the network is end-to-end trained to extract both shallow and deep dedicated convolutional features for visual tracking. And once properly trained, the YCNN can be used to track all kinds of objects without further training and updating. As a result, our algorithm is able to run at a very high speed of 45 frames-per-second. The effectiveness of the proposed algorithm can also be proved by the experiments on two popular data sets: OTB-100 and VOT-2014." ] }
1702.06291
2953085354
One of the major challenges of the model-free visual tracking problem has been the difficulty originating from unpredictable and drastic changes in the appearance of the objects we aim to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to changes in appearance. Despite the success of these methods, however, inaccurate and erroneous updates of the appearance model result in tracker drift. In this paper, we introduce a novel real-time visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the appropriate template for tracking a given frame. The template selection strategy is self-learned by applying a simple policy gradient method to numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence-map-based tracking algorithms. Experiments show that our tracking algorithm runs at a real-time speed of 43 fps and that the proposed policy network effectively decides the appropriate template for successful visual tracking.
However, approaches such as @cite_5 @cite_12 use a naive on-line update strategy that can neither revise erroneous updates nor recover from heavy occlusions. Moreover, @cite_14 and @cite_36 do not update the initial template at all, relying solely on the representation power of the pre-trained CNN. This may be effective for short-term video segments with no distractors, but the tracker can be attracted toward a distractor whose appearance is similar to the target's. Our proposed algorithm aims to solve both problems by utilizing previously seen examples to adapt to the recent appearance of the target and by choosing the most adequate template for localizing the target, ruling out erroneously updated templates.
{ "cite_N": [ "@cite_36", "@cite_5", "@cite_14", "@cite_12" ], "mid": [ "2951584184", "2340000481", "2952558221", "2424629859" ], "abstract": [ "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object's appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.", "Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. 
We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps.", "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. 
In this paper, we propose to track different objects in an object-independent approach with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (one is an object image patch, the other is a larger searching image patch), then outputs a response map which predicts how likely and where the object would appear in the search patch. Unlike the object-specific approaches, the YCNN is actually trained to measure the similarity between the two image patches. Thus, this model will not be limited to any specific object. Furthermore, the network is end-to-end trained to extract both shallow and deep dedicated convolutional features for visual tracking. And once properly trained, the YCNN can be used to track all kinds of objects without further training and updating. As a result, our algorithm is able to run at a very high speed of 45 frames-per-second. The effectiveness of the proposed algorithm can also be proved by the experiments on two popular data sets: OTB-100 and VOT-2014." ] }
1702.06291
2953085354
One of the major challenges of the model-free visual tracking problem has been the difficulty originating from unpredictable and drastic changes in the appearance of the objects we aim to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to changes in appearance. Despite the success of these methods, however, inaccurate and erroneous updates of the appearance model result in tracker drift. In this paper, we introduce a novel real-time visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the appropriate template for tracking a given frame. The template selection strategy is self-learned by applying a simple policy gradient method to numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence-map-based tracking algorithms. Experiments show that our tracking algorithm runs at a real-time speed of 43 fps and that the proposed policy network effectively decides the appropriate template for successful visual tracking.
There have also been recent approaches that apply deep reinforcement learning to visual tracking. @cite_43 trained a policy network to generate actions for state transitions in order to localize the target in a given frame. @cite_37 used YouTube videos to interactively learn a Q-value function that decides whether the tracker should reinitialize, update, or keep tracking with the same appearance model. However, these trackers run at 3 fps and 10 fps, respectively, falling short of real-time performance. Our algorithm runs at a real-time speed of 43 fps while maintaining competitive performance, which we achieve by incorporating more lightweight and optimized structures for the matching and policy networks.
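The policy-gradient idea behind such template-selection strategies can be sketched in a toy form: a softmax policy over candidate templates is updated with REINFORCE so that templates yielding higher tracking reward are chosen more often. The four-template setup and the 0/1 reward below are hypothetical stand-ins for the confidence-map-based reward used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_templates = 4
logits = np.zeros(n_templates)   # policy parameters: one score per template
lr = 0.1                         # learning rate

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for episode in range(500):
    probs = softmax(logits)
    action = rng.choice(n_templates, p=probs)   # sample a template to track with
    reward = 1.0 if action == 0 else 0.0        # pretend template 0 localizes best
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0                  # gradient of log pi(action) w.r.t. logits
    logits += lr * reward * grad_log_pi         # REINFORCE update
```

After training, the probability mass of the policy concentrates on the template that earned reward, which is the behavior a learned selection strategy relies on at test time.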
{ "cite_N": [ "@cite_43", "@cite_37" ], "mid": [ "2738318237", "2739381051" ], "abstract": [ "This paper proposes a novel tracker which is controlled by sequentially pursuing actions learned by deep reinforcement learning. In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve a light computation as well as satisfactory tracking accuracy in both location and scale. The deep network to control actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training is done by utilizing deep reinforcement learning as well as supervised learning. The use of reinforcement learning enables even partially labeled data to be successfully utilized for semi-supervised learning. Through evaluation of the OTB dataset, the proposed tracker is validated to achieve a competitive performance that is three times faster than state-of-the-art, deep network–based trackers. The fast version of the proposed method, which operates in real-time on GPU, outperforms the state-of-the-art real-time trackers.", "We formulate tracking as an online decision-making process, where a tracking agent must follow an object despite ambiguous image frames and a limited computational budget. Crucially, the agent must decide where to look in the upcoming frames, when to reinitialize because it believes the target has been lost, and when to update its appearance model for the tracked object. Such decisions are typically made heuristically. Instead, we propose to learn an optimal decision-making policy by formulating tracking as a partially observable decision-making process (POMDP). We learn policies with deep reinforcement learning algorithms that need supervision (a reward signal) only when the track has gone awry. We demonstrate that sparse rewards allow us to quickly train on massive datasets, several orders of magnitude more than past work. 
Interestingly, by treating the data source of Internet videos as unlimited streams, we both learn and evaluate our trackers in a single, unified computational stream." ] }
1702.06318
2950859579
Food is an integral part of our life and what and how much we eat crucially affects our health. Our food choices largely depend on how we perceive certain characteristics of food, such as whether it is healthy, delicious or if it qualifies as a salad. But these perceptions differ from person to person and one person's "single lettuce leaf" might be another person's "side salad". Studying how food is perceived in relation to what it actually is typically involves a laboratory setup. Here we propose to use recent advances in image recognition to tackle this problem. Concretely, we use data for 1.9 million images from Instagram from the US to look at systematic differences in how a machine would objectively label an image compared to how a human subjectively does. We show that this difference, which we call the "perception gap", relates to a number of health outcomes observed at the county level. To the best of our knowledge, this is the first time that image recognition is being used to study the "misalignment" of how people describe food images vs. what they actually depict.
Killgore and Yurgelun-Todd @cite_20 showed a link between orbitofrontal brain activity elicited by viewing high-calorie or low-calorie food images and the body mass index of the person viewing them. This suggests a relationship between weight status and the responsiveness of the orbitofrontal cortex to rewarding food images.
{ "cite_N": [ "@cite_20" ], "mid": [ "2075366343" ], "abstract": [ "Little is known about the relationship between weight status and reward-related brain activity in normal weight humans.We correlated orbitofrontal and anterior cingulate cortex activity as measured by functional magnetic resonance imaging with body mass index in 13 healthy, normal-weight adult women as they viewed images of high-calorie and low-calorie foods, and dining-related utensils. Body mass index correlated negatively with both cingulate and orbitofrontal activity during high-calorie viewing, negatively with orbitofrontal activity during low-calorie viewing, and positively with orbitofrontal activity during presentations of nonedible utensils.With greater body mass, activity was reduced in brain regions important for evaluating and modifying learned stimulus^ reward associations, suggesting a relationship between weight status and responsiveness of the orbitofrontal cortex to rewarding food images. NeuroReport 16:859^ 863 � c 2005 Lippincott Williams & Wilkins." ] }
1702.06318
2950859579
Food is an integral part of our life and what and how much we eat crucially affects our health. Our food choices largely depend on how we perceive certain characteristics of food, such as whether it is healthy, delicious or if it qualifies as a salad. But these perceptions differ from person to person and one person's "single lettuce leaf" might be another person's "side salad". Studying how food is perceived in relation to what it actually is typically involves a laboratory setup. Here we propose to use recent advances in image recognition to tackle this problem. Concretely, we use data for 1.9 million images from Instagram from the US to look at systematic differences in how a machine would objectively label an image compared to how a human subjectively does. We show that this difference, which we call the "perception gap", relates to a number of health outcomes observed at the county level. To the best of our knowledge, this is the first time that image recognition is being used to study the "misalignment" of how people describe food images vs. what they actually depict.
@cite_39 showed that, after undergoing substantial weight loss, obese subjects demonstrated changes in brain activity elicited by food-related visual cues. Many of these changes in brain areas known to be involved in the regulatory, emotional, and cognitive control of food intake were reversed by leptin injection.
{ "cite_N": [ "@cite_39" ], "mid": [ "2140409590" ], "abstract": [ "Increased hunger and food intake during attempts to maintain weight loss are a critical problem in clinical management of obesity. To determine whether reduced body weight maintenance is accompanied by leptin-sensitive changes in neural activity in brain regions affecting regulatory and hedonic aspects of energy homeostasis, we examined brain region–specific neural activity elicited by food-related visual cues using functional MRI in 6 inpatient obese subjects. Subjects were assessed at their usual weight and, following stabilization at a 10 reduced body weight, while receiving either twice daily subcutaneous injections of leptin or placebo. Following weight loss, there were predictable changes in neural activity, many of which were reversed by leptin, in brain areas known to be involved in the regulatory, emotional, and cognitive control of food intake. Specifically, following weight loss there were leptin-reversible increases in neural activity in response to visual food cues in the brainstem, culmen, parahippocampal gyrus, inferior and middle frontal gyri, middle temporal gyrus, and lingual gyrus. There were also leptin-reversible decreases in activity in response to food cues in the hypothalamus, cingulate gyrus, and middle frontal gyrus. These data are consistent with a model of the weight-reduced state as one of relative leptin deficiency." ] }
1702.06318
2950859579
Food is an integral part of our life and what and how much we eat crucially affects our health. Our food choices largely depend on how we perceive certain characteristics of food, such as whether it is healthy, delicious or if it qualifies as a salad. But these perceptions differ from person to person and one person's "single lettuce leaf" might be another person's "side salad". Studying how food is perceived in relation to what it actually is typically involves a laboratory setup. Here we propose to use recent advances in image recognition to tackle this problem. Concretely, we use data for 1.9 million images from Instagram from the US to look at systematic differences in how a machine would objectively label an image compared to how a human subjectively does. We show that this difference, which we call the "perception gap", relates to a number of health outcomes observed at the county level. To the best of our knowledge, this is the first time that image recognition is being used to study the "misalignment" of how people describe food images vs. what they actually depict.
@cite_44 examined the relationship between goal-directed valuations of food images by both lean and overweight people in an MRI scanner and their food consumption at a subsequent all-you-can-eat buffet. They observed that lean and overweight participants showed similar patterns of value-based neural responses to the health and taste attributes of foods, yet the overweight participants consumed a greater proportion of unhealthy foods. This suggests that the difference may lie not in how food is valued in the abstract, but in how the actual presence of food overrides prior value-based decision-making.
{ "cite_N": [ "@cite_44" ], "mid": [ "2305315319" ], "abstract": [ "To develop more ecologically valid models of the neurobiology of obesity, it is critical to determine how the neural processes involved in food-related decision-making translate into real-world eating behaviors. We examined the relationship between goal-directed valuations of food images in the MRI scanner and food consumption at a subsequent ad libitum buffet meal. We observed that 23 lean and 40 overweight human participants showed similar patterns of value-based neural responses to health and taste attributes of foods. In both groups, these value-based responses in the ventromedial PFC were predictive of subsequent consumption at the buffet. However, overweight participants consumed a greater proportion of unhealthy foods. This was not predicted by in-scanner choices or neural response. Moreover, in overweight participants alone, impulsivity scores predicted greater consumption of unhealthy foods. Overall, our findings suggest that, while the hypothetical valuation of the health of foods is predictive of eating behavior in both lean and overweight people, it is only the real-world food choices that clearly distinguish them." ] }
Whereas the three studies discussed above examined perception at the level of brain activity, our own work looks only at perception as reported in the form of hashtags. This relates, indirectly, to a review by @cite_51 of studies on the link between the (self-declared) palatability of foods, i.e., their positive sensory perception, and food intake. All of the reviewed studies showed that increased palatability leads to increased intake. We study a similar aspect by looking at regional differences in what is tagged as #delicious and how this relates to obesity rates and other health outcomes.
{ "cite_N": [ "@cite_51" ], "mid": [ "1971280724" ], "abstract": [ "Effect of sensory perception of foods on appetite and food intake: a review of studies on humans" ] }
Closer to the realm of social media is the concept of "food porn". @cite_21 discuss the danger that our growing exposure to such beautifully presented food images has detrimental consequences, in particular on a hungry brain. They introduce the notion of "visual hunger", i.e., the desire to view beautiful images of food.
{ "cite_N": [ "@cite_21" ], "mid": [ "1859493718" ], "abstract": [ "Abstract One of the brain’s key roles is to facilitate foraging and feeding. It is presumably no coincidence, then, that the mouth is situated close to the brain in most animal species. However, the environments in which our brains evolved were far less plentiful in terms of the availability of food resources (i.e., nutriments) than is the case for those of us living in the Western world today. The growing obesity crisis is but one of the signs that humankind is not doing such a great job in terms of optimizing the contemporary food landscape. While the blame here is often put at the doors of the global food companies – offering addictive foods, designed to hit ‘the bliss point’ in terms of the pleasurable ingredients (sugar, salt, fat, etc.), and the ease of access to calorie-rich foods – we wonder whether there aren’t other implicit cues in our environments that might be triggering hunger more often than is perhaps good for us. Here, we take a closer look at the potential role of vision; Specifically, we question the impact that our increasing exposure to images of desirable foods (what is often labelled ‘food porn’, or ‘gastroporn’) via digital interfaces might be having, and ask whether it might not inadvertently be exacerbating our desire for food (what we call ‘visual hunger’). We review the growing body of cognitive neuroscience research demonstrating the profound effect that viewing such images can have on neural activity, physiological and psychological responses, and visual attention, especially in the ‘hungry’ brain." ] }
Recent studies have shown that large-scale, real-time, non-intrusive monitoring can be done using social media to obtain aggregate statistics about the health and well-being of a population @cite_8 @cite_31 @cite_50 . Twitter in particular has been widely used in studies on public health @cite_46 @cite_26 @cite_27 @cite_28 , due to its vast volume of data and its easy availability.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_8", "@cite_28", "@cite_27", "@cite_50", "@cite_46" ], "mid": [ "2102742655", "1630939116", "1969894105", "", "2164912194", "1615870545", "201361503" ], "abstract": [ "We present a review of pharmacovigilance techniques from social media (SM) data. Our review discusses twenty-two studies, comparing them across various axes. We present a possible pathway for automated pharmacovigilance research from SM. Objective: Automatic monitoring of Adverse Drug Reactions (ADRs), defined as adverse patient outcomes caused by medications, is a challenging research problem that is currently receiving significant attention from the medical informatics community. In recent years, user-posted data on social media, primarily due to its sheer volume, has become a useful resource for ADR monitoring. Research using social media data has progressed using various data sources and techniques, making it difficult to compare distinct systems and their performances. In this paper, we perform a methodical review to characterize the different approaches to ADR detection extraction from social media, and their applicability to pharmacovigilance. In addition, we present a potential systematic pathway to ADR monitoring from social media. Methods: We identified studies describing approaches for ADR detection from social media from the Medline, Embase, Scopus and Web of Science databases, and the Google Scholar search engine. Studies that met our inclusion criteria were those that attempted to extract ADR information posted by users on any publicly available social media platform. We categorized the studies according to different characteristics such as primary ADR detection approach, size of corpus, data source(s), availability, and evaluation criteria. Results: Twenty-two studies met our inclusion criteria, with fifteen (68%) published within the last two years. 
However, publicly available annotated data is still scarce, and we found only six studies that made the annotations used publicly available, making system performance comparisons difficult. In terms of algorithms, supervised classification techniques to detect posts containing ADR mentions, and lexicon-based approaches for extraction of ADR mentions from texts have been the most popular. Conclusion: Our review suggests that interest in the utilization of the vast amounts of available social media data for ADR monitoring is increasing. In terms of sources, both health-related and general social media data have been used for ADR detection; while health-related sources tend to contain higher proportions of relevant data, the volume of data from general social media websites is significantly higher. There is still very limited amount of annotated data publicly available, and, as indicated by the promising results obtained by recent supervised learning approaches, there is a strong need to make such data available to the research community.", "Public health-related topics are difficult to identify in large conversational datasets like Twitter. This study examines how to model and discover public health topics and themes in tweets. Tobacco use is chosen as a test case to demonstrate the effectiveness of topic modeling via LDA across a large, representational dataset from the United States, as well as across a smaller subset that was seeded by tobacco-related queries. Topic modeling across the large dataset uncovers several public health-related topics, although tobacco is not detected by this method. However, topic modeling across the tobacco subset provides valuable insight about tobacco use in the United States. 
The methods used in this paper provide a possible toolset for public health researchers and practitioners to better understand public health problems through large datasets of conversational data.", "Recent work in machine learning and natural language processing has studied the health content of tweets and demonstrated the potential for extracting useful public health information from their aggregation. This article examines the types of health topics discussed on Twitter, and how tweets can both augment existing public health capabilities and enable new ones. The author also discusses key challenges that researchers must address to deliver high-quality tools to the public health community.", "", "Traditional public health surveillance requires regular clinical reports and considerable effort by health professionals to analyze data. Therefore, a low cost alternative is of great practical use. As a platform used by over 500 million users worldwide to publish their ideas about many topics, including health conditions, Twitter provides researchers the freshest source of public health conditions on a global scale. We propose a framework for tracking public health condition trends via Twitter. The basic idea is to use frequent term sets from highly purified health-related tweets as queries into a Wikipedia article index -- treating the retrieval of medically-related articles as an indicator of a health-related condition. By observing fluctuations in frequent term sets and in turn medically-related articles over a series of time slices of tweets, we detect shifts in public health conditions and concerns over time. 
Compared to existing approaches, our framework provides a general a priori identification of emerging public health conditions rather than a specific illness (e.g., influenza) as is commonly done.", "The exponentially increasing stream of real time big data produced by Web 2.0 Internet and mobile networks created radically new interdisciplinary challenges for public health and computer science. Traditional public health disease surveillance systems have to utilize the potential created by new situation-aware realtime signals from social media, mobile sensor networks and citizens' participatory surveillance systems providing invaluable free realtime event-based signals for epidemic intelligence. However, rather than improving existing isolated systems, an integrated solution bringing together existing epidemic intelligence systems scanning news media (e.g., GPHIN, MedISys) with real-time social media intelligence (e.g., Twitter, participatory systems) is required to substantially improve and automate early warning, outbreak detection and preparedness operations. However, automatic monitoring and novel verification methods for these multichannel event-based real time signals has to be integrated with traditional case-based surveillance systems from microbiological laboratories and clinical reporting. Finally, the system needs effectively support coordination of epidemiological teams, risk communication with citizens and implementation of prevention measures. However, from computational perspective, signal detection, analysis and verification of very high noise realtime big data provide a number of interdisciplinary challenges for computer science. Novel approaches integrating current systems into a digital public health dashboard can enhance signal verification methods and automate the processes assisting public health experts in providing better informed and more timely response. 
In this paper, we describe the roadmap to such a system, components of an integrated public health surveillance services and computing challenges to be resolved to create an integrated real world solution.", "Analyzing user messages in social media can measure different population characteristics, including public health measures. For example, recent work has correlated Twitter messages with influenza rates in the United States; but this has largely been the extent of mining Twitter for public health. In this work, we consider a broader range of public health applications for Twitter. We apply the recently introduced Ailment Topic Aspect Model to over one and a half million health related tweets and discover mentions of over a dozen ailments, including allergies, obesity and insomnia. We introduce extensions to incorporate prior knowledge into this model and apply it to several tasks: tracking illnesses over times (syndromic surveillance), measuring behavioral risk factors, localizing illnesses by geographic region, and analyzing symptoms and medication usage. We show quantitative correlations with public health data and qualitative evaluations of model output. Our results suggest that Twitter has broad applicability for public health research." ] }
Connecting the previous discussion on the perception of food and food images to public health analysis via social media is work by @cite_7 . They study data from 10 million images with the hashtag #foodporn and find that, globally, sugary foods such as chocolate or cake are most commonly labeled this way. However, they also report a strong relationship (r=0.51) between GDP per capita and the healthiness associated with #foodporn.
{ "cite_N": [ "@cite_7" ], "mid": [ "2292233400" ], "abstract": [ "What food is so good as to be considered pornographic? Worldwide, the popular #foodporn hashtag has been used to share appetizing pictures of peoples' favorite culinary experiences. But social scientists ask whether #foodporn promotes an unhealthy relationship with food, as pornography would contribute to an unrealistic view of sexuality. In this study, we examine nearly 10 million Instagram posts by 1.7 million users worldwide. An overwhelming (and uniform across the nations) obsession with chocolate and cake shows the domination of sugary dessert over local cuisines. Yet, we find encouraging traits in the association of emotion and health-related topics with #foodporn, suggesting food can serve as motivation for a healthy lifestyle. Social approval also favors the healthy posts, with users posting with healthy hashtags having an average of 1,000 more followers than those with unhealthy ones. Finally, we perform a demographic analysis which shows nation-wide trends of behavior, such as a strong relationship (r=0.51) between the GDP per capita and the attention to healthiness of their favorite food. Our results expose a new facet of food \"pornography\", revealing potential avenues for utilizing this precarious notion for promoting healthy lifestyles." ] }
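The r=0.51 figure quoted above is a Pearson correlation computed across regions. As a hedged illustration of how such a statistic is obtained (all values below are invented for the example, not data from the study):

```python
# Hedged sketch: Pearson correlation between per-region GDP per capita and a
# per-region "healthiness association" score for #foodporn posts.
# All values below are invented for illustration only.
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gdp_per_capita = [60000, 45000, 30000, 15000, 8000]  # hypothetical regions
healthiness = [0.42, 0.35, 0.30, 0.22, 0.18]         # hypothetical scores

r = pearson(gdp_per_capita, healthiness)
```

With per-region features and outcomes in hand, the same function applies to any pair of aggregate statistics discussed in this section.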
In the work most similar to ours, @cite_30 use image annotations obtained by Imagga (http://imagga.com/auto-tagging-demo) to explore the value of machine tags for modeling public health variation. They find that human annotations generally provide better signals, but report encouraging results for modeling alcohol abuse using machine annotations. Furthermore, due to their reliance on a third-party system, they could only obtain annotations for a total of 200k images. Whereas our work focuses on the differences in how machines and humans annotate the same images, their main focus is on building models for public health monitoring.
{ "cite_N": [ "@cite_30" ], "mid": [ "2950919774" ], "abstract": [ "Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for \"lifestyle\" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as \"liquid\" and \"glass\" yield better models. This hints at the potential of using machine-generated tags to study substance abuse." ] }
Previously, Culotta @cite_41 and @cite_15 used Twitter in conjunction with psychometric lexicons such as LIWC and PERMA to predict county-level health statistics such as obesity, teen pregnancy and diabetes. Their overall approach of building regression models for regional variations in health statistics is similar to ours. @cite_11 make use of Twitter data to identify health-related topics and use these to characterize the discussion of health online. @cite_22 use Foursquare and Instagram images to study food consumption patterns in the US, and find a correlation between obesity and the prominence of fast food restaurants.
{ "cite_N": [ "@cite_41", "@cite_15", "@cite_22", "@cite_11" ], "mid": [ "2104925568", "2001488574", "1974289028", "2079591709" ], "abstract": [ "Understanding the relationships among environment, behavior, and health is a core concern of public health researchers. While a number of recent studies have investigated the use of social media to track infectious diseases such as influenza, little work has been done to determine if other health concerns can be inferred. In this paper, we present a large-scale study of 27 health-related statistics, including obesity, health insurance coverage, access to healthy foods, and teen birth rates. We perform a linguistic analysis of the Twitter activity in the top 100 most populous counties in the U.S., and find a significant correlation with 6 of the 27 health statistics. When compared to traditional models based on demographic variables alone, we find that augmenting models with Twitter-derived information improves predictive accuracy for 20 of 27 statistics, suggesting that this new methodology can complement existing approaches.", "Food is an integral part of our lives, cultures, and well-being, and is of major interest to public health. The collection of daily nutritional data involves keeping detailed diaries or periodic surveys and is limited in scope and reach. Alternatively, social media is infamous for allowing its users to update the world on the minutiae of their daily lives, including their eating habits. In this work we examine the potential of Twitter to provide insight into US-wide dietary choices by linking the tweeted dining experiences of 210K users to their interests, demographics, and social networks. We validate our approach by relating the caloric values of the foods mentioned in the tweets to the state-wide obesity rates, achieving a Pearson correlation of 0.77 across the 50 US states and the District of Columbia. 
We then build a model to predict county-wide obesity and diabetes statistics based on a combination of demographic variables and food names mentioned on Twitter. Our results show significant improvement over previous CHI research (Culotta 2014). We further link this data to societal and economic factors, such as education and income, illustrating that areas with higher education levels tweet about food that is significantly less caloric. Finally, we address the somewhat controversial issue of the social nature of obesity (Christakis & Fowler 2007) by inducing two social networks using mentions and reciprocal following relationships.", "We present a large-scale analysis of Instagram pictures taken at 164,753 restaurants by millions of users. Motivated by the obesity epidemic in the United States, our aim is three-fold: (i) to assess the relationship between fast food and chain restaurants and obesity, (ii) to better understand people's thoughts on and perceptions of their daily dining experiences, and (iii) to reveal the nature of social reinforcement and approval in the context of dietary health on social media. When we correlate the prominence of fast food restaurants in US counties with obesity, we find the Foursquare data to show a greater correlation at 0.424 than official survey data from the County Health Rankings would show. Our analysis further reveals a relationship between small businesses and local foods with better dietary health, with such restaurants getting more attention in areas of lower obesity. However, even in such areas, social approval favors the unhealthy foods high in sugar, with donut shops producing the most liked photos. 
Thus, the dietary landscape our study reveals is a complex ecosystem, with fast food playing a role alongside social interactions and personal perceptions, which often may be at odds.", "By aggregating self-reported health statuses across millions of users, we seek to characterize the variety of health information discussed in Twitter. We describe a topic modeling framework for discovering health topics in Twitter, a social media website. This is an exploratory approach with the goal of understanding what health topics are commonly discussed in social media. This paper describes in detail a statistical topic model created for this purpose, the Ailment Topic Aspect Model (ATAM), as well as our system for filtering general Twitter data based on health keywords and supervised classification. We show how ATAM and other topic models can automatically infer health topics in 144 million Twitter messages from 2011 to 2013. ATAM discovered 13 coherent clusters of Twitter messages, some of which correlate with seasonal influenza (r = 0.689) and allergies (r = 0.810) temporal surveillance data, as well as exercise (r = .534) and obesity (r = −.631) related geographic survey data in the United States. These results demonstrate that it is possible to automatically discover topics that attain statistically significant correlations with ground truth data, despite using minimal human supervision and no historical data to train the model, in contrast to prior work. Additionally, these results demonstrate that a single general-purpose model can identify many different health topics in social media." ] }
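The county-level modeling approach shared by these papers and ours can be sketched as a regression from a social-media-derived feature to a health statistic. Below is a minimal single-feature illustration with invented numbers; the cited studies use many features and regularized models:

```python
# Hedged sketch: closed-form simple least-squares regression from one
# hypothetical county-level social-media feature to a health outcome.
# All numbers are invented for illustration only.
def ols(xs, ys):
    """Simple linear regression; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

fast_food_share = [0.10, 0.20, 0.30, 0.40]  # hypothetical per-county feature
obesity_rate = [0.22, 0.26, 0.31, 0.35]     # hypothetical outcome

slope, intercept = ols(fast_food_share, obesity_rate)
predicted = intercept + slope * 0.25  # prediction for an unseen county
```

In practice, such models are evaluated by how well the predicted statistics correlate with held-out ground-truth survey data.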
@cite_9 use smile recognition from images posted on social media to study and quantify overall societal happiness. @cite_43 study depression-related images on Instagram and "establish[ed] the importance of visual imagery as a vehicle for expressing aspects of depression". Though these papers do not explicitly try to model public health statistics, they illustrate the value of image recognition techniques in the health domain. In the following we review computer vision work in more depth.
{ "cite_N": [ "@cite_43", "@cite_9" ], "mid": [ "1998830562", "2041003138" ], "abstract": [ "Despite the well-established finding that people share negative emotions less openly than positive ones, a hashtag search for depression-related terms in Instagram yields millions of images. In this study, we examined depression-related images on Instagram along with their accompanying captions. We want to better understand the role of photo sharing in the lives of people who suffer from depression or who frame their experience as such; specifically, whether this practice engages support networks and how social computing systems can be designed to support such interactions. To lay the groundwork for further investigation, we report here on content analysis of depression-related posts.", "The increasing adoption of social media provides unprecedented opportunities to gain insight into human nature at vastly broader scales. Regarding the study of population-wide sentiment, prior research commonly focuses on text-based analyses and ignores a treasure trove of sentiment-laden content: images. In this paper, we make methodological and computational contributions by introducing the Smile Index as a formalized measure of societal happiness. Detecting smiles in 9 million geo-located tweets over 16 months, we validate our Smile Index against both text-based techniques and self-reported happiness. We further make observational contributions by applying our metric to explore temporal trends in sentiment, relate public mood to societal events, and predict economic indicators. Reflecting upon the innate, language-independent aspects of facial expressions, we recommend future improvements and applications to enable robust, global-level analyses. We conclude with implications for researchers studying and facilitating the expression of collective emotion through socio-technical systems." ] }
1702.06318
2950859579
Food is an integral part of our life and what and how much we eat crucially affects our health. Our food choices largely depend on how we perceive certain characteristics of food, such as whether it is healthy, delicious or if it qualifies as a salad. But these perceptions differ from person to person and one person's "single lettuce leaf" might be another person's "side salad". Studying how food is perceived in relation to what it actually is typically involves a laboratory setup. Here we propose to use recent advances in image recognition to tackle this problem. Concretely, we use data for 1.9 million images from Instagram from the US to look at systematic differences in how a machine would objectively label an image compared to how a human subjectively does. We show that this difference, which we call the "perception gap", relates to a number of health outcomes observed at the county level. To the best of our knowledge, this is the first time that image recognition is being used to study the "misalignment" of how people describe food images vs. what they actually depict.
Although images and other rich multimedia form a major chunk of content being shared in social media, almost all the methods above rely on textual content. Automatic image annotation has greatly improved over the last couple of years, owing to recent developments in deep learning @cite_0 @cite_10 @cite_36 . Robust object recognition @cite_45 @cite_25 and image captioning @cite_14 have become possible because of these new developments. For example, @cite_14 use deep learning to produce descriptions of images, which compete with (and sometimes beat) human-generated labels. A few studies already make use of these advances to identify @cite_16 @cite_18 @cite_47 @cite_35 and study @cite_42 food consumption from pictures. For instance, on the Food-101 dataset @cite_1 , one of the major benchmarks on food recognition, the classification accuracy improved from @math ( @math 101K images) to @math ( @math 3.7M images, @math 1,170 categories).
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_36", "@cite_42", "@cite_1", "@cite_0", "@cite_45", "@cite_47", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "2951457461", "2206370378", "2951805548", "2949650786", "2168154353", "12634471", "", "1563686443", "2296448531", "2087006489", "1686810756", "1994002998" ], "abstract": [ "Worldwide, in 2014, more than 1.9 billion adults, 18 years and older, were overweight. Of these, over 600 million were obese. Accurately documenting dietary caloric intake is crucial to manage weight loss, but also presents challenges because most of the current methods for dietary assessment must rely on memory to recall foods eaten. The ultimate goal of our research is to develop computer-aided technical solutions to enhance and improve the accuracy of current measurements of dietary intake. Our proposed system in this paper aims to improve the accuracy of dietary assessment by analyzing the food images captured by mobile devices (e.g., smartphone). The key technique innovation in this paper is the deep learning-based food image recognition algorithms. Substantial research has demonstrated that digital imaging accurately estimates dietary intake in many environments and it has many advantages over other methods. However, how to derive the food information (e.g., food type and portion size) from food image effectively and efficiently remains a challenging and open research problem. We propose a new Convolutional Neural Network (CNN)-based food image recognition algorithm to address this problem. We applied our proposed approach to two real-world food image data sets (UEC-256 and Food-101) and achieved impressive results. To the best of our knowledge, these results outperformed all other reported work using these two data sets. Our experiments have demonstrated that the proposed approach is a promising solution for addressing the food image recognition problem. 
Our future work includes further improving the performance of the algorithms and integrating our system into a real-world mobile and cloud computing-based system to enhance the accuracy of current measurements of dietary intake.", "We present a system which can recognize the contents of your meal from a single image, and then predict its nutritional contents, such as calories. The simplest version assumes that the user is eating at a restaurant for which we know the menu. In this case, we can collect images offline to train a multi-label classifier. At run time, we apply the classifier (running on your phone) to predict which foods are present in your meal, and we lookup the corresponding nutritional facts. We apply this method to a new dataset of images from 23 different restaurants, using a CNN-based classifier, significantly outperforming previous work. The more challenging setting works outside of restaurants. In this case, we need to estimate the size of the foods, as well as their labels. This requires solving segmentation and depth volume estimation from a single image. We present CNN-based approaches to these problems, with promising preliminary results.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. 
We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Estimating the nutritional value of food based on image recognition is important to health support services employing mobile devices. The estimation accuracy can be improved by recognizing regions of food objects and ingredients contained in those regions. In this paper, we propose a method that estimates nutritional information based on segmentation and labeling of food regions of an image by adopting a semantic segmentation method, in which we consider recipes as corresponding sets of food images and ingredient labels. 
Any food object or ingredient in a test food image can be annotated as long as the ingredient is contained in a training food image, even if the menu containing the food image appears for the first time. Experimental results show that better estimation is achieved through regression analysis using ingredient labels associated with the segmented regions than when using the local feature of pixels as the predictor variable.", "In this paper we address the problem of automatically recognizing pictured dishes. To this end, we introduce a novel method to mine discriminative parts using Random Forests (rf), which allows us to mine for parts simultaneously for all classes and to share knowledge among them. To improve efficiency of mining and classification, we only consider patches that are aligned with image superpixels, which we call components. To measure the performance of our rf component mining for food recognition, we introduce a novel and challenging dataset of 101 food categories, with 101’000 images. With an average accuracy of 50.76 , our model outperforms alternative classification methods except for cnn, including svm classification on Improved Fisher Vectors and existing discriminative part-mining algorithms by 11.88 and 8.13 , respectively. On the challenging mit-Indoor dataset, our method compares nicely to other s-o-a component-based classification methods.", "", "We present a state-of-the-art image recognition system, Deep Image, developed using end-to-end deep learning. The key components are a custom-built supercomputer dedicated to deep learning, a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images. Our method achieves excellent results on multiple challenging computer vision benchmarks.", "This paper deals with automatic systems for image recipe recognition. 
For this purpose, we compare and evaluate leading vision-based and text-based technologies on a new very large multimodal dataset (UPMC Food-101) containing about 100,000 recipes for a total of 101 food categories. Each item in this dataset is represented by one image plus textual information. We present deep experiments of recipe recognition on our dataset using visual, textual information and fusion. Additionally, we present experiments with text-based embedding technology to represent any food word in a semantical continuous space. We also compare our dataset features with a twin dataset provided by ETHZ university: we revisit their data collection protocols and carry out transfer learning schemes to highlight similarities and differences between both datasets. Finally, we propose a real application for daily users to identify recipes. This application is a web search engine that allows any mobile device to send a query image and retrieve the most relevant recipes in our dataset.", "We propose a mobile food recognition system, FoodCam, the purposes of which are estimating calorie and nutrition of foods and recording a user's eating habits. In this paper, we propose image recognition methods which are suitable for mobile devices. The proposed method enables real-time food image recognition on a consumer smartphone. This characteristic is completely different from the existing systems which require to send images to an image recognition server. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrubCut, extract image features and finally classify it into one of the one hundred food categories with a linear SVM. 
As image features, we adopt two kinds of features: one is the combination of the standard bag-of-features and color histograms with χ2 kernel feature maps, and the other is a HOG patch descriptor and a color patch descriptor with the state-of-the-art Fisher Vector representation. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, and it shows the estimated direction in an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly and continuously. We implemented this system as a standalone mobile application for Android smartphones so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved the 79.2% classification rate for the top 5 category candidates for a 100-category food dataset with the ground-truth bounding boxes when we used HOG and color patches with the Fisher Vector coding as image features. In addition, we obtained positive evaluation by a user study compared to the food recording system without object recognition.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results.
We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available." ] }
1702.06559
2590833223
Recent advances in one-shot learning have produced models that can learn from a handful of labeled examples, for passive classification and regression tasks. This paper combines reinforcement learning with one-shot learning, allowing the model to decide, during classification, which examples are worth labeling. We introduce a classification task in which a stream of images are presented and, on each time step, a decision must be made to either predict a label or pay to receive the correct label. We present a recurrent neural network based action-value function, and demonstrate its ability to learn how and when to request labels. Through the choice of reward function, the model can achieve a higher prediction accuracy than a similar model on a purely supervised task, or trade prediction accuracy for fewer label requests.
Active learning deals with the problem of choosing an example, or examples, to be labeled from a set of unlabeled examples @cite_7 . We consider the setting of single pass active learning, in which a decision must be made on examples as they are pulled from a stream. Generally, methods for doing so have relied on heuristics such as similarity metrics between the current example and examples seen so far @cite_8 , or uncertainty measures in the label prediction @cite_8 @cite_5 . The premise of active learning is that there are costs associated with labeling and with making an incorrect prediction. Reinforcement learning allows for the explicit specification of those costs, and directly finds a labelling policy to optimize those costs. Thus, we believe that reinforcement learning is a natural fit for active learning. We use a deep recurrent neural network function approximator for representing the action-value function. While there have been numerous applications of deep neural networks to the related problem of semi-supervised learning @cite_11 @cite_10 , the application of deep learning to active learning problems has been limited @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2950527759", "2903158431", "2108740451", "", "2952229419", "2949416428" ], "abstract": [ "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.", "", "Unlabeled samples can be intelligently selected for labeling to minimize classification error. In many real-world applications, a large number of unlabeled samples arrive in a streaming manner, making it impossible to maintain all the data in a candidate pool. In this work, we focus on binary classification problems and study selective labeling in data streams where a decision is required on each sample sequentially. We consider the unbiasedness property in the sampling process, and design optimal instrumental distributions to minimize the variance in the stochastic process. Meanwhile, Bayesian linear classifiers with weighted maximum likelihood are optimized online to estimate parameters. In empirical evaluation, we collect a data stream of user-generated comments on a commercial news portal in 30 consecutive days, and carry out offline evaluation to compare various sampling strategies, including unbiased active learning, biased variants, and random sampling. Experimental results verify the usefulness of online active learning, especially in the non-stationary situation with concept drift.", "", "We combine supervised learning with unsupervised learning in deep neural networks. 
The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels.", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning." ] }
1702.06559
2590833223
Recent advances in one-shot learning have produced models that can learn from a handful of labeled examples, for passive classification and regression tasks. This paper combines reinforcement learning with one-shot learning, allowing the model to decide, during classification, which examples are worth labeling. We introduce a classification task in which a stream of images are presented and, on each time step, a decision must be made to either predict a label or pay to receive the correct label. We present a recurrent neural network based action-value function, and demonstrate its ability to learn how and when to request labels. Through the choice of reward function, the model can achieve a higher prediction accuracy than a similar model on a purely supervised task, or trade prediction accuracy for fewer label requests.
Our model is very closely related to recent approaches to meta-learning and one-shot learning. Meta-learning has been successfully applied to supervised learning tasks @cite_12 @cite_2 , with key insights being training on short episodes with few class examples and randomizing the labels and classes in each episode. We propose to combine such approaches for one-shot learning with reinforcement learning, to learn an agent that can make labeling decisions online. The task and model we propose are most similar to those of @cite_12 , in which the model must predict the label for a new image at each time step, with the true label received, as input, one time step later. We extend their task to the active learning domain by withholding the true label unless the model requests it, and training the model with reinforcement learning, rewarding accurate predictions and penalizing incorrect predictions and label requests. Thus, the model must learn to consider its own uncertainty before making a prediction or requesting the true label.
{ "cite_N": [ "@cite_12", "@cite_2" ], "mid": [ "2399033357", "2432717477" ], "abstract": [ "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.", "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. 
We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank." ] }
1702.06086
2593927017
Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by a mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements to the state-of-the-art LDL methods.
Random forests, or randomized decision trees @cite_3 @cite_5 @cite_17 @cite_20 , are a popular ensemble predictive model suitable for many machine learning tasks. In the past, learning of a decision tree was based on heuristics such as a greedy algorithm where locally-optimal hard decisions are made at each split node @cite_5 , and thus cannot be integrated into a deep learning framework, i.e., combined with representation learning in an end-to-end manner.
{ "cite_N": [ "@cite_5", "@cite_20", "@cite_3", "@cite_17" ], "mid": [ "2120240539", "137456267", "1930624869", "" ], "abstract": [ "We explore a new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity. Each query corresponds to a spatial arrangement of several local topographic codes (or tags), which are in themselves too primitive and common to be informative about shape. All the discriminating power derives from relative angles and distances among the tags. The important attributes of the queries are a natural partial ordering corresponding to increasing structure and complexity; semi-invariance, meaning that most shapes of a given class will answer the same way to two queries that are successive in the ordering; and stability, since the queries are not based on distinguished points and substructures. No classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative. Our approach is to select informative features and build tree classifiers at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. Due to the number and nature of the queries, standard decision tree construction based on a fixed-length feature vector is not feasible. Instead we entertain only a small random sample of queries at each node, constrain their complexity to increase with tree depth, and grow multiple trees. The terminal nodes are labeled by estimates of the corresponding posterior distribution over shape classes. An image is classified by sending it down every tree and aggregating the resulting distributions. The method is applied to classifying handwritten digits and synthetic linear and nonlinear deformations of three hundred LaTeX symbols.
State-of-the-art error rates are achieved on the National Institute of Standards and Technology database of digits. The principal goal of the experiments on LaTeX symbols is to analyze invariance, generalization error and related issues, and a comparison with artificial neural networks methods is presented in this context.", "This practical and easy-to-follow text explores the theoretical underpinnings of decision forests, organizing the vast existing literature on the field within a new, general-purpose forest model. Topics and features: with a foreword by Prof. Y. Amit and Prof. D. Geman, recounting their participation in the development of decision forests; introduces a flexible decision forest model, capable of addressing a large and diverse set of image and video analysis tasks; investigates both the theoretical foundations and the practical implementation of decision forests; discusses the use of decision forests for such tasks as classification, regression, density estimation, manifold learning, active learning and semi-supervised classification; includes exercises and experiments throughout the text, with solutions, slides, demo videos and other supplementary material provided at an associated website; provides a free, user-friendly software library, enabling the reader to experiment with forests in a hands-on manner.", "Decision trees are attractive classifiers due to their high execution speed. But trees derived with traditional methods often cannot be grown to arbitrary complexity for possible loss of generalization accuracy on unseen data. The limitation on complexity usually means suboptimal accuracy on training data. Following the principles of stochastic modeling, we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data. The essence of the method is to build multiple trees in randomly selected subspaces of the feature space.
Trees in, different subspaces generalize their classification in complementary ways, and their combined classification can be monotonically improved. The validity of the method is demonstrated through experiments on the recognition of handwritten digits.", "" ] }
1702.06086
2593927017
Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by a mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements to the state-of-the-art LDL methods.
The newly proposed deep neural decision forests (dNDFs) @cite_27 overcome this problem by introducing a soft differentiable decision function at the split nodes and a global loss function defined on a tree. This ensures that the split node parameters can be learned by back-propagation and that leaf node predictions can be updated by a discrete iterative function.
{ "cite_N": [ "@cite_27" ], "mid": [ "2220384803" ], "abstract": [ "We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84 6.38 on ImageNet validation data when integrating our forests in a single-crop, single seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67 error obtained by the best GoogLeNet architecture (7 models, 144 crops)." ] }
1702.06086
2593927017
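The soft split function that makes dNDF-style trees differentiable, as described in the related-work paragraph above, can be sketched as follows. The depth-2 tree layout and all parameter names are illustrative assumptions, not the papers' notation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaf_routing(x, W, b):
    """Soft routing through a full binary tree of depth 2.

    Split node n routes a sample left with probability
    d_n(x) = sigmoid(w_n . x + b_n) and right with 1 - d_n(x);
    a leaf's routing probability is the product of the decisions
    along its root-to-leaf path. Node layout (hypothetical):
    node 0 = root, node 1 = its left child, node 2 = its right child.
    """
    d = sigmoid(W @ x + b)                       # (3,) soft split decisions
    return np.array([
        d[0] * d[1],                             # leaf: left, left
        d[0] * (1.0 - d[1]),                     # leaf: left, right
        (1.0 - d[0]) * d[2],                     # leaf: right, left
        (1.0 - d[0]) * (1.0 - d[2]),             # leaf: right, right
    ])

rng = np.random.default_rng(0)
mu = leaf_routing(rng.normal(size=5), rng.normal(size=(3, 5)), np.zeros(3))
# the routing probabilities form a proper distribution over the 4 leaves
assert abs(mu.sum() - 1.0) < 1e-9 and (mu >= 0).all()
```

Because every decision is a sigmoid rather than a hard threshold, the routing probabilities are differentiable in `W` and `b`, which is what allows split parameters to be learned by back-propagation.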
Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by a mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements to the state-of-the-art LDL methods.
To sum up, w.r.t. dNDFs @cite_27 , the contributions of LDLFs are: first, we extend dNDFs from classification @cite_27 to distribution learning by proposing a distribution-based loss for the forests and deriving the gradient for learning the split nodes w.r.t. this loss; second, we derive the update function for leaf nodes by variational bounding (having observed that the update function in @cite_27 is a special case of variational bounding); last but not least, we propose the above three strategies for learning an ensemble of multiple trees, which differ from @cite_27 but are shown to be effective.
{ "cite_N": [ "@cite_27" ], "mid": [ "2220384803" ], "abstract": [ "We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests in a single-crop, single/seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops)." ] }
1702.06086
2593927017
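The multiplicative leaf update mentioned in the paragraph above can be sketched as a fixed-point iteration. This is a minimal numpy illustration assuming a KL-style distribution loss and fixed routing probabilities; the function and variable names are hypothetical:

```python
import numpy as np

def update_leaf_distributions(q, mu, t, n_iter=50, eps=1e-12):
    """Multiplicative fixed-point update for leaf label distributions.

    q  : (L, C) current label distribution held at each of L leaves
    mu : (N, L) routing probabilities of N samples to the leaves
    t  : (N, C) ground-truth label distributions

    Each iteration reweights q by how much each leaf is responsible for
    explaining the targets, then renormalizes -- the same multiplicative
    shape as the dNDF/LDLF leaf update derived by variational bounding,
    which does not increase the KL-based loss.
    """
    for _ in range(n_iter):
        pred = mu @ q                              # (N, C) current predictions
        q = q * (mu.T @ (t / (pred + eps)))        # responsibility weights
        q = q / q.sum(axis=1, keepdims=True)       # project back onto the simplex
    return q

rng = np.random.default_rng(1)
L, C, N = 4, 3, 32
q0 = np.full((L, C), 1.0 / C)                      # start from uniform leaves
mu = rng.dirichlet(np.ones(L), size=N)
t = rng.dirichlet(np.ones(C), size=N)
q = update_leaf_distributions(q0, mu, t)
assert np.allclose(q.sum(axis=1), 1.0)             # each leaf stays a distribution
```

Note the update needs no step size: normalization keeps every leaf on the probability simplex, which is one practical appeal of deriving it by variational bounding rather than by gradient descent.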
Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by a mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements to the state-of-the-art LDL methods.
A number of specialized algorithms have been proposed to address the LDL task, and have shown their effectiveness in many computer vision applications, such as facial age estimation @cite_10 @cite_22 @cite_14 , expression recognition @cite_29 and hand orientation estimation @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_29", "@cite_10" ], "mid": [ "2022566595", "2572206410", "2066454034", "2014713986", "" ], "abstract": [ "Accurate ground truth pose is essential to the training of most existing head pose estimation algorithms. However, in many cases, the \"ground truth\" pose is obtained in rather subjective ways, such as asking the human subjects to stare at different markers on the wall. In such case, it is better to use soft labels rather than explicit hard labels. Therefore, this paper proposes to associate a multivariate label distribution (MLD) to each image. An MLD covers a neighborhood around the original pose. Labeling the images with MLD can not only alleviate the problem of inaccurate pose labels, but also boost the training examples associated to each pose without actually increasing the total amount of training examples. Two algorithms are proposed to learn from the MLD by minimizing the weighted Jeffrey's divergence between the predicted MLD and the ground truth MLD. Experimental results show that the MLD-based methods perform significantly better than the compared state-of-the-art head pose estimation algorithms.", "By observing that the faces at close ages are similar, some Label Distribution Learning (LDL) methods have been proposed to solve age estimation tasks that they treat age distributions as the training targets. However, these existent LDL methods are limited because they can hardly extract enough useful information from complex image features. In this paper, Sparsity Conditional Energy Label Distribution Learning (SCE-LDL) is proposed to solve this problem. In the proposed SCE-LDL, age distributions are used as the training targets and energy function is utilized to define the age distribution. By assigning a suitable energy function, SCE-LDL can learn distributed representations, which provides the model with strong expressiveness for capturing enough of the complexity of interest from image features. The sparsity constraints are also incorporated to ameliorate the model. Experiment results in two age datasets show remarkable advantages of the proposed SCE-LDL model over the previous proposed age estimation methods.", "One of the main difficulties in facial age estimation is that the learning algorithms cannot expect sufficient and complete training data. Fortunately, the faces at close ages look quite similar since aging is a slow and smooth process. Inspired by this observation, instead of considering each face image as an instance with one label (age), this paper regards each face image as an instance associated with a label distribution. The label distribution covers a certain number of class labels, representing the degree that each label describes the instance. Through this way, one face image can contribute to not only the learning of its chronological age, but also the learning of its adjacent ages. Two algorithms, named IIS-LLD and CPNN, are proposed to learn from such label distributions. Experimental results on two aging face databases show remarkable advantages of the proposed label distribution learning algorithms over the compared single-label learning algorithms, either specially designed for age estimation or for general purpose.", "Most existing facial expression recognition methods assume the availability of a single emotion for each expression in the training set. However, in practical applications, an expression rarely expresses pure emotion, but often a mixture of different emotions. To address this problem, this paper deals with a more common case where multiple emotions are associated to each expression. The key idea is to learn the specific description degrees of all basic emotions for each expression and the mapping from the expression images to the emotion distributions by the proposed emotion distribution learning (EDL) method. The databases used in the experiments are the s-JAFFE database and the s-BU database as they are the databases with explicit scores for each emotion on each expression image. Experimental results show that EDL can effectively deal with the emotion distribution recognition problem and perform remarkably better than the state-of-the-art multi-label learning methods.", "" ] }
1702.06086
2593927017
Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have several advantages: 1) Decision trees have the potential to model any general form of label distributions by a mixture of leaf node predictions. 2) The learning of differentiable decision trees can be combined with representation learning. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements to the state-of-the-art LDL methods.
Another way to address the LDL task is to extend existing learning algorithms to deal with label distributions. Geng and Hou @cite_7 proposed LDSVR, an LDL method that extends the support vector regressor: it fits a sigmoid function to each component of the distribution simultaneously by a support vector machine. Xing et al. @cite_26 then extended boosting to address the LDL task by additive weighted regressors. They showed that using the vector tree model as the weak regressor leads to better performance, and named this method AOSO-LDLogitBoost. As the learning of this tree model is based on locally-optimal hard data partition functions at each split node, AOSO-LDLogitBoost cannot be combined with representation learning. Extending current deep learning algorithms to address the LDL task is an interesting topic, but the existing method of this kind, DLDL @cite_8 , still focuses on maximum-entropy-model-based LDL.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_8" ], "mid": [ "2442264559", "2269758193", "2553156677" ], "abstract": [ "Label Distribution Learning (LDL) is a general learning framework which includes both single label and multi-label learning as its special cases. One of the main assumptions made in traditional LDL algorithms is the derivation of the parametric model as the maximum entropy model. While it is a reasonable assumption without additional information, there is no particular evidence supporting it in the problem of LDL. Alternatively, using a general LDL model family to approximate this parametric model can avoid the potential influence of the specific model. In order to learn this general model family, this paper uses a method called Logistic Boosting Regression (LogitBoost) which can be seen as an additive weighted function regression from the statistical viewpoint. For each step, we can fit individual weighted regression function (base learner) to realize the optimization gradually. The base learners are chosen as weighted regression tree and vector tree, which constitute two algorithms named LDLogitBoost and AOSO-LDLogitBoost in this paper. Experiments on facial expression recognition, crowd opinion prediction on movies and apparent age estimation show that LDLogitBoost and AOSO-LDLogitBoost can achieve better performance than traditional LDL algorithms as well as other LogitBoost algorithms.", "This paper studies an interesting problem: is it possible to predict the crowd opinion about a movie before the movie is actually released? The crowd opinion is here expressed by the distribution of ratings given by a sufficient amount of people. Consequently, the pre-release crowd opinion prediction can be regarded as a Label Distribution Learning (LDL) problem. In order to solve this problem, a Label Distribution Support Vector Regressor (LDSVR) is proposed in this paper. The basic idea of LDSVR is to fit a sigmoid function to each component of the label distribution simultaneously by a multi-output support vector machine. Experimental results show that LDSVR can accurately predict people's rating distribution about a movie just based on the pre-release metadata of the movie.", "Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which help prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks." ] }
1702.05891
2592628593
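The DLDL idea referenced above (convert a single ground-truth label into a discrete label distribution, then minimize a KL divergence against the prediction) can be illustrated in a few lines. The Gaussian discretization, bandwidth, and age range below are illustrative assumptions, not values from the cited paper:

```python
import numpy as np

def discretized_gaussian(labels, y, sigma=2.0):
    """Turn a single ground-truth label y (e.g. a chronological age) into a
    discrete label distribution over `labels`, so that neighboring labels
    also carry supervision signal (DLDL-style soft target)."""
    d = np.exp(-0.5 * ((labels - y) / sigma) ** 2)
    return d / d.sum()

def kl_divergence(target, pred, eps=1e-12):
    """KL(target || pred), the quantity minimized between ground-truth and
    predicted label distributions in DLDL-style training."""
    return float(np.sum(target * np.log((target + eps) / (pred + eps))))

ages = np.arange(0, 101)
target = discretized_gaussian(ages, 30)            # soft target around age 30
uniform = np.full_like(target, 1.0 / len(ages))    # an uninformed prediction
assert kl_divergence(target, target) < 1e-9        # zero at the target itself
assert kl_divergence(target, uniform) > 0.0        # positive elsewhere
```

The soft target is what lets one training face contribute to the learning of its adjacent ages, which is the core motivation shared by the LDL methods surveyed above.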
Multi-label image classification is a fundamental but challenging task in computer vision. Great progress has been achieved by exploiting semantic relations between labels in recent years. However, conventional approaches are unable to model the underlying spatial relations between labels in multi-label images, because spatial annotations of the labels are generally not provided. In this paper, we propose a unified deep neural network that exploits both semantic and spatial relations between labels with only image-level supervisions. Given a multi-label image, our proposed Spatial Regularization Network (SRN) generates attention maps for all labels and captures the underlying relations between them via learnable convolutions. By aggregating the regularized classification results with original results by a ResNet-101 network, the classification performance can be consistently improved. The whole deep neural network is trained end-to-end with only image-level annotations, thus requires no additional efforts on image annotations. Extensive evaluations on 3 public datasets with different types of labels show that our approach significantly outperforms state-of-the-arts and has strong generalization capability. Analysis of the learned SRN model demonstrates that it can effectively capture both semantic and spatial relations of labels for improving classification performance.
Approaches that learn to capture label relations have also been proposed. Read et al. @cite_26 extended the binary relevance method by training a chain of binary classifiers, where each classifier makes predictions based on both image features and previously predicted labels. A more common way of modeling label relations is to use probabilistic graphical models @cite_38 . There were also methods for determining the structures of the label relation graphs. Xue et al. @cite_2 directly thresholded the label correlation matrix to obtain the label structure. Li et al. @cite_41 used a maximum spanning tree over the mutual information matrix of labels to create the graph. Li et al. @cite_25 proposed to learn image-dependent conditional label structures based on the Graphical Lasso framework @cite_33 . Recently, deep neural networks have also been explored for learning label relations. Hu et al. @cite_4 proposed a structured inference neural network that transfers predictions across multiple concept layers. Wang et al. @cite_22 treated multi-label classification as a sequential prediction problem and modeled label dependencies with Recurrent Neural Networks (RNNs). Although classification accuracy has been greatly improved by learning semantic relations of labels, the above-mentioned approaches fail to explore the underlying spatial relations between labels.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_4", "@cite_33", "@cite_22", "@cite_41", "@cite_2", "@cite_25" ], "mid": [ "1511986666", "1999954155", "2256558689", "", "1567302070", "", "2133128938", "" ], "abstract": [ "Most tasks require a person or an automated system to reason -- to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.", "The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has often been overlooked in the literature due to the perceived inadequacy of not directly modelling label correlations. Most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, and that high predictive performance can be obtained without impeding scalability to large datasets. We exemplify this with a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity. We extend this approach further in an ensemble framework. An extensive empirical evaluation covers a broad range of multi-label datasets with a variety of evaluation metrics. The results illustrate the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.", "Images of scenes have various objects as well as abundant attributes, and diverse levels of visual categorization are possible. A natural image could be assigned with fine-grained labels that describe major components, coarse-grained labels that depict high level abstraction or a set of labels that reveal attributes. Such categorization at different concept layers can be modeled with label graphs encoding label information. In this paper, we exploit this rich information with a state-of-art deep learning framework, and propose a generic structured model that leverages diverse label relations to improve image classification performance. Our approach employs a novel stacked label prediction neural network, capturing both inter-level and intra-level label semantics. We evaluate our method on benchmark image datasets, and empirical results illustrate the efficacy of our model.", "", "Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on Pascal VOC 2007 and VOC 2012 multi-label image datasets well demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-arts. In particular, the mAP reaches 90.5% by HCP only and 93.2% after the fusion with our complementary result in [12] based on hand-crafted features on the VOC 2012 dataset.", "", "In this paper, each image is viewed as a bag of local regions, as well as it is investigated globally. A novel method is developed for achieving multi-label multi-instance image annotation, where image-level (bag-level) labels and region-level (instance-level) labels are both obtained. The associations between semantic concepts and visual features are mined both at the image level and at the region level. Inter-label correlations are captured by a co-occurence matrix of concept pairs. The cross-level label coherence encodes the consistency between the labels at the image level and the labels at the region level. The associations between visual features and semantic concepts, the correlations among the multiple labels, and the cross-level label coherence are sufficiently leveraged to improve annotation performance. Structural max-margin technique is used to formulate the proposed model and multiple interrelated classifiers are learned jointly. To leverage the available image-level labeled samples for the model training, the region-level label identification on the training set is firstly accomplished by building the correspondences between the multiple bag-level labels and the image regions. JEC distance based kernels are employed to measure the similarities both between images and between regions. Experimental results on real image datasets MSRC and Corel demonstrate the effectiveness of our method.", "" ] }
1702.05891
2592628593
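The classifier-chain scheme attributed to Read et al. in the paragraph above can be sketched with a toy numpy-only logistic regression. All helper names and the training setup here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=500):
    """Tiny logistic regression trained by gradient descent (illustrative)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_logistic(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(float)

def fit_chain(X, Y):
    """Train the k-th binary classifier on the features augmented with the
    true values of labels 0..k-1, so each classifier can exploit label
    correlations (teacher forcing during training)."""
    models, aug = [], X
    for k in range(Y.shape[1]):
        models.append(fit_logistic(aug, Y[:, k]))
        aug = np.hstack([aug, Y[:, k:k + 1]])
    return models

def predict_chain(models, X):
    """At test time each classifier consumes the chain's own earlier predictions."""
    aug, preds = X, []
    for w in models:
        p = predict_logistic(w, aug)
        preds.append(p)
        aug = np.hstack([aug, p[:, None]])
    return np.stack(preds, axis=1)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
Y = np.stack([(X[:, 0] > 0), (X[:, 0] + X[:, 1] > 0)], axis=1).astype(float)
models = fit_chain(X, Y)
assert (predict_chain(models, X) == Y).mean() > 0.9
```

The key design point is the augmentation step: label correlations are modeled simply by feeding earlier labels forward, at the cost of error propagation along the chain at test time.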
Multi-label image classification is a fundamental but challenging task in computer vision. Great progress has been achieved by exploiting semantic relations between labels in recent years. However, conventional approaches are unable to model the underlying spatial relations between labels in multi-label images, because spatial annotations of the labels are generally not provided. In this paper, we propose a unified deep neural network that exploits both semantic and spatial relations between labels with only image-level supervisions. Given a multi-label image, our proposed Spatial Regularization Network (SRN) generates attention maps for all labels and captures the underlying relations between them via learnable convolutions. By aggregating the regularized classification results with original results by a ResNet-101 network, the classification performance can be consistently improved. The whole deep neural network is trained end-to-end with only image-level annotations, thus requires no additional efforts on image annotations. Extensive evaluations on 3 public datasets with different types of labels show that our approach significantly outperforms state-of-the-arts and has strong generalization capability. Analysis of the learned SRN model demonstrates that it can effectively capture both semantic and spatial relations of labels for improving classification performance.
The attention mechanism has proven beneficial in many vision tasks, such as visual tracking @cite_5 , object recognition @cite_15 @cite_37 , image captioning @cite_40 , image question answering @cite_1 , and segmentation @cite_8 . The spatial attention mechanism adaptively focuses on related regions of the image when the deep networks are trained with spatially-related labels. In this paper, we utilize the attention mechanism to improve multi-label image classification: it captures the underlying spatial relations of labels and provides spatial regularization for the final classification results.
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_1", "@cite_40", "@cite_5", "@cite_15" ], "mid": [ "1484210532", "2257483379", "2963954913", "2950178297", "2183231851", "2951527505" ], "abstract": [ "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN). Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in images using an attention model, and subsequently performs binary segmentation for each highlighted region using decoder. Combining attention model, the decoder trained with segmentation annotations in different categories boosts accuracy of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-the-art weakly-supervised techniques in PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset.", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "We propose a novel attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of the human perceptual system, the model consists of two interacting pathways: ventral and dorsal. The ventral pathway models object appearance and classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of retinal images, with decaying resolution toward the periphery of the gaze. The dorsal pathway models the location, orientation, scale and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the dorsal pathway, we encounter an attentional mechanism that learns to control gazes so as to minimize tracking uncertainty. The approach is modular (with each module easily replaceable with more sophisticated algorithms), straightforward to implement, practically efficient, and works well in simple video sequences.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so." ] }
1702.05809
2590961078
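The per-label spatial attention discussed in the paragraph above can be sketched as attention-weighted pooling of a convolutional feature map. This is a simplified numpy illustration of the general mechanism, not the actual SRN architecture; all names and shapes are assumptions:

```python
import numpy as np

def softmax2d(a):
    """Normalize a 2-D attention map into a spatial distribution."""
    e = np.exp(a - a.max())
    return e / e.sum()

def attention_pool(features, att_logits):
    """Per-label spatial attention pooling.

    features   : (H, W, D) convolutional feature map
    att_logits : (H, W, C) one unnormalized attention map per label

    Each label c normalizes its own map into a spatial distribution and
    pools the feature map with it, so different labels can attend to
    different image regions. Returns a (C, D) descriptor per label.
    """
    C = att_logits.shape[2]
    return np.stack([
        np.tensordot(softmax2d(att_logits[:, :, c]), features,
                     axes=([0, 1], [0, 1]))
        for c in range(C)
    ])

H, W, D, C = 4, 4, 8, 3
feats = np.random.default_rng(3).normal(size=(H, W, D))
flat_att = np.zeros((H, W, C))                  # uniform (uninformative) attention
pooled = attention_pool(feats, flat_att)
# with uniform attention this reduces to global average pooling for every label
assert np.allclose(pooled, feats.mean(axis=(0, 1)))
```

Learned, non-uniform attention maps are what let each label's pooled descriptor concentrate on that label's image regions, which is the spatial information the semantic-relation methods above do not exploit.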
Insider trading is one of the numerous white collar crimes that can contribute to the instability of the economy. Traditionally, the detection of illegal insider trades has been a human-driven process. In this paper, we collect the insider tradings made available by the US Securities and Exchange Commissions (SEC) through the EDGAR system, with the aim of initiating an automated large-scale and data-driven approach to the problem of identifying illegal insider tradings. The goal of the study is the identification of interesting patterns, which can be indicators of potential anomalies. We use the collected data to construct networks that capture the relationship between trading behaviors of insiders. We explore different ways of building networks from insider trading data, and argue for a need of a structure that is capable of capturing higher order relationships among traders. Our results suggest the discovery of interesting patterns.
A variety of methods have been introduced for graph-based anomaly detection, although very little has been done in the area of illegal insider trading. The choice of approach depends on the nature of the graph, e.g. attributed vs. non-attributed, or static vs. dynamic. A complete overview of such methods is beyond the scope of this paper. A survey of the various approaches is found in @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "2089554624" ], "abstract": [ "Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, with graph data becoming ubiquitous, techniques for structured graph data have been of focus recently. As objects in graphs have long-range correlations, a suite of novel technology has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. What is more, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field." ] }
1702.06228
2949228953
Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
Graphical design has been studied extensively in the computer graphics community. This involves several related, yet different topics. Geigel @cite_28 made use of a genetic algorithm @cite_2 @cite_18 for , which addresses the placement of each photo in an album. Yu @cite_10 automatically synthesized furniture object arrangements using a simulated annealing algorithm. In contrast, Merrell @cite_5 applied some simple design guidelines to solve a similar problem. Other graphical design problems such as @cite_12 , @cite_15 , and @cite_24 have also been studied. These works often present an optimization framework along with some design guidelines to synthesize and evaluate plausible layouts.
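The simulated-annealing style of layout search mentioned above can be sketched as follows. The cost function, move set, and cooling schedule here are illustrative assumptions for a minimal sketch, not the ones used in the cited work:

```python
import math
import random

def anneal_layout(initial, cost, propose, t0=1.0, t_min=1e-3, alpha=0.95, steps=100):
    """Minimal simulated annealing for layout search.

    `initial` is a starting layout, `cost` scores a layout (lower is
    better), and `propose` returns a randomly perturbed neighbor.
    """
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = propose(state)
            cc = cost(cand)
            # Metropolis criterion: always accept improvements, and
            # accept worse moves with probability exp(-delta / t).
            if cc < c or random.random() < math.exp((c - cc) / t):
                state, c = cand, cc
                if c < best_c:
                    best, best_c = state, c
        t *= alpha  # geometric cooling schedule
    return best, best_c

# Toy example: move one element toward a target x-position.
random.seed(0)
target = 42.0
best, _ = anneal_layout(
    initial=0.0,
    cost=lambda x: abs(x - target),
    propose=lambda x: x + random.uniform(-5, 5),
)
```

In the furniture-arrangement setting, `cost` would instead aggregate design-guideline terms (visibility, accessibility, alignment) and `propose` would perturb object positions and orientations.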
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_24", "@cite_2", "@cite_5", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "1639032689", "2149611572", "2102664288", "1659842140", "2149846167", "66087739", "2130634053", "2166252986" ], "abstract": [ "From the Publisher: This book brings together - in an informal and tutorial fashion - the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems in many fields. Major concepts are illustrated with running examples, and major algorithms are illustrated by Pascal computer programs. No prior knowledge of GAs or genetics is assumed, and only a minimum of computer programming and mathematics background is required.", "We describe a system that uses a genetic algorithm to interactively generate personalized album pages for visual content collections on the Internet. The system has three modules: preprocessing, page creation, and page layout. We focus on the details of the genetic algorithm used in the page-layout task.", "From the Publisher: This book is designed to describe fundamental algorithmic techniques for constructing drawings of graphs. Suitable as a book or reference manual, its chapters offer an accurate, accessible reflection of the rapidly expanding field of graph drawing.", "From the Publisher: Genetic algorithms are playing an increasingly important role in studies of complex adaptive systems, ranging from adaptive agents in economic theory to the use of machine learning techniques in the design of complex devices such as aircraft turbines and integrated circuits. Adaptation in Natural and Artificial Systems is the book that initiated this field of study, presenting the theoretical foundations and exploring applications. In its most familiar form, adaptation is a biological process, whereby organisms evolve by rearranging genetic material to survive in environments confronting them. 
In this now classic work, Holland presents a mathematical model that allows for the nonlinearity of such complex interactions. He demonstrates the model's universality by applying it to economics, physiological psychology, game theory, and artificial intelligence and then outlines the way in which this approach modifies the traditional views of mathematical genetics. Initially applying his concepts to simply defined artificial systems with limited numbers of parameters, Holland goes on to explore their use in the study of a wide range of complex, naturally occuring processes, concentrating on systems having multiple factors that interact in nonlinear ways. Along the way he accounts for major effects of coadaptation and coevolution: the emergence of building blocks, or schemata, that are recombined and passed on to succeeding generations to provide, innovations and improvements. John H. Holland is Professor of Psychology and Professor of Electrical Engineering and Computer Science at the University of Michigan. He is also Maxwell Professor at the Santa Fe Institute and isDirector of the University of Michigan Santa Fe Institute Advanced Research Program.", "We present an interactive furniture layout system that assists users by suggesting furniture arrangements that are based on interior design guidelines. Our system incorporates the layout guidelines as terms in a density function and generates layout suggestions by rapidly sampling the density function using a hardware-accelerated Monte Carlo sampler. Our results demonstrate that the suggestion generation functionality measurably increases the quality of furniture arrangements produced by participants with no prior training in interior design.", "Issues in timing driven layout, M. Marek-Sadowska binary formulations for placement and routing problems, M. Sriram, S.M. Kang a survey of parallel algorithms for placement, P. Banerjee near optimal fast solution to graph and hypergraph partitioning, F. Makedon, S. 
Tragoudas LP formulation of global routing and placement, T. Lengauer, M. Lugering circuit partitioning algorithms based on geometry model, T. Asano & Tokuyama on the Manhattan and knock-knee routing modes, D. Zhou, F.P. Preparata a note on the complexity of Stockmeyer's floorplan optimization technique, T.C. Wang, D.F. Wong the virtual height of a straight line embedding of a plane graph, T. Takahashi, Y. Kajitani routing around two rectangles to minimize the layout area, T. Gonzalez, S.L. Lee.", "We present a system that automatically synthesizes indoor scenes realistically populated by a variety of furniture objects. Given examples of sensibly furnished indoor scenes, our system extracts, in advance, hierarchical and spatial relationships for various furniture objects, encoding them into priors associated with ergonomic factors, such as visibility and accessibility, which are assembled into a cost function whose optimization yields realistic furniture arrangements. To deal with the prohibitively large search space, the cost function is optimized by simulated annealing using a Metropolis-Hastings state search step. We demonstrate that our system can synthesize multiple realistic furniture arrangements and, through a perceptual study, investigate whether there is a significant difference in the perceived functionality of the automatically synthesized results relative to furniture arrangements produced by human designers.", "Decision-theoretic optimization is becoming a popular tool in the user interface community, but creating accurate cost (or utility) functions has become a bottleneck --- in most cases the numerous parameters of these functions are chosen manually, which is a tedious and error-prone process. This paper describes ARNAULD, a general interactive tool for eliciting user preferences concerning concrete outcomes and using this feedback to automatically learn a factored cost function. 
We empirically evaluate our machine learning algorithm and two automatic query generation approaches and report on an informal user study." ] }
1702.06228
2949228953
Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
Due to the popularity of comics, many related research topics, such as @cite_31 , @cite_25 and @cite_13 , have drawn considerable attention in the computer graphics community. In particular, several techniques have been studied to facilitate layout generation. For example, Arai @cite_35 and Pang @cite_26 studied how to automatically extract each panel from e-comics and display e-comics on different devices. In order to convert conversational videos to comics, Jing @cite_27 made use of a rule-based optimization scheme for layout generation. Cao @cite_17 presented a generative probabilistic framework to arrange input artworks into a manga page, and then used optimization techniques to refine it. Furthermore, Cao @cite_33 took text balloons and picture subjects into consideration to generate manga layouts and guide the reader's attention. However, in our poster generation, one has to consider the composition of both texts and graphical elements within each panel, which has not been discussed previously.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_33", "@cite_27", "@cite_31", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "1991926808", "2070323006", "", "2188210106", "2063166622", "", "2013491007", "1973733686" ], "abstract": [ "A method for automatic e-comic scene frame extraction is proposed for displaying large scale of comic scenes onto relatively small size of display screen such as mobile devices. In line with the rapid development of mobile devices, reading comic in small screen mobile devices is also demanded and required. The challenge in providing e-comics for small screen is how to separate comic scene frames and display it in the right order to read. We propose Automatic E-Comic Frame Extraction (ACFE) for extraction of scene frames from a digital comic page automatically. ACFE is based on the blob extraction method using connected component labeling algorithm, together with a filter combination pre-processing and efficient method for detection of line between frames. Experimental results show that 91.483 percent of 634 pages in 5 digital comics are successfully extracted into scene frames by the proposed method.", "Automatically extracting frames panels from digital comic pages is crucial for techniques that facilitate comic reading on mobile devices with limited display areas. However, automatic panel extraction for manga, i.e., Japanese comics, can be especially challenging, largely because of its complex panel layout design mixed with various visual symbols throughout the page. In this paper, we propose a robust method for automatically extracting panels from digital manga pages. Our method first extracts the panel block by closing open panels and identifying a page background mask. 
It then performs a recursive binary splitting to partition the panel block into a set of sub-blocks, where an optimal splitting line at each recursive level is determined adaptively.", "", "We introduce in this paper a new approach that conveniently converts conversational videos into comics with manga-style layout. With our approach, the manga-style layout of a comic page is achieved in a content-driven manner, and the main components, including panels and word balloons, that constitute a visually pleasing comic page are intelligently organized . Our approach extracts key frames on speakers by using a speaker detection technique such that word balloons can be placed near the corresponding speakers. We qualitatively measure the information contained in a comic page. With the initial layout automatically determined, the final comic page is obtained by maximizing such a measure and optimizing the parameters relating to the optimal display of comics. An efficient Markov chain Monte Carlo sampling algorithm is designed for the optimization. Our user study demonstrates that users much prefer our manga-style comics to purely Western style comics. Extensive experiments and comparisons against previous work also verify the effectiveness of our approach.", "The existing content aware image retargeting methods are mainly suitable for natural images, and do not perform well on line drawings. Such methods tend to regard homogeneous areas as less important. For line drawings, this is not always true.", "", "This research proposes a novel method to present \"thumbnails\" of episodes of digitized comics, in order to improve the efficiency of comic search. Comic episode thumbnails are generated based on image analysis technologies developed especially for comic images. 
Namely, the following procedures are developed for our system: automatic comic frame segmentation, text balloon extraction, and a linear regression based model to calculate the importance score of each extracted frame. The system then selects frames from each episode with high importance score, and aligns the selected frames to create the episode thumbnail, which is presented to the system user as a compact preview of the episode. User experiments conducted with actual Japanese comic images prove that the proposed method significantly decreases the time necessary to search for specific episodes from a large scaled comic data collection.", "Manga layout is a core component in manga production, characterized by its unique styles. However, stylistic manga layouts are difficult for novices to produce as it requires hands-on experience and domain knowledge. In this paper, we propose an approach to automatically generate a stylistic manga layout from a set of input artworks with user-specified semantics, thus allowing less-experienced users to create high-quality manga layouts with minimal efforts. We first introduce three parametric style models that encode the unique stylistic aspects of manga layouts, including layout structure, panel importance, and panel shape. Next, we propose a two-stage approach to generate a manga layout: 1) an initial layout is created that best fits the input artworks and layout structure model, according to a generative probabilistic framework; 2) the layout and artwork geometries are jointly refined using an efficient optimization procedure, resulting in a professional-looking manga layout. Through a user study, we demonstrate that our approach enables novice users to easily and quickly produce higher-quality layouts that exhibit realistic manga styles, when compared to a commercially-available manual layout tool." ] }
1702.06228
2949228953
Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
Our panel layout generation method is partly inspired by the recent work on manga layout @cite_17 . We use a binary tree to represent the panel layout. By contrast, the manga layout work trains a Dirichlet distribution to sample a splitting configuration, so a different Dirichlet distribution has to be trained for each kind of instance. Instead, we propose a recursive algorithm to search for the best splitting configuration along a binary tree.
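The recursive best-split search over a binary tree described above can be sketched as follows. The scoring function and the proportional split positions are illustrative assumptions; the actual model learns panel attributes from data:

```python
def split_page(panels, x, y, w, h, score):
    """Recursively split a rectangular region into panels.

    `panels` is the ordered list of panel ids to place, and
    `score(panel, x, y, w, h)` rates how well a panel fits a region
    (higher is better). Every horizontal and vertical binary split of
    the panel list is tried; the best-scoring configuration is kept.
    Returns (best_total_score, layout) with layout mapping
    panel -> (x, y, w, h).
    """
    if len(panels) == 1:
        return score(panels[0], x, y, w, h), {panels[0]: (x, y, w, h)}
    best = (float("-inf"), None)
    for k in range(1, len(panels)):
        frac = k / len(panels)  # split proportionally to panel count
        # Vertical split: left / right sub-regions.
        sl = split_page(panels[:k], x, y, w * frac, h, score)
        sr = split_page(panels[k:], x + w * frac, y, w * (1 - frac), h, score)
        if sl[0] + sr[0] > best[0]:
            best = (sl[0] + sr[0], {**sl[1], **sr[1]})
        # Horizontal split: top / bottom sub-regions.
        st = split_page(panels[:k], x, y, w, h * frac, score)
        sb = split_page(panels[k:], x, y + h * frac, w, h * (1 - frac), score)
        if st[0] + sb[0] > best[0]:
            best = (st[0] + sb[0], {**st[1], **sb[1]})
    return best

# Toy scorer: each panel prefers a region near its desired aspect ratio.
desired = {"A": 2.0, "B": 0.5, "C": 1.0}
def score(p, x, y, w, h):
    return -abs(w / h - desired[p])

total, layout = split_page(["A", "B", "C"], 0, 0, 100, 100, score)
```

The exhaustive recursion is exponential in the number of panels, which is acceptable for the small panel counts typical of a poster page; dynamic programming over (sublist, region) pairs would tame larger instances.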
{ "cite_N": [ "@cite_17" ], "mid": [ "1973733686" ], "abstract": [ "Manga layout is a core component in manga production, characterized by its unique styles. However, stylistic manga layouts are difficult for novices to produce as it requires hands-on experience and domain knowledge. In this paper, we propose an approach to automatically generate a stylistic manga layout from a set of input artworks with user-specified semantics, thus allowing less-experienced users to create high-quality manga layouts with minimal efforts. We first introduce three parametric style models that encode the unique stylistic aspects of manga layouts, including layout structure, panel importance, and panel shape. Next, we propose a two-stage approach to generate a manga layout: 1) an initial layout is created that best fits the input artworks and layout structure model, according to a generative probabilistic framework; 2) the layout and artwork geometries are jointly refined using an efficient optimization procedure, resulting in a professional-looking manga layout. Through a user study, we demonstrate that our approach enables novice users to easily and quickly produce higher-quality layouts that exhibit realistic manga styles, when compared to a commercially-available manual layout tool." ] }
1702.06228
2949228953
Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
The emergence of data and information that we need to present challenges our ability to present it manually; thus, automated layout of presentations is becoming increasingly important @cite_32 . For , early works, such as @cite_0 @cite_22 , focused largely on line breaking, paragraph arrangement and other micro-typography problems. A common way is to formulate them as a constrained optimization problem @cite_34 . More recent works pay attention to . Jacobs @cite_8 presented a grid-based dynamic programming method to select a page layout template. Damera-Venkata @cite_19 made use of a Probabilistic Document Model (PDM) to facilitate document layout. By contrast, in this paper we focus on both macro-typography problems (e.g. panel layout) and micro-typography problems (e.g. deciding the sizes of graphical elements). Additionally, rather than using simple design guidelines as in previous work @cite_0 @cite_22 , we learn our layout generation model from annotated training datasets.
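The line-breaking-as-optimization formulation mentioned above (the classic Knuth-Plass idea) can be sketched as a dynamic program. The quadratic badness of leftover space and the unit word widths are illustrative choices, not the cited systems' exact cost model:

```python
def break_lines(word_widths, line_width, space=1.0):
    """Minimal Knuth-Plass-style line breaking via dynamic programming.

    Minimizes total badness (squared leftover space per line, with the
    last line free) over all ways to break the words into lines.
    Returns (break_indices, total_badness), where break_indices lists
    the word index starting each line.
    """
    n = len(word_widths)
    INF = float("inf")
    cost = [INF] * (n + 1)   # cost[i]: best badness for words[i:]
    nxt = [None] * (n + 1)   # nxt[i]: start index of the next line
    cost[n] = 0.0
    for i in range(n - 1, -1, -1):
        width = 0.0
        for j in range(i + 1, n + 1):
            width += word_widths[j - 1] + (space if j - 1 > i else 0.0)
            if width > line_width:
                break
            # The last line incurs no badness for leftover space.
            bad = 0.0 if j == n else (line_width - width) ** 2
            if bad + cost[j] < cost[i]:
                cost[i], nxt[i] = bad + cost[j], j
    breaks, i = [], 0
    while i is not None and i < n:
        breaks.append(i)
        i = nxt[i]
    return breaks, cost[0]

# Toy example: six words of width 3 on lines of width 11
# (three words plus two spaces fill a line exactly).
breaks, total = break_lines([3, 3, 3, 3, 3, 3], 11)
```

The same "global cost over all break points, solved by DP" pattern underlies the grid-based template selection of @cite_8 , just with pages and templates in place of lines and break points.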
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_32", "@cite_0", "@cite_19", "@cite_34" ], "mid": [ "1989399140", "2048183501", "2136384865", "1977304153", "2014740140", "1632809586" ], "abstract": [ "The formalization of the architecture of documents and text formatting are the central issues of this paper. Besides a fundamental and theoretical approach toward these topics, an overview is presented of the COBATEF system. The COBATEF system is a context-based text formatting system, for which a software, as well as a hardware, implementation is available. A unique feature of the system is its automatic text-element recognition mechanism, which is context based and consequently takes advantage of the implicit structure of text. A predefined layout for each type of text element then opens the way for a fully automatic text-processing system in which user control information can be reduced to an absolute minimum.", "Grid-based page designs are ubiquitous in commercially printed publications, such as newspapers and magazines. Yet, to date, no one has invented a good way to easily and automatically adapt such designs to arbitrarily-sized electronic displays. The difficulty of generalizing grid-based designs explains the generally inferior nature of on-screen layouts when compared to their printed counterparts, and is arguably one of the greatest remaining impediments to creating on-line reading experiences that rival those of ink on paper. In this work, we present a new approach to adaptive grid-based document layout, which attempts to bridge this gap. In our approach, an adaptive layout style is encoded as a set of grid-based templates that know how to adapt to a range of page sizes and other viewing conditions. These templates include various types of layout elements (such as text, figures, etc.) 
and define, through constraint-based relationships, just how these elements are to be laid out together as a function of both the properties of the content itself, such as a figure's size and aspect ratio, and the properties of the viewing conditions under which the content is being displayed. We describe an XML-based representation for our templates and content, which maintains a clean separation between the two. We also describe the various parts of our research prototype system: a layout engine for formatting the page; a paginator for determining a globally optimal allocation of content amongst the pages, as well as an optimal pairing of templates with content; and a graphical user interface for interactively creating adaptive templates. We also provide numerous examples demonstrating the capabilities of this prototype, including this paper, itself, which has been laid out with our system.", "We review the literature on automatic document formatting with an emphasis on recent work in the field. One common way to frame document formatting is as a constrained optimization problem where decision variables encode element placement, constraints enforce required geometric relationships, and the objective function measures layout quality. We present existing research using this framework, describing the kind of optimization problem being solved and the basic optimization techniques used to solve it. Our review focuses on the formatting of primarily textual documents, including both micro- and macro-typographic concerns. We also cover techniques for automatic table layout. Related problems such as widget and diagram layout, as well as temporal layout issues that arise in multimedia documents are outside the scope of this review.", "This paper discusses a new approach to the problem of dividing the text of a paragraph into lines of approximately equal length. 
Instead of simply making decisions one line at a time, the method considers the paragraph as a whole, so that the final appearance of a given line might be influenced by the text on succeeding lines. A system based on three simple primitive concepts called ‘boxes’, ‘glue’, and ‘penalties’ provides the ability to deal satisfactorily with a wide variety of typesetting problems in a unified framework, using a single algorithm that determines optimum breakpoints. The algorithm avoids backtracking by a judicious use of the techniques of dynamic programming. Extensive computational experience confirms that the approach is both efficient and effective in producing high-quality output. The paper concludes with a brief history of line-breaking methods, and an appendix presents a simplified algorithm that requires comparatively few resources.", "We present a new paradigm for automated document composition based on a generative, unified probabilistic document model (PDM) that models document composition. The model formally incorporates key design variables such as content pagination, relative arrangement possibilities for page elements and possible page edits. These design choices are modeled jointly as coupled random variables (a Bayesian Network) with uncertainty modeled by their probability distributions. The overall joint probability distribution for the network assigns higher probability to good design choices. Given this model, we show that the general document layout problem can be reduced to probabilistic inference over the Bayesian network. We show that the inference task may be accomplished efficiently, scaling linearly with the content in the best case. 
We provide a useful specialization of the general model and use it to illustrate the advantages of soft probabilistic encodings over hard one-way constraints in specifying design aesthetics.", "Layout refers to the process of determining the sizes and positions of the visual objects that are part of an information presentation. Automated layout refers to the use of a computer program to automate either all or part of the layout process. This field of research lies at the crossroads between artificial intelligence and human computer interaction. Automated layout of presentations is becoming increasingly important as the amount of data that we need to present rapidly overtakes our ability to present it manually. We survey and analyze the techniques used by research systems that have automated layout components and suggest possible areas of future work." ] }
1702.06228
2949228953
Researchers often summarize their work in the form of scientific posters. Posters provide a coherent and efficient way to convey core ideas expressed in scientific papers. Generating a good scientific poster, however, is a complex and time consuming cognitive task, since such posters need to be readable, informative, and visually aesthetic. In this paper, for the first time, we study the challenging problem of learning to generate posters from scientific papers. To this end, a data-driven framework, that utilizes graphical models, is proposed. Specifically, given content to display, the key elements of a good poster, including attributes of each panel and arrangements of graphical elements are learned and inferred from data. During the inference stage, an MAP inference framework is employed to incorporate some design principles. In order to bridge the gap between panel attributes and the composition within each panel, we also propose a recursive page splitting algorithm to generate the panel layout for a poster. To learn and validate our model, we collect and release a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which consists of scientific papers and corresponding posters with exhaustively labelled panels and attributes. Qualitative and quantitative results indicate the effectiveness of our approach.
Another piece of related work is @cite_1 , which made use of an energy-based model derived from design principles for graphic design layout. However, it regards text as a rectangular block rather than as a text flow, which is inappropriate for scientific poster generation. Harrington @cite_6 described a measure of document aesthetics, and an aesthetics-driven layout engine is proposed in @cite_29 . However, these approaches do not put constraints on the ordering of content, which is clearly important for scientific poster generation.
{ "cite_N": [ "@cite_29", "@cite_1", "@cite_6" ], "mid": [ "1965871650", "2030073376", "" ], "abstract": [ "The digital networked world is enabling and requiring a new emphasis on personalized document creation. The new, more dynamic digital environment demands tools that can reproduce both the contents and the layout automatically, tailored to personal needs and transformed for the presentation device, and can enable novices to easily create such documents. In order to achieve such automated document assembly and transformation, we have formalized custom document creation as a multiobjective optimization problem, and use a genetic algorithm to assemble and transform compound personalized documents. While we have found that such an automated process for document creation opens new possibilities and new workflows, we have also found several areas where further research would enable the approach to be more broadly and practically applied. This paper reviews the current system and outlines several areas where future research will broaden its current capabilities.", "This paper presents an approach for automatically creating graphic design layouts using a new energy-based model derived from design principles. The model includes several new algorithms for analyzing graphic designs, including the prediction of perceived importance, alignment detection, and hierarchical segmentation. Given the model, we use optimization to synthesize new layouts for a variety of single-page graphic designs. Model parameters are learned with Nonlinear Inverse Optimization (NIO) from a small number of example layouts. To demonstrate our approach, we show results for applications including generating design layouts in various styles, retargeting designs to new sizes, and improving existing designs. We also compare our automatic results with designs created using crowdsourcing and show that our approach performs slightly better than novice designers.", "" ] }
1702.05839
2594620777
This paper introduces Progressively Diffused Networks (PDNs) for unifying multi-scale context modeling with deep feature learning, by taking semantic image segmentation as an exemplar application. Prior neural networks, such as ResNet, tend to enhance representational power by increasing the depth of architectures and driving the training objective across layers. However, we argue that spatial dependencies in different layers, which generally represent the rich contexts among data elements, are also critical to building deep and discriminative representations. To this end, our PDNs enables to progressively broadcast information over the learned feature maps by inserting a stack of information diffusion layers, each of which exploits multi-dimensional convolutional LSTMs (Long-Short-Term Memory Structures). In each LSTM unit, a special type of atrous filters are designed to capture the short range and long range dependencies from various neighbors to a certain site of the feature map and pass the accumulated information to the next layer. From the extensive experiments on semantic image segmentation benchmarks (e.g., ImageNet Parsing, PASCAL VOC2012 and PASCAL-Part), our framework demonstrates the effectiveness to substantially improve the performances over the popular existing neural network models, and achieves state-of-the-art on ImageNet Parsing for large scale semantic segmentation.
A typical representation learning model is the Convolutional Neural Network (CNN) @cite_8 @cite_28 @cite_20 , which is designed to process data with multiple arrays such as images @cite_28 or videos @cite_9 . By stacking several convolution-pooling layers, this model transforms the visual representation from one level into a slightly more abstract level. Recently, many works have enhanced the representational power of CNNs by increasing the depth of the architecture @cite_32 @cite_38 @cite_15 @cite_17 , achieving great success on image classification @cite_28 @cite_17 . Dense prediction tasks, such as semantic segmentation, have also benefited from such deep feature learning @cite_20 @cite_29 . In @cite_20 , Long et al. first replaced the fully-connected layers of a CNN with convolutional layers, making pixel-wise prediction over the whole image possible with a deep model. Chen et al. @cite_37 further proposed atrous convolution to explicitly control the resolution of feature responses, and introduced atrous spatial pyramid pooling for dense prediction at multiple scales.
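The atrous (dilated) convolution mentioned above can be sketched minimally in 1-D: the same 3-tap filter covers a wider field of view as the dilation rate grows, with no extra parameters. This is an illustration of the idea only, not code from any cited paper.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """Valid 1-D convolution with `rate` samples between filter taps."""
    k = len(w)
    span = (k - 1) * rate              # effective receptive field minus one
    out = np.zeros(len(x) - span)
    for i in range(len(out)):
        # sample the input with stride `rate` between taps
        out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
dense = atrous_conv1d(x, w, rate=1)    # taps at offsets 0, 1, 2
dilated = atrous_conv1d(x, w, rate=2)  # taps at offsets 0, 2, 4: wider view
```

With rate 2, each output sees a 5-sample window instead of 3, using the same three weights.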
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_8", "@cite_28", "@cite_9", "@cite_29", "@cite_32", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2950179405", "2412782625", "2147800946", "", "", "2964288706", "2962835968", "", "2952632681", "2194775991" ], "abstract": [ "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. 
ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "", "", "Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. 
This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. 
Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. 
Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
Meanwhile, in order to explicitly discover the intricate structures in visual data for dense labeling, graphical models have been applied to exploit rich information in the image (e.g., long-range dependencies or high-order potentials) by defining spatial constraints. In @cite_29 , the confidence maps generated by Fully Convolutional Networks (FCN) @cite_20 were fed into a Conditional Random Field (CRF) with simple pairwise potentials for post-processing, but this model treated the FCN and CRF as separate components, preventing joint optimization of the model. In contrast, Schwing et al. @cite_24 jointly trained the FCN and a Markov Random Field (MRF) by passing the error generated by the MRF back to the neural network. However, the iterative inference algorithm used in this method (i.e., mean-field inference) is time-consuming. To improve computational efficiency, Liu et al. @cite_12 solved the MRF with convolution operations, devising additional layers that approximate mean-field inference for the pairwise terms. Although these methods significantly improve the performance of dense labeling, the contextual information is still not explicitly encoded into the pixel-wise representations.
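The mean-field inference referred to above can be sketched for the simplest case: a chain-structured CRF with a Potts pairwise term. This is a hedged toy illustration (real systems use dense pairwise potentials and many labels); `unary` holds per-node log-scores from a classifier and `mu` is the assumed penalty for disagreeing neighbor labels.

```python
import numpy as np

def mean_field_step(unary, q, mu=1.0):
    """One synchronous mean-field update of the label marginals q."""
    n, L = unary.shape
    msg = np.zeros_like(q)
    for i in range(n):
        for j in (i - 1, i + 1):                   # chain neighbors
            if 0 <= j < n:
                # expected Potts penalty: mass the neighbor puts on other labels
                msg[i] += mu * (q[j].sum() - q[j])
    logits = unary - msg
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # renormalize per node

unary = np.array([[2.0, 0.0],
                  [0.0, 0.5],   # noisy node: weakly prefers the wrong label
                  [2.0, 0.0]])
q = np.exp(unary) / np.exp(unary).sum(axis=1, keepdims=True)
for _ in range(3):
    q = mean_field_step(unary, q)  # neighbors pull the noisy node into agreement
```

After a few iterations the pairwise term overrides the weak, noisy unary score of the middle node, which is exactly the smoothing effect the CRF post-processing exploits.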
{ "cite_N": [ "@cite_24", "@cite_29", "@cite_12", "@cite_20" ], "mid": [ "2102492119", "2964288706", "2111077768", "2952632681" ], "abstract": [ "Convolutional neural networks with many layers have recently been shown to achieve excellent results on many high-level tasks such as image classification, object detection and more recently also semantic segmentation. Particularly for semantic segmentation, a two-stage procedure is often employed. Hereby, convolutional networks are trained to provide good local pixel-wise features for the second step being traditionally a more global graphical model. In this work we unify this two-stage process into a single joint training algorithm. We demonstrate our method on the semantic image segmentation task and show encouraging results on the challenging PASCAL VOC 2012 dataset.", "Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. 
We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "This paper addresses semantic image segmentation by incorporating rich information into Markov Random Field (MRF), including high-order relations and mixture of label contexts. Unlike previous works that optimized MRFs using iterative algorithm, we solve MRF by proposing a Convolutional Neural Network (CNN), namely Deep Parsing Network (DPN), which enables deterministic end-toend computation in a single forward pass. Specifically, DPN extends a contemporary CNN architecture to model unary terms and additional layers are carefully devised to approximate the mean field algorithm (MF) for pairwise terms. It has several appealing properties. First, different from the recent works that combined CNN and MRF, where many iterations of MF were required for each training image during back-propagation, DPN is able to achieve high performance by approximating one iteration of MF. Second, DPN represents various types of pairwise terms, making many existing works as its special cases. Third, DPN makes MF easier to be parallelized and speeded up in Graphical Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC 2012 dataset, where a single DPN model yields a new state-of-the-art segmentation accuracy of 77.5 .", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image." ] }
In the literature, the Long Short-Term Memory (LSTM) network has been introduced to handle long-range dependencies in representation modeling, and this advanced Recurrent Neural Network (RNN) has achieved great success in many intelligent tasks @cite_26 @cite_16 @cite_13 @cite_6 . In recent years, it has been extended to multi-dimensional communication @cite_21 @cite_11 @cite_39 and adapted to represent rich contexts in the image spatial domain @cite_3 @cite_10 . In @cite_3 , a recent advance in LSTM-based context modeling was achieved by considering both short-range dependencies from local areas and long-distance global information from the whole image. Liang et al. @cite_10 further extended this work from multi-dimensional data to general graph-structured data, constructing an adaptive graph topology to propagate contextual information between adjacent superpixels. Nevertheless, in these works, the feature representation of each position is affected by only a limited set of local factors (i.e., the adjacent positions), which restricts the capacity to capture diverse visual correlations over a large range. Different from using limited local LSTM units, the proposed PDNs capture short-range and long-range dependencies from various neighbors and can generate more informative representations for pixel-wise prediction.
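A single LSTM cell step, the sequential building block that the multi-dimensional and graph-structured variants above generalize, can be sketched as follows. The gate weights here are random placeholders purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step: W maps the concatenated [x; h] to 4 stacked gate pre-activations."""
    z = W @ np.concatenate([x, h]) + b
    d = len(h)
    i, f, o, g = z[:d], z[d:2*d], z[2*d:3*d], z[3*d:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # gated memory update
    h_new = sigmoid(o) * np.tanh(c_new)                # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W = rng.normal(size=(4 * d_h, d_in + d_h)) * 0.1
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(5):                       # run a short toy sequence
    h, c = lstm_step(rng.normal(size=d_in), h, c, W, b)
```

The forget gate `f` is what lets the cell carry information over long ranges; the multi-dimensional variants simply gather `h` and `c` from several spatial neighbors instead of one predecessor.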
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_21", "@cite_6", "@cite_39", "@cite_3", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "", "2951729963", "1771459135", "", "", "2963758239", "", "2950178297", "2953250761" ], "abstract": [ "", "By taking the semantic object parsing task as an exemplar application scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network, which is the generalization of LSTM from sequential data or multi-dimensional data to general graph-structured data. Particularly, instead of evenly and fixedly dividing an image to pixels or patches in existing multi-dimensional LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each arbitrary-shaped superpixel as a semantically consistent node, and adaptively construct an undirected graph for each image, where the spatial relations of the superpixels are naturally used as edges. Constructed on such an adaptive graph topology, the Graph LSTM is more naturally aligned with the visual patterns in the image (e.g., object boundaries or appearance similarities) and provides a more economical information propagation route. Furthermore, for each optimization step over Graph LSTM, we propose to use a confidence-driven scheme to update the hidden and memory states of nodes progressively till all nodes are updated. In addition, for each node, the forgets gates are adaptively learned to capture different degrees of semantic correlation with neighboring nodes. Comprehensive evaluations on four diverse semantic object parsing datasets well demonstrate the significant superiority of our Graph LSTM over other state-of-the-art solutions.", "This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images. 
The network differs from existing deep LSTM architectures in that the cells are connected between network layers as well as along the spatiotemporal dimensions of the data. The network provides a unified way of using LSTM for both deep and sequential computation. We apply the model to algorithmic tasks such as 15-digit integer addition and sequence memorization, where it is able to significantly outperform the standard LSTM. We then give results for two empirical tasks. We find that 2D Grid LSTM achieves 1.47 bits per character on the Wikipedia character prediction benchmark, which is state-of-the-art among neural approaches. In addition, we use the Grid LSTM to define a novel two-dimensional translation model, the Reencoder, and show that it outperforms a phrase-based reference system on a Chinese-to-English translation task.", "", "", "Semantic object parsing is a fundamental task for understanding objects in detail in computer vision community, where incorporating multi-level contextual information is critical for achieving such fine-grained pixel-level recognition. Prior methods often leverage the contextual information through post-processing predicted confidence maps. In this work, we propose a novel deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly incorporate short-distance and long-distance spatial dependencies into the feature learning over all pixel positions. In each LG-LSTM layer, local guidance from neighboring positions and global guidance from the whole image are imposed on each position to better exploit complex local and global contextual information. Individual LSTMs for distinct spatial dimensions are also utilized to intrinsically capture various spatial layouts of semantic parts in the images, yielding distinct hidden and memory cells of each position for each dimension. 
In our parsing approach, several LG-LSTM layers are stacked and appended to the intermediate convolutional layers to directly enhance visual features, allowing network parameters to be learned in an end-to-end way. The long chains of sequential computation by stacked LG-LSTM layers also enable each pixel to sense a much larger region for inference benefiting from the memorization of previous dependencies in all positions along all dimensions. Comprehensive evaluations on three public datasets well demonstrate the significant superiority of our LG-LSTM over other state-of-the-art methods.", "", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. 
We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting." ] }
1702.05729
2592388817
Searching persons in large-scale image databases with the query of natural language description has important applications in video surveillance. Existing methods mainly focused on searching persons with image-based or attribute-based queries, which have major limitations for a practical usage. In this paper, we study the problem of person search with natural language description. Given the textual description of a person, the algorithm of the person search is required to rank all the samples in the person database then retrieve the most relevant sample corresponding to the queried description. Since there is no person dataset or benchmark with textual description available, we collect a large-scale person description dataset with detailed natural language annotations and person samples from various sources, termed as CUHK Person Description Dataset (CUHK-PEDES). A wide range of possible models and baselines have been evaluated and compared on the person search benchmark. A Recurrent Neural Network with Gated Neural Attention mechanism (GNA-RNN) is proposed to establish the state-of-the-art performance on person search.
Different from convolutional neural networks, which work well for image classification @cite_1 @cite_7 and object detection @cite_23 @cite_29 @cite_32 , recurrent neural networks are more suitable for processing sequential data. A large number of deep models for vision tasks @cite_39 @cite_2 @cite_21 @cite_31 @cite_11 @cite_40 @cite_24 have been proposed in recent years. For image captioning, Mao et al. @cite_0 learned a feature embedding for each word in a sentence and connected it with the image CNN features through a multi-modal layer to generate image captions. Vinyals et al. @cite_33 extracted high-level image features from a CNN and fed them into an LSTM for estimating the output sequence. NeuralTalk @cite_13 looked for a latent alignment between segments of sentences and image regions in a joint embedding space for sentence generation.
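The CNN-to-RNN captioning recipe described above can be sketched as a greedy decoding loop: an image feature seeds the recurrent state, then words are emitted one at a time until an end token. The vocabulary, dimensions, and weights below are toy placeholder assumptions, not any cited model.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 8, 16                          # toy vocabulary and hidden sizes
W_h = rng.normal(size=(D, D)) * 0.1   # recurrence
W_e = rng.normal(size=(D, V)) * 0.1   # word embeddings (one column per word)
W_out = rng.normal(size=(V, D)) * 0.1 # hidden state -> word scores
END = 0                               # index of the end-of-sentence token

def greedy_caption(img_feat, max_len=10):
    h = np.tanh(img_feat)             # seed the state with the CNN feature
    words = []
    for _ in range(max_len):
        w = int((W_out @ h).argmax())        # pick the highest-scoring word
        if w == END:
            break
        words.append(w)
        h = np.tanh(W_h @ h + W_e[:, w])     # fold the emitted word back in
    return words

cap = greedy_caption(rng.normal(size=D))
```

Training such a model maximizes the likelihood of the reference caption; at test time the same recurrence is simply unrolled greedily (or with beam search) as above.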
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_29", "@cite_21", "@cite_1", "@cite_32", "@cite_39", "@cite_24", "@cite_0", "@cite_40", "@cite_23", "@cite_2", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "2951912364", "2949650786", "2336589871", "2950728047", "", "2590174509", "2950178297", "2949769367", "1811254738", "2122180654", "", "2950761309", "2963758027", "2951805548", "2189070436" ], "abstract": [ "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. 
We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks, such as GoogleNet and VGG, novel object detection frameworks, such as R-CNN and its successors, Fast R-CNN, and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos. Temporal and contextual information of videos are not fully investigated and utilized. In this paper, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neueral networks. 
The proposed framework won newly introduced an object-detection-from-video task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015. Code is publicly available at https: github.com myfavouritekk T-CNN .", "In this paper we approach the novel problem of segmenting an image based on a natural language expression. This is different from traditional semantic segmentation over a predefined set of semantic classes, as e.g., the phrase \"two men sitting on the right bench\" requires segmenting only the two people on the right bench and no one standing or sitting on another bench. Previous approaches suitable for this task were limited to a fixed set of categories and or rectangular regions. To produce pixelwise segmentation for the language expression, we propose an end-to-end trainable recurrent and convolutional network model that jointly learns to process visual and linguistic information. In our model, a recurrent LSTM network is used to encode the referential expression into a vector representation, and a fully convolutional network is used to a extract a spatial feature map from the image and output a spatial response map for the target object. We demonstrate on a benchmark dataset that our model can produce quality segmentation output from the natural language expression, and outperforms baseline methods by a large margin.", "", "Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. 
Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. 
The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html .", "In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description.
We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.", "", "We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. 
The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external region proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and a Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state-of-the-art approaches in both generation and retrieval settings.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "In this paper, we present the mQA model, which is able to answer questions about the content of an image.
The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: http://idl.baidu.com/FM-IQA.html ." ] }
1702.05729
2592388817
Searching persons in large-scale image databases with the query of natural language description has important applications in video surveillance. Existing methods mainly focused on searching persons with image-based or attribute-based queries, which have major limitations for practical usage. In this paper, we study the problem of person search with natural language description. Given the textual description of a person, the algorithm of the person search is required to rank all the samples in the person database and then retrieve the most relevant sample corresponding to the queried description. Since there is no person dataset or benchmark with textual description available, we collect a large-scale person description dataset with detailed natural language annotations and person samples from various sources, termed as CUHK Person Description Dataset (CUHK-PEDES). A wide range of possible models and baselines have been evaluated and compared on the person search benchmark. A Recurrent Neural Network with Gated Neural Attention mechanism (GNA-RNN) is proposed to establish the state-of-the-art performance on person search.
Visual QA methods were proposed to answer questions about given images @cite_18 @cite_43 @cite_25 @cite_8 @cite_5 @cite_34 . Yang et al. @cite_25 presented a stacked attention network that refined the joint features by recursively attending question-related image regions, which leads to better QA accuracy. Noh et al. @cite_43 learned a dynamic parameter layer with hashing techniques, which adaptively adjusts image features based on different questions for accurate answer classification.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_43", "@cite_5", "@cite_34", "@cite_25" ], "mid": [ "2949218037", "2475269242", "2175714310", "2952246170", "2412400526", "2171810632" ], "abstract": [ "This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.", "Visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image are understood as indicated by linguistic context of the question, in order to generate the accurate answers. Thus, it is critical to build an efficient embedding of images and texts. We implement DualNet, which fully takes advantage of discriminative power of both image and textual features by separately performing two operations. Building an ensemble of DualNet further boosts the performance. Contrary to common belief, our method proved effective in both real images and abstract scenes, in spite of significantly different properties of respective domain. 
Our method was able to outperform previous state-of-the-art methods in the real images category even without explicitly employing an attention mechanism, and also outperformed our own state-of-the-art method in the abstract scenes category, which recently won the first place in VQA Challenge 2016.", "We tackle the image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network---joint network with the CNN for ImageQA and the parameter prediction network---is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.", "We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question).
Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.", "Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. 
We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer." ] }
1702.05729
2592388817
Searching persons in large-scale image databases with the query of natural language description has important applications in video surveillance. Existing methods mainly focused on searching persons with image-based or attribute-based queries, which have major limitations for practical usage. In this paper, we study the problem of person search with natural language description. Given the textual description of a person, the algorithm of the person search is required to rank all the samples in the person database and then retrieve the most relevant sample corresponding to the queried description. Since there is no person dataset or benchmark with textual description available, we collect a large-scale person description dataset with detailed natural language annotations and person samples from various sources, termed as CUHK Person Description Dataset (CUHK-PEDES). A wide range of possible models and baselines have been evaluated and compared on the person search benchmark. A Recurrent Neural Network with Gated Neural Attention mechanism (GNA-RNN) is proposed to establish the state-of-the-art performance on person search.
Visual-semantic embedding methods @cite_3 @cite_13 @cite_20 @cite_41 @cite_10 learned to embed both language and images into a common space for image classification and retrieval. Reed et al. @cite_20 trained an end-to-end CNN-RNN model which jointly embeds the images and fine-grained visual descriptions into the same feature space for zero-shot learning. Text-to-image retrieval can be conducted by calculating the distances in the embedding space. Frome et al. @cite_3 associated semantic knowledge of text with visual objects by constructing a deep visual-semantic model that re-trained the neural language model and visual object recognition model jointly.
{ "cite_N": [ "@cite_13", "@cite_41", "@cite_3", "@cite_10", "@cite_20" ], "mid": [ "2951805548", "1958932515", "2123024445", "2396147015", "2951538594" ], "abstract": [ "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. 
Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. 
Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.", "", "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch; i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech UCSD Birds 200-2011 dataset." ] }
1702.05650
2950409738
In this paper, we propose a simple but effective method for fast image segmentation. We re-examine the locality-preserving character of spectral clustering by constructing a graph over image regions with both global and local connections. Our novel approach to build graph connections relies on two key observations: 1) local region pairs that co-occur frequently will have a high probability to reside on a common object; 2) spatially distant regions in a common object often exhibit similar visual saliency, which implies their neighborship in a manifold. We present a novel energy function to efficiently conduct graph partitioning. Based on multiple high quality partitions, we show that the generated eigenvector histogram based representation can automatically drive effective unary potentials for a hierarchical random field model to produce multi-class segmentation. Extensive experiments on the BSDS500 benchmark and the large-scale PASCAL VOC and COCO datasets demonstrate the competitive segmentation accuracy and significantly improved efficiency of our proposed method compared with other state-of-the-art methods.
Image segmentation has been studied in the computer vision community for decades. Shi and Malik @cite_37 propose normalized cuts (NCut), which advanced spectral-clustering-based image region segmentation. @cite_20 extends it to multi-class segmentation. Among region-based segmentation methods, diffusion-based approaches @cite_3 @cite_57 , GraphCut @cite_0 , GrabCut @cite_0 , etc. @cite_39 @cite_30 @cite_33 @cite_60 , have been explored to partition images. Building successful affinity matrices is critical @cite_58 . Many subsequent approaches have computed more effective affinity matrices using elaborately designed low-level features and metrics @cite_52 @cite_24 @cite_14 @cite_9 . To overcome the limitation of NCut in capturing affinities of distant pixels, several methods @cite_35 @cite_56 @cite_9 @cite_55 have been proposed based on multi-scale affinity strategies. However, dense affinity suffers from an optimization bottleneck, although approximation algorithms have been explored @cite_24 @cite_56 @cite_6 . Our method is able to capture both local and global affinities while keeping the affinity matrix sparse.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_14", "@cite_33", "@cite_60", "@cite_9", "@cite_55", "@cite_52", "@cite_3", "@cite_39", "@cite_57", "@cite_0", "@cite_24", "@cite_56", "@cite_6", "@cite_58", "@cite_20" ], "mid": [ "2186836976", "2120200937", "2121947440", "2110158442", "2592677135", "", "2010975286", "", "", "2084635976", "", "2125436763", "2124351162", "2097323414", "2108944208", "1991367009", "", "2135674549" ], "abstract": [ "This paper presents a novel image segmentation framework that combines image segmentation and feature extraction into a unified model. The proposed model consists of two parts: the segmentation part and the multiscale decomposition part. In the model, the segmentation part relies on the image intensities in the regions of interest while the multiscale decomposition part depends on the features in different scales. The multiscale decomposition facilitates the process of segmentation since the region of interest can be easily detected from a proper scale. The total variation projection regularization (TVPR) is used to preserve geometric shape of the segmented regions. According to the geometric significance of TVPR parameters, an adaptive TVPR parameters selection method is presented and edges of different region can be well preserved. The proposed method is able to deal with intensity inhomogeneities and mixed noises often occurred in real-world images, which present challenges in image segmentation. Numerical examples on synthetic and real images are given to demonstrate the effectiveness of the proposed method. This paper proposes a novel image segmentation framework.A multiscale image segmentation method is presented within our framework.Total variation projection regularization (TVPR) is used to the proposed model.We present an adaptive TVPR parameters selection method for image segmentation.The experimental results show the effectiveness of the proposed method.", "Perceptual organization is scale-invariant. 
In turn, a segmentation that separates features consistently at all scales is the desired one that reveals the underlying structural organization of an image. Addressing cross-scale correspondence with interior pixels, we develop this intuition into a general segmenter that handles texture and illusory contours through edges entirely without any explicit characterization of texture or curvilinearity. Experimental results demonstrate that our method not only performs on par with either texture segmentation or boundary completion methods on their specialized examples, but also works well on a variety of real images.", "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. 
Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.", "We present an unsupervised multilevel segmentation scheme for automatically segmenting grayscale and color images.Fuzzy 2-partition entropy is combined with Graph Cut to form a bi-level segmentation operator that splits a given region into 2 parts based on both global optimal threshold and local spatial coherence.A multilevel segmentation scheme iteratively performs on selected regions and color channels, producing a coarse-to-fine hierarchy of segments.The presented algorithm is evaluated using the Berkeley Segmentation Database and achieves competitive results compared with the state-of-the-art methods. The fuzzy c-partition entropy has been widely adopted as a global optimization technique for finding the optimal thresholds when performing multilevel gray image segmentation. Nevertheless, existing fuzzy c-partition entropy approaches generally have two limitations, i.e., partition number cneeds to be manually tuned for different input and the methods can process grayscale images only. To address these two limitations, an unsupervised multilevel segmentation algorithm is presented in this paper. The core step of our algorithm is a bi-level segmentation operator, which uses binary graph cuts to maximize both fuzzy 2-partition entropy and segmentation smoothness. 
By iteratively performing this bi-level segmentation operator, multilevel image segmentation is achieved in a hierarchical manner: Starting from the input color image, our algorithm first picks the color channel that can best segment the image into two labels, and then iteratively selects channels to further split each labels until convergence. The experimental results demonstrate the presented hierarchical segmentation scheme can efficiently segment both grayscale and color images. Quantitative evaluations over classic gray images and the Berkeley Segmentation Database show that our method is comparable to the state-of-the-art multi-scale segmentation methods, yet has the advantage of being unsupervised, efficient, and easy to implement.", "", "Grouping cues can affect the performance of segmentation greatly. In this paper, we show that superpixels (image segments) can provide powerful grouping cues to guide segmentation, where superpixels can be collected easily by (over)-segmenting the image using any reasonable existing segmentation algorithms. Generated by different algorithms with varying parameters, superpixels can capture diverse and multi-scale visual patterns of a natural image. Successful integration of the cues from a large multitude of superpixels presents a promising yet not fully explored direction. In this paper, we propose a novel segmentation framework based on bipartite graph partitioning, which is able to aggregate multi-layer superpixels in a principled and very effective manner. Computationally, it is tailored to unbalanced bipartite graph structure and leads to a highly efficient, linear-time spectral algorithm. Our method achieves significantly better performance on the Berkeley Segmentation Database compared to state-of-the-art techniques.", "", "", "Computing a faithful affinity map is essential to the clustering and segmentation tasks. 
In this paper, we propose a graph-based affinity (metric) learning method and show its application to image clustering and segmentation. Our method, self-diffusion (SD), performs a diffusion process by propagating the similarity mass along the intrinsic manifold of data points. Theoretical analysis is given to the SD algorithm and we provide a way of deriving the critical time stamp t. Our method therefore has nearly no parameter tuning and leads to significantly improved affinity maps, which help to greatly enhance the quality of clustering. In addition, we show that much improved image segmentation results can be obtained by combining SD with e.g. the normalized cuts algorithm. The proposed method can be used to deliver robust affinity maps for a range of problems.", "", "Seeded image segmentation is a popular type of supervised image segmentation in computer vision and image processing. Previous methods of seeded image segmentation treat the image as a weighted graph and minimize an energy function on the graph to produce a segmentation. In this paper, we propose to conduct the seeded image segmentation according to the result of a heat diffusion process in which the seeded pixels are considered to be the heat sources and the heat diffuses on the image starting from the sources. After the diffusion reaches a stable state, the image is segmented based on the pixel temperatures. It is also shown that our proposed framework includes the RandomWalk algorithm for image segmentation as a special case which diffuses only along the two coordinate axes. To better control diffusion, we propose to incorporate the attributes (such as the geometric structure) of the image into the diffusion process, yielding an anisotropic diffusion method for image segmentation. The experiments show that the proposed anisotropic diffusion method usually produces better segmentation results. 
In particular, when the method is tested using the groundtruth dataset of Microsoft Research Cambridge (MSRC), an error rate of 4.42 can be achieved, which is lower than the reported error rates of other state-of-the-art algorithms.", "The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.", "We present a multiscale spectral image segmentation algorithm. In contrast to most multiscale image processing, this algorithm works on multiple scales of the image in parallel, without iteration, to capture both coarse and fine level details. The algorithm is computationally efficient, allowing to segment large images. We use the normalized cut graph partitioning framework of image segmentation. We construct a graph encoding pairwise pixel affinity, and partition the graph for image segmentation. We demonstrate that large image graphs can be compressed into multiple scales capturing image structure at increasingly large neighborhood. 
We show that the decomposition of the image segmentation graph into different scales can be determined by ecological statistics on the image grouping cues. Our segmentation algorithm works simultaneously across the graph scales, with an inter-scale constraint to ensure communication and consistency between the segmentations at each scale. As the results show, we incorporate long-range connections with linear-time complexity, providing high-quality segmentations efficiently. Images that previously could not be processed because of their size have been accurately segmented thanks to this method.", "We reexamine the role of multiscale cues in image segmentation using an architecture that constructs a globally coherent scale-space output representation. This characteristic is in contrast to many existing works on bottom-up segmentation, which prematurely compress information into a single scale. The architecture is a standard extension of Normalized Cuts from an image plane to an image pyramid, with cross-scale constraints enforcing consistency in the solution while allowing emergence of coarse-to-fine detail. We observe that multiscale processing, in addition to improving segmentation quality, offers a route by which to speed computation. We make a significant algorithmic advance in the form of a custom multigrid eigensolver for constrained Angular Embedding problems possessing coarse-to-fine structure. Multiscale Normalized Cuts is a special case. Our solver builds atop recent results on randomized matrix approximation, using a novel interpolation operation to mold its computational strategy according to cross-scale constraints in the problem definition. Applying our solver to multiscale segmentation problems demonstrates speedup by more than an order of magnitude. 
This speedup is at the algorithmic level and carries over to any implementation target.", "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "", "We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigen-decomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported." ] }
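The fuzzy 2-partition entropy criterion used by the bi-level segmentation operator above can be made concrete with a small sketch. This is an illustrative reimplementation under my own simplifications (a linear membership ramp between two free parameters and an exhaustive grid search), not the cited authors' code, which additionally combines the criterion with graph cuts for spatial coherence:

```python
import math
import numpy as np

def fuzzy_2partition_entropy(hist, a, c):
    """Entropy of a fuzzy 2-partition of a gray-level histogram.

    Membership in the 'dark' class falls linearly from 1 at levels <= a
    to 0 at levels >= c; the 'bright' class is the complement.
    """
    g = np.arange(len(hist))
    mu_dark = np.clip((c - g) / (c - a), 0.0, 1.0)
    p = hist / hist.sum()
    p_dark = float((p * mu_dark).sum())
    h = 0.0
    for q in (p_dark, 1.0 - p_dark):
        if q > 0:                       # convention: 0 * log 0 = 0
            h -= q * math.log(q)
    return h

def best_threshold(hist):
    """Exhaustively search (a, c) for maximal entropy; the midpoint
    (a + c) // 2 then serves as the bi-level segmentation threshold."""
    best_h, best_t = -1.0, 0
    L = len(hist)
    for a in range(L - 1):
        for c in range(a + 1, L):
            h = fuzzy_2partition_entropy(hist, a, c)
            if h > best_h:
                best_h, best_t = h, (a + c) // 2
    return best_h, best_t
```

On a balanced bimodal histogram, the entropy attains its maximum ln 2 when the fuzzy partition splits the mass evenly, placing the threshold between the two modes.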
1702.05650
2950409738
In this paper, we propose a simple but effective method for fast image segmentation. We re-examine the locality-preserving character of spectral clustering by constructing a graph over image regions with both global and local connections. Our novel approach to building graph connections relies on two key observations: 1) local region pairs that co-occur frequently have a high probability of residing on a common object; 2) spatially distant regions in a common object often exhibit similar visual saliency, which implies their neighborship in a manifold. We present a novel energy function to efficiently conduct graph partitioning. Based on multiple high-quality partitions, we show that the generated eigenvector-histogram-based representation can automatically drive effective unary potentials for a hierarchical random field model to produce multi-class segmentation. Extensive experiments on the BSDS500 benchmark and the large-scale PASCAL VOC and COCO datasets demonstrate the competitive segmentation accuracy and significantly improved efficiency of our proposed method compared with other state-of-the-art methods.
Edge detection plays an extremely important role in region-based image segmentation @cite_10 @cite_53 @cite_11 @cite_13 @cite_19 . For example, Convolutional Oriented Boundaries (COB) proposes an accurate boundary detection method using convolutional neural networks (CNNs) and combines it with @cite_45 to perform image and object segmentation. Another popular direction is semantic segmentation, where current methods use CNNs @cite_42 @cite_49 @cite_7 to predict the semantic label of each pixel; these methods rely on large-scale training data. In contrast, our method aims at partitioning an image into regions that accurately segment its objects, in an unsupervised manner, by observing the image's internal statistics.
{ "cite_N": [ "@cite_7", "@cite_10", "@cite_53", "@cite_42", "@cite_19", "@cite_45", "@cite_49", "@cite_13", "@cite_11" ], "mid": [ "1745334888", "2950830676", "2574253917", "1903029394", "2325368899", "2168804568", "2412782625", "2405856298", "" ], "abstract": [ "We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.", "In the field of connectomics, neuroscientists seek to identify cortical connectivity comprehensively. Neuronal boundary detection from the Electron Microscopy (EM) images is often done to assist the automatic reconstruction of neuronal circuit. But the segmentation of EM images is a challenging problem, as it requires the detector to be able to detect both filament-like thin and blob-like thick membrane, while suppressing the ambiguous intracellular structure. In this paper, we propose multi-stage multi-recursive-input fully convolutional networks to address this problem. 
The multiple recursive inputs for one stage, i.e., the multiple side outputs with different receptive field sizes learned from the lower stage, provide multi-scale contextual boundary information for the consecutive learning. This design is biologically-plausible, as it likes a human visual system to compare different possible segmentation solutions to address the ambiguous boundary issue. Our multi-stage networks are trained end-to-end. It achieves promising results on two public available EM segmentation datasets, the mouse piriform cortex dataset and the ISBI 2012 EM dataset.", "We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for multi-scale contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments for low-level applications on BSDS, PASCAL Context, PASCAL Segmentation, and NYUD to evaluate boundary detection performance, showing that COB provides state-of-the-art contours and region hierarchies in all datasets. We also evaluate COB on high-level tasks when coupled with multiple pipelines for object proposals, semantic contours, semantic segmentation, and object detection on MS-COCO, SBD, and PASCAL; showing that COB also improves the results for all tasks.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. 
Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Object skeleton is a useful cue for object detection, complementary to the object contour, as it provides a structural representation to describe the relationship among object parts. While object skeleton extraction in natural images is a very challenging problem, as it requires the extractor to be able to capture both local and global image context to determine the intrinsic scale of each skeleton pixel. Existing methods rely on per-pixel based multi-scale feature computation, which results in difficult modeling and high time consumption. In this paper, we present a fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the sequential stages in the network and the skeleton scales they can capture, we introduce a scale-associated side output to each stage. 
We impose supervision to different stages by guiding the scale-associated side outputs toward groundtruth skeletons of different scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to localize skeleton pixels with multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors.", "We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. 
Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "Supervised contour detection methods usually require many labeled training images to obtain satisfactory performance. However, a large set of annotated data might be unavailable or extremely labor intensive. In this paper, we investigate the usage of semi-supervised learning (SSL) to obtain competitive detection accuracy with very limited training data (three labeled images). Specifically, we propose a semi-supervised structured ensemble learning approach for contour detection built on structured random forests (SRF). 
To allow SRF to be applicable to unlabeled data, we present an effective sparse representation approach to capture inherent structure in image patches by finding a compact and discriminative low-dimensional subspace representation in an unsupervised manner, enabling the incorporation of abundant unlabeled patches with their estimated structured labels to help SRF perform better node splitting. We re-examine the role of sparsity and propose a novel and fast sparse coding algorithm to boost the overall learning efficiency. To the best of our knowledge, this is the first attempt to apply SSL for contour detection. Extensive experiments on the BSDS500 segmentation dataset and the NYU Depth dataset demonstrate the superiority of the proposed method.", "" ] }
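Several of the methods above rest on a spectral relaxation of graph partitioning. As a minimal sketch (my own toy construction, not any cited system's code): build a Gaussian affinity graph over points, form the symmetric normalized Laplacian, and split by the sign of the second eigenvector, which is the relaxation underlying normalized cuts:

```python
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    """Split a point set into two groups via the second eigenvector
    of the symmetric normalized graph Laplacian (the Ncut relaxation)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    d = W.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(points)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]                      # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                       # its sign pattern gives the 2-way cut
```

On two well-separated point clusters, the sign of the second eigenvector recovers the cluster membership; real segmenters replace the dense eigensolver with sparse or multiscale machinery.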
1702.05650
2950409738
In this paper, we propose a simple but effective method for fast image segmentation. We re-examine the locality-preserving character of spectral clustering by constructing a graph over image regions with both global and local connections. Our novel approach to building graph connections relies on two key observations: 1) local region pairs that co-occur frequently have a high probability of residing on a common object; 2) spatially distant regions in a common object often exhibit similar visual saliency, which implies their neighborship in a manifold. We present a novel energy function to efficiently conduct graph partitioning. Based on multiple high-quality partitions, we show that the generated eigenvector-histogram-based representation can automatically drive effective unary potentials for a hierarchical random field model to produce multi-class segmentation. Extensive experiments on the BSDS500 benchmark and the large-scale PASCAL VOC and COCO datasets demonstrate the competitive segmentation accuracy and significantly improved efficiency of our proposed method compared with other state-of-the-art methods.
Designing features to build the affinity between pixels or regions is important. Several studies have explored different cues, such as sophisticated combinations of mixed image features @cite_14 , texture information @cite_38 , or saliency @cite_21 . Beyond these low-level image features, we argue that high-level cues are equally important and sometimes even more effective. For example, co-occurrence statistics learned from training data have been used to capture semantic object context and aid inference in, for example, a conditional random field (CRF) @cite_61 . Unlike this line of research, our approach models region-wise co-occurrence probability based on pointwise mutual information @cite_44 to build the local connections of our proposed graph, learned from the image itself.
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_14", "@cite_21", "@cite_44" ], "mid": [ "", "1970018890", "2110158442", "2187162064", "2066873261" ], "abstract": [ "", "The problem of segmenting a foreground object out from its complex background is of great interest in image processing and computer vision. Many interactive segmentation algorithms such as graph cut have been successfully developed. In this paper, we present four technical components to improve graph cut based algorithms, which are combining both color and texture information for graph cut, including structure tensors in the graph cut model, incorporating active contours into the segmentation process, and using a ''softbrush'' tool to impose soft constraints to refine problematic boundaries. The integration of these components provides an interactive segmentation method that overcomes the difficulties of previous segmentation algorithms in handling images containing textures or low contrast boundaries and producing a smooth and accurate segmentation boundary. Experiments on various images from the Brodatz, Berkeley and MSRC data sets are conducted and the experimental results demonstrate the high effectiveness of the proposed method to a wide range of images.", "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. 
The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.", "Existing transition region-based image thresholding methods are unstable, and fail to achieve satisfactory segmentation accuracy on images with overlapping gray levels between object and background. This is because?they only take the gray level mean of pixels in transition regions as the segmentation threshold of the whole image. To alleviate this issue, we proposed a robust hybrid single-object image segmentation method by exploiting salient transition region. Specifically, the proposed method first uses local complexity and local variance to identify transition regions of an image. Secondly, the transition region with the largest pixel number is chosen as salient transition region. Thirdly, a gray level interval is determined by using transition regions and image information, and one gray level of the interval is determined as the segmentation threshold by using the salient transition region. Finally, the image thresholding result is refined as final segmentation result by using the salient transition region to remove fake object regions. The proposed method has been extensively evaluated by experiments on 170 single-object real world images. Experimental results show that the proposed method achieves better segmentation accuracy and robustness than several types of image segmentation techniques, and enjoys its nature of simplicity and efficiency. 
We propose a robust salient transition region-based image segmentation method.It adequately uses transition region for image segmentation from a new viewpoint.It alleviates the limitations of transition region-based image thresholding.It significantly improves accuracy and robustness of image segmentation.It keeps the nature of image thresholding, easy operation and high efficiency.", "Disclosed is a process for treating aluminum wherein the surface is contacted with a solution containing phosphate, a tannin, titanium and fluoride prior to inking or lacquering. The process produces a coating exhibiting adhesion, color and corrosion resistance comparable to that obtained via conventional chromium-based processes without creating the pollution problems of chromium disposal." ] }
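The pointwise-mutual-information cue mentioned above can be illustrated with a toy estimator over label co-occurrences. The data layout (a list of per-image label sets) and the function name are my own illustrative choices, not the cited paper's interface:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(images):
    """PMI between region labels from their co-occurrence across images.

    `images` is a list of sets, each holding the labels present in one
    image. PMI(a, b) = log( p(a, b) / (p(a) * p(b)) ), estimated by
    counting; positive scores mean a and b co-occur more than chance.
    """
    n = len(images)
    single = Counter()
    pair = Counter()
    for labels in images:
        for a in labels:
            single[a] += 1
        for a, b in combinations(sorted(labels), 2):
            pair[(a, b)] += 1
    # p(a,b)/(p(a)p(b)) = (c/n) / ((s_a/n)(s_b/n)) = c*n / (s_a*s_b)
    return {(a, b): math.log(c * n / (single[a] * single[b]))
            for (a, b), c in pair.items()}
```

Labels that frequently appear together score above zero, which is exactly the signal used to add local graph connections between region pairs likely to lie on a common object.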
1702.05650
2950409738
In this paper, we propose a simple but effective method for fast image segmentation. We re-examine the locality-preserving character of spectral clustering by constructing a graph over image regions with both global and local connections. Our novel approach to building graph connections relies on two key observations: 1) local region pairs that co-occur frequently have a high probability of residing on a common object; 2) spatially distant regions in a common object often exhibit similar visual saliency, which implies their neighborship in a manifold. We present a novel energy function to efficiently conduct graph partitioning. Based on multiple high-quality partitions, we show that the generated eigenvector-histogram-based representation can automatically drive effective unary potentials for a hierarchical random field model to produce multi-class segmentation. Extensive experiments on the BSDS500 benchmark and the large-scale PASCAL VOC and COCO datasets demonstrate the competitive segmentation accuracy and significantly improved efficiency of our proposed method compared with other state-of-the-art methods.
Laplacian eigenmaps @cite_41 computes a low-dimensional embedding that preserves the pairwise affinity of data points on the manifold. Locally linear embedding (LLE) @cite_1 , alternatively, preserves the linear structure among locally neighboring points. The locality-preserving character of these two methods implicitly encourages the clustering of data. In contrast, Isomap @cite_26 , which preserves global geodesic distances, does not have this clustering property. Our method offers a distinct, manifold-learning perspective on enhancing spectral clustering for image segmentation.
{ "cite_N": [ "@cite_41", "@cite_26", "@cite_1" ], "mid": [ "2156718197", "2001141328", "2053186076" ], "abstract": [ "Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.", "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.", "Many areas of science depend on exploratory data analysis and visualization. 
The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in" ] }
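The Laplacian-eigenmaps construction discussed above is compact enough to sketch directly. This is my own minimal version with a dense eigensolver and illustrative parameter choices, not the original authors' implementation:

```python
import numpy as np

def laplacian_eigenmap(X, k=3, dim=1, sigma=1.0):
    """Laplacian-eigenmaps embedding: build a k-NN heat-kernel graph,
    then take the bottom non-trivial eigenvectors of L = D - W."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]        # k nearest neighbours (skip self)
        W[i, nn] = np.exp(-d2[i, nn] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                      # symmetrise the k-NN graph
    L = np.diag(W.sum(1)) - W                   # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]                   # drop the constant eigenvector
```

For points sampled along a line, the 1-D embedding is (up to sign) monotone along the line, which is the locality-preserving behaviour that makes the embedding friendly to clustering.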
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
problems on cycles. As we noted before, two-dimensional grids can be seen as a generalisation of the widely studied setting of cycles; indeed, problems were first studied on cycles in the distributed setting. Cole and Vishkin @cite_7 showed that cycles can be 3-coloured in time @math , and Linial @cite_0 showed that this is asymptotically optimal. This implies, via simple reductions, that many classical problems, such as maximal independent set and maximal matching, also have a complexity of @math on cycles.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2054910423", "2067661972" ], "abstract": [ "This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. • In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math .", "The following problem is considered: given a linked list of length n , compute the distance from each element of the linked list to the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O (log n ) time parallel algorithm using n processors. We present new deterministic parallel algorithms for the problem. Our strongest results are (1) O (log n log* n ) time using n (log n log* n ) processors (this algorithm achieves optimal speed-up); (2) O (log n ) time using n log ( k ) n log n processors, for any fixed positive integer k . The algorithms apply a novel “random-like” deterministic technique. This technique provides for a fast and efficient breaking of an apparently symmetric situation in parallel and distributed computation." ] }
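The Cole–Vishkin colour-reduction step behind the @math upper bound can be sketched as a centralized simulation (my own illustrative code, not the original algorithm's presentation): each node of a directed cycle compares its colour with its successor's, finds the lowest bit position where they differ, and adopts that position together with its own bit value there.

```python
def cv_step(colors):
    """One Cole-Vishkin iteration on a directed n-cycle.

    If the input colouring is proper, the output is proper too (two
    neighbours picking the same position must disagree on that bit),
    and a palette of 2^b colours shrinks to at most 2b colours, so
    iterating reaches O(1) colours in O(log* n) rounds.
    """
    n = len(colors)
    out = []
    for v in range(n):
        c, c_next = colors[v], colors[(v + 1) % n]
        diff = c ^ c_next                     # nonzero because the colouring is proper
        i = (diff & -diff).bit_length() - 1   # lowest bit where the colours differ
        out.append(2 * i + ((c >> i) & 1))    # encode (position, own bit value)
    return out
```

Starting from unique identifiers as a trivially proper colouring, three iterations already compress 64 colours on a 64-cycle to at most 6, after which a few constant-time steps bring the palette down to 3.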
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
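The normal form described in the abstract above, a finite function composed with a maximal independent set of the @math th power of the grid, can be illustrated with a small centralised sketch. This is a hypothetical stand-in (the greedy MIS and all names are ours, not the paper's distributed algorithm): we pick grid points greedily so that no two chosen points are within graph distance k on the torus, which is exactly an MIS of the k-th power.

```python
def torus_dist(a, b, n, m):
    """Graph distance between points a, b in the n x m toroidal grid
    (4-regular grid with wrap-around)."""
    dx = min(abs(a[0] - b[0]), n - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), m - abs(a[1] - b[1]))
    return dx + dy

def greedy_mis_power(n, m, k):
    """Maximal independent set of the k-th power of the toroidal grid:
    chosen points are pairwise further than k apart, and every point
    is within distance k of some chosen point."""
    chosen = []
    for v in ((x, y) for x in range(n) for y in range(m)):
        if all(torus_dist(v, c, n, m) > k for c in chosen):
            chosen.append(v)
    return chosen
```

In the normal form, each node would then compute its output as a finite function of the nearby MIS points; the greedy scan here merely plays the role of the distributed MIS subroutine.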
LCL problems on graphs of bounded maximum degree. Naor and Stockmeyer @cite_30 showed that there exists a non-trivial problem that can be solved in constant time: weak 2-colouring on graphs of odd degree. Many problems are known to either have complexity @math @cite_11 @cite_34 @cite_40 @cite_19 or be global on graphs of bounded maximum degree. Until recently, no problems of an intermediate complexity were known. While @cite_15 gave a lower bound of @math for, among others, maximal independent set, this proof does not give an infinite family of graphs with a fixed maximum degree @math . @cite_28 showed that sinkless orientation and @math -colouring have randomised complexity @math , and @cite_4 proved that this implies a deterministic lower bound of @math . These lower bounds provide the first examples of problems with provably intermediate time complexity. Ghaffari and Su @cite_24 proved a matching upper bound for sinkless orientation; no tight bounds are known for @math -colouring, but there is a polylogarithmic upper bound due to Panconesi and Srinivasan @cite_5 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_28", "@cite_24", "@cite_19", "@cite_40", "@cite_5", "@cite_15", "@cite_34", "@cite_11" ], "mid": [ "2017345786", "2279830512", "", "2534944111", "", "", "2056050680", "2951615666", "", "2006040238" ], "abstract": [ "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).", "Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. 
Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.", "", "We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: • We present a poly log n round deterministic algorithm for (2Δ−1)·(1+o(1))-edge-coloring, where Δ denotes the maximum degree. Modulo the 1 + o(1) factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of (2Δ − 1) · poly log Δ-edge-coloring in poly log n rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. 
• We show that sinkless orientation---i.e., orienting edges such that each node has at least one out-going edge---on Δ-regular graphs can be solved in O(logΔ log n) rounds randomized and in O(logΔ n) rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for Δ-coloring Δ-regular trees. • We present a randomized O(log⁴ n) round algorithm for orienting a-arboricity graphs with maximum out-degree a(1 + e). This can also be turned into a decomposition into a(1 + e) forests when a = Ω(log n) and into a(1 + e) pseudo-forests when a = o(log n). Obtaining an efficient distributed decomposition into less than 2a forests was stated as the 10th open problem in the book by Barenboim and Elkin.", "", "", "Given a connected graph G=(V, E) with |V|=n and maximum degree Δ such that G is neither a complete graph nor an odd cycle, Brooks' theorem states that G can be colored with Δ colors. We generalize this as follows: let G-v be Δ-colored; then, v can be colored by considering the vertices in an O(logΔ n) radius around v and by recoloring an O(logΔ n) length "augmenting path" inside it. Using this, we show that Δ-coloring G is reducible in O(log³ n logΔ) time to (Δ+1)-vertex coloring G in a distributed model of computation. This leads to fast distributed algorithms and a linear-processor NC algorithm for Δ-coloring.", "We show that any randomised Monte Carlo distributed algorithm for the Lovász local lemma requires @math communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of @math , where @math is the maximum degree of the dependency graph.
By prior work, there are distributed algorithms for the Lovász local lemma with a running time of @math rounds in bounded-degree graphs, and the best lower bound before our work was @math rounds [ 2014].", "", "We give simple, deterministic, distributed algorithms for computing maximal matchings, maximal independent sets and colourings. We show that edge colourings with at most 2Δ-1 colours, and maximal matchings can be computed within O(log* n + Δ) deterministic rounds, where Δ is the maximum degree of the network. We also show how to find maximal independent sets and (Δ + 1)-vertex colourings within O(log* n + Δ²) deterministic rounds. All hidden constants are very small and the algorithms are very simple." ] }
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
LCL problems. LCL problems were formally introduced by Naor and Stockmeyer @cite_30 . They showed that if there exists a constant-time algorithm for solving an LCL problem @math , then there exists an order-invariant constant-time algorithm for @math that only uses the relative order of the unique identifiers given to the nodes. Their argument works for any time @math : a time- @math distributed algorithm implies a constant-time order-invariant algorithm; hence there are no LCL problems with complexities strictly between @math and @math .
{ "cite_N": [ "@cite_30" ], "mid": [ "2017345786" ], "abstract": [ "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs)." ] }
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
Recently, @cite_4 showed that there are further gaps in the time complexities of LCL problems. They gave a speed-up lemma for simulating any deterministic @math -time algorithm in time @math by computing new small and locally unique identifiers for the input graph. This implies that there are no LCL problems with deterministic complexity between @math and @math . They also showed that the deterministic complexity of an LCL on instances of size @math is at most its randomised complexity on instances of size @math . This implies a similar gap for randomised complexities, between @math and @math .
{ "cite_N": [ "@cite_4" ], "mid": [ "2279830512" ], "abstract": [ "Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast @math -coloring of trees requires random bits: Building on the recent lower bounds of , we prove that the randomized complexity of @math -coloring a tree with maximum degree @math is @math , whereas its deterministic complexity is @math for any @math . This also establishes a large separation between the deterministic complexity of @math -coloring and @math -coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in @math rounds can be transformed to run in @math rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires @math time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size @math is at least its deterministic complexity on instances of size @math . This shows that a deterministic @math lower bound for any problem implies a randomized @math lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model." ] }
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
The notion of automatic synthesis of algorithms has been around for a long time; for example, already in the 1950s Church proposed the idea of synthesising circuits @cite_22 @cite_13 . Since then, the synthesis of distributed and parallel protocols has become a well-established research area in the formal methods community @cite_20 @cite_10 @cite_23 @cite_35 @cite_36 @cite_43 @cite_14 . However, synthesis has received considerably less attention in the distributed computing community, even though synthesis techniques have been used to discover, e.g., novel synchronisation algorithms @cite_29 @cite_18 @cite_14 and local graph algorithms @cite_31 @cite_6 .
{ "cite_N": [ "@cite_13", "@cite_35", "@cite_18", "@cite_14", "@cite_22", "@cite_36", "@cite_29", "@cite_6", "@cite_43", "@cite_23", "@cite_31", "@cite_10", "@cite_20" ], "mid": [ "", "2070369873", "1911929785", "2479556144", "", "2098035530", "2034499376", "2964154443", "1610578937", "1783469492", "2952497148", "2040127143", "1501731334" ], "abstract": [ "", "Methods for mechanically synthesizing concurrent programs for temporal logic specifications have been proposed by Emerson and Clarke and by Manna and Wolper. An important advantage of these synthesis methods is that they obviate the need to manually compose a program and manually construct a proof of its correctness. A serious drawback of these methods in practice, however, is that they produce concurrent programs for models of computation that are often unrealistic, involving highly centralized system architecture (Manna and Wolper), processes with global information about the system state (Emerson and Clarke), or reactive modules that can read all of their inputs in one atomic step (Anuchitanukul and Manna, and Pnueli and Rosner). Even simple synchronization protocols based on atomic read write primitives such as Peterson's solution to the mutual exclusion problem have remained outside the scope of practical mechanical synthesis methods. In this paper, we show how to mechanically synthesize in more realistic computational models solutions to synchronization problems. We illustrate the method by synthesizing Peterson's solution to the mutual exclusion problem.", "Consider a complete communication network on n nodes. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". Furthermore, the solution needs to be self-stabilising (reaching correct operation from any initial state) and tolerate f Byzantine failures (nodes that send arbitrary misinformation). 
Prior algorithms either require a source of random bits or a large number of states per node. In this work, we give fast state-optimal deterministic algorithms for the first non-trivial case f = 1 . To obtain these algorithms, we develop and evaluate two different techniques for algorithm synthesis. Both are based on casting the synthesis problem as a propositional satisfiability (SAT) problem; a direct encoding is efficient for synthesising time-optimal algorithms, while an approach based on counter-example guided abstraction refinement discovers non-optimal algorithms quickly. We develop computational techniques to find algorithms for synchronous 2-counting. Automated synthesis yields state-optimal self-stabilising fault-tolerant algorithms. We give a thorough experimental comparison of our two SAT-based synthesis techniques. A direct SAT encoding is more efficient for finding time-optimal algorithms. An iterative CEGAR-based approach finds non-optimal algorithms quickly.", "Fault-tolerant distributed algorithms play an increasingly important role in many applications, and their correct and efficient implementation is notoriously difficult. We present an automatic approach to synthesise provably correct fault-tolerant distributed algorithms from formal specifications in linear-time temporal logic. The supported system model covers synchronous reactive systems with finite local state, while the failure model includes strong self-stabilisation as well as Byzantine failures. The synthesis approach for a fixed-size network of processes is complete for realisable specifications, and can optimise the solution for small implementations and short stabilisation time. To solve the bounded synthesis problem with Byzantine failures more efficiently, we design an incremental, CEGIS-like loop.
Finally, we define two classes of problems for which our synthesis algorithm obtains solutions that are not only correct in fixed-size networks, but in networks of arbitrary size.", "", "In system synthesis, we transform a specification into a system that is guaranteed to satisfy the specification. When the system is distributed, the goal is to construct the system's underlying processes. Results on multi-player games imply that the synthesis problem for linear specifications is undecidable for general architectures, and is nonelementary decidable for hierarchical architectures, where the processes are linearly ordered and information among them flows in one direction. In this paper, we present a significant extension of this result. We handle both linear and branching specifications, and we show that a sufficient condition for decidability of the synthesis problem is a linear or cyclic order among the processes, in which information flows in either one or both directions. We also allow the processes to have internal hidden variables, and we consider communications with and without delay. Many practical applications fall into this class.", "The distributed (Δ + 1)-coloring problem is one of the most fundamental and well-studied problems in Distributed Algorithms. Starting with the work of Cole and Vishkin in '86, there was a long line of gradually improving algorithms published. The current state-of-the-art running time is O(Δ log Δ + log* n), due to Kuhn and Wattenhofer, PODC'06. Linial (FOCS'87) has proved a lower bound of 1/2 log* n for the problem, and Szegedy and Vishwanathan (STOC'93) provided a heuristic argument that shows that algorithms from a wide family of locally iterative algorithms are unlikely to achieve running time smaller than Θ(Δ log Δ). We present a deterministic (Δ + 1)-coloring distributed algorithm with running time O(Δ) + 1/2 log* n.
We also present a tradeoff between the running time and the number of colors, and devise an O(Δ • t)-coloring algorithm with running time O(Δ t + log* n), for any parameter t, 1", "Let @math be a @math -regular triangle-free graph with @math edges. We present an algorithm which finds a cut in @math with at least @math edges in expectation, improving upon Shearer's classic result. In particular, this implies that any @math -regular triangle-free graph has a cut of at least this size, and thus, we obtain a new lower bound for the maximum number of edges in a bipartite subgraph of @math . Our algorithm is simpler than Shearer's classic algorithm and it can be interpreted as a very efficient randomised distributed (local) algorithm : each node needs to produce only one random bit, and the algorithm runs in one round. The randomised algorithm itself was discovered using computational techniques . We show that for any fixed @math , there exists a weighted neighbourhood graph @math such that there is a one-to-one correspondence between heavy cuts of @math and randomised local algorithms that find large cuts in any @math -regular input graph. This turns out to be a useful tool for analysing the existence of cuts in @math -regular graphs: we can compute the optimal cut of @math to attain a lower bound on the maximum cut size of any @math -regular triangle-free graph.", "We provide a uniform solution to the problem of synthesizing a finite-state distributed system. An instance of the synthesis problem consists of a system architecture and a temporal specification. The architecture is given as a directed graph, where the nodes represent processes (including the environment as a special process) that communicate synchronously through shared variables attached to the edges. The same variable may occur on multiple outgoing edges of a single node, allowing for the broadcast of data. 
A solution to the synthesis problem is a collection of finite-state programs for the processes in the architecture, such that the joint behavior of the programs satisfies the specification in an unrestricted environment. We define information forks, a comprehensive criterion that characterizes all architectures with an undecidable synthesis problem. The criterion is effective: for a given architecture with n processes and v variables, it can be determined in O(n² · v) time whether the synthesis problem is decidable. We give a uniform synthesis algorithm for all decidable cases. Our algorithm works for all ω-regular tree specification languages, including the μ-calculus. The undecidability proof, on the other hand, uses only LTL or, alternatively, CTL as the specification language. Our results therefore hold for the entire range of specification languages from LTL/CTL to the μ-calculus.", "The problem of synthesizing a finite-state distributed reactive system is considered. Given a distributed architecture A, which comprises several processors P_1, ..., P_k and their interconnection scheme, and a propositional temporal specification φ, a solution to the synthesis problem consists of finite-state programs Π_1, ..., Π_k (one for each processor), whose joint (synchronous) behavior maintains φ against all possible inputs from the environment. Such a solution is referred to as the realization of the specification φ over the architecture A. Specifically, it is shown that the problem of realizing a given propositional specification over a given architecture is undecidable, and it is nonelementarily decidable for the very restricted class of hierarchical architectures. An extensive characterization of architecture classes for which the realizability problem is elementarily decidable and of classes for which it is undecidable is given.
", "We prove exact bounds on the time complexity of distributed graph colouring. If we are given a directed path that is properly coloured with @math colours, by prior work it is known that we can find a proper 3-colouring in @math communication rounds. We close the gap between upper and lower bounds: we show that for infinitely many @math the time complexity is precisely @math communication rounds.", "In this paper, we apply Propositional Temporal Logic (PTL) to the specification and synthesis of the synchronization part of communicating processes. To specify a process, we give a PTL formula that describes its sequence of communications. The synthesis is done by constructing a model of the given specifications using a tableau-like satisfiability algorithm for PTL. This model can then be interpreted as a program.", "We propose a method of constructing concurrent programs in which the synchronization skeleton of the program is automatically synthesized from a high-level (branching time) Temporal Logic specification. The synchronization skeleton is an abstraction of the actual program where detail irrelevant to synchronization is suppressed. For example, in the synchronization skeleton for a solution to the critical section problem each process's critical section may be viewed as a single node since the internal structure of the critical section is unimportant. Most solutions to synchronization problems in the literature are in fact given as synchronization skeletons. Because synchronization skeletons are in general finite state, the propositional version of Temporal Logic can be used to specify their properties." ] }
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
The synthesis of optimal distributed algorithms in general is often computationally hard and even undecidable. In the context of the LOCAL model and LCL problems, Naor and Stockmeyer @cite_30 showed that simply deciding whether a given problem can be solved in constant time is undecidable; hence we cannot expect to completely automate the synthesis of asymptotically optimal distributed algorithms for LCL problems in general graphs. This result holds even if we study non-toroidal two-dimensional grids, but it does not hold in toroidal grids. In essence, in toroidal grids only trivial problems are solvable in constant time, and as we will see, the interesting case is the time complexity of @math .
{ "cite_N": [ "@cite_30" ], "mid": [ "2017345786" ], "abstract": [ "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs)." ] }
1702.05456
2950527429
LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well-understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of @math , @math , or @math , and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: @math , @math , and @math . However, given an LCL problem it is undecidable whether its complexity is @math or @math in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is @math , we can completely automate the design of optimal algorithms. For any problem we can find an algorithm that is of a normal form @math , where @math is a finite function, @math is an algorithm for finding a maximal independent set in @math th power of the grid, and @math is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.
While grids have not been studied from a distributed computing perspective, grid-like models with local dynamics have appeared in many different contexts: cellular automata @cite_41 @cite_44 @cite_12 have been studied both as a primitive computational model and as a model for various complex systems and emergent phenomena, e.g. in ecology, sociology and physics @cite_21 @cite_9 @cite_26 . Various tiling models @cite_45 have connections to computability questions, such as the abstract tile assembly model @cite_1 and its variants @cite_32 @cite_8 @cite_27 @cite_16 for DNA self-assembly. However, the prior work of this flavour is usually interested in understanding the dynamics of a specific fixed process, or in what kind of global behaviours can arise from a fixed number of local states---in particular, whether the system is computationally universal. Our distributed complexity perspective on grid-like systems seems mostly novel, and we expect it to have implications in other fields. Applying an existing result from distributed computing to tiling models has been previously demonstrated by Sterling @cite_2 , who makes use of a weak-colouring lower bound by Naor and Stockmeyer @cite_30 .
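The grid-like local dynamics discussed above can be made concrete with a minimal cellular-automaton sketch. The following is our own illustration (not from the cited works): one synchronous step of Conway's Game of Life on an n x m toroidal grid, where every cell updates from its 8-neighbourhood with wrap-around, exactly the kind of fixed local process studied in this literature.

```python
def life_step(grid):
    """One synchronous Game of Life step on a toroidal 2D grid
    (list of lists of 0/1); indices wrap around in both dimensions."""
    n, m = len(grid), len(grid[0])

    def live_neighbours(x, y):
        return sum(grid[(x + dx) % n][(y + dy) % m]
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))

    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return [[1 if (live_neighbours(x, y) == 3
                   or (grid[x][y] and live_neighbours(x, y) == 2)) else 0
             for y in range(m)] for x in range(n)]
```

A classic sanity check is the "blinker": a horizontal row of three live cells turns vertical after one step and returns to its original configuration after two.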
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_8", "@cite_41", "@cite_9", "@cite_21", "@cite_1", "@cite_32", "@cite_44", "@cite_27", "@cite_45", "@cite_2", "@cite_16", "@cite_12" ], "mid": [ "2017345786", "", "2044709436", "", "", "2022060148", "", "", "", "2079425200", "", "1488471656", "", "" ], "abstract": [ "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).", "", "Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell's membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly. How could such a process be described as \"algorithmic?\" The key word in the first sentence is automatically. Algorithms automate a series of simple computational tasks. 
Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth.", "", "", "This article surveys some theoretical aspects of cellular automata CA research. In particular, we discuss classical and new results on reversibility, conservation laws, limit sets, decidability questions, universality and topological dynamics of CA. The selection of topics is by no means comprehensive and reflects the research interests of the author. The main goal is to provide a tutorial of CA theory to researchers in other branches of natural computing, to give a compact collection of known results with references to their proofs, and to suggest some open problems.", "", "", "", "We first give an introduction to the field of tile-based self-assembly, focusing primarily on theoretical models and their algorithmic nature. We start with a description of Winfree’s abstract Tile Assembly Model (aTAM) and survey a series of results in that model, discussing topics such as the shapes which can be built and the computations which can be performed, among many others. Next, we introduce the more experimentally realistic kinetic Tile Assembly Model (kTAM) and provide an overview of kTAM results, focusing especially on the kTAM’s ability to model errors and several results targeted at preventing and correcting errors. We then describe the 2-Handed Assembly Model (2HAM), which allows entire assemblies to combine with each other in pairs (as opposed to the restriction of single-tile addition in the aTAM and kTAM) and doesn’t require a specified seed. We give overviews of a series of 2HAM results, which tend to make use of geometric techniques not applicable in the aTAM. 
Finally, we discuss and define a wide array of more recently developed models and discuss their various tradeoffs in comparison to the previous models and to each other.", "", "Majumder, Reif and Sahu presented in [7] a model of reversible, error-permitting tile self-assembly, and showed that restricted classes of tile assembly systems achieved equilibrium in (expected) polynomial time. One open question they asked was how the model would change if it permitted multiple nucleation, i.e.,independent groups of tiles growing before attaching to the original seed assembly. This paper provides a partial answer, by proving that no tile assembly model can use multiple nucleation to achieve speedup from polynomial time to constant time without sacrificing computational power: if a tile assembly system @math uses multiple nucleation to tile a surface in constant time (independent of the size of the surface), then @math is unable to solve computational problems that have low complexity in the (single-seeded) Winfree-Rothemund Tile Assembly Model. The proof technique defines a new model of distributed computing that simulates tile assembly, so a tile assembly model can be described as a distributed computing model.", "", "" ] }
1702.05003
2953215316
The theory of sparse stochastic processes offers a broad class of statistical models to study signals. In this framework, signals are represented as realizations of random processes that are solutions of linear stochastic differential equations driven by white Lévy noises. Among these processes, generalized Poisson processes based on compound-Poisson noises admit an interpretation as random L-splines with random knots and weights. We demonstrate that every generalized Lévy process, from Gaussian to sparse, can be understood as the limit in law of a sequence of generalized Poisson processes. This enables a new conceptual understanding of sparse processes and suggests simple algorithms for the numerical generation of such objects.
Random processes and random fields are well-known tools for modeling the uncertainty and statistics of signals @cite_3 . Gaussian processes are by far the most studied stochastic models because of their fundamental properties (stability, finite variance, central-limit theorem) and their relative ease of use. They are the principal actors within the "classical" paradigm in statistical signal processing @cite_23 . Many fractal-type signals are modeled as self-similar Gaussian processes @cite_32 @cite_9 @cite_46 @cite_7 . However, many real-world signals are empirically observed to be inherently sparse, a property that is incompatible with Gaussianity @cite_11 @cite_24 @cite_39 . In order to overcome the limitations of the Gaussian model, several other stochastic models have been proposed for the study of sparse signals. They include infinite-variance @cite_46 @cite_35 and piecewise-constant models @cite_11 @cite_28 .
{ "cite_N": [ "@cite_35", "@cite_7", "@cite_28", "@cite_9", "@cite_32", "@cite_3", "@cite_39", "@cite_24", "@cite_23", "@cite_46", "@cite_11" ], "mid": [ "", "1997019093", "2099205567", "2078206416", "2031753087", "", "2158162781", "1019822895", "606515514", "2021537669", "1753871439" ], "abstract": [ "", "In a companion paper (see Self-Similarity: Part I-Splines and Operators), we characterized the class of scale-invariant convolution operators: the generalized fractional derivatives of order gamma. We used these operators to specify regularization functionals for a series of Tikhonov-like least-squares data fitting problems and proved that the general solution is a fractional spline of twice the order. We investigated the deterministic properties of these smoothing splines and proposed a fast Fourier transform (FFT)-based implementation. Here, we present an alternative stochastic formulation to further justify these fractional spline estimators. As suggested by the title, the relevant processes are those that are statistically self-similar; that is, fractional Brownian motion (fBm) and its higher order extensions. To overcome the technical difficulties due to the nonstationary character of fBm, we adopt a distributional formulation due to Gel'fand. This allows us to rigorously specify an innovation model for these fractal processes, which rests on the property that they can be whitened by suitable fractional differentiation. Using the characteristic form of the fBm, we then derive the conditional probability density function (PDF) p(BH(t)|Y), where Y= BH(k)+n[k] kisinZ are the noisy samples of the fBm BH(t) with Hurst exponent H. We find that the conditional mean is a fractional spline of degree 2H, which proves that this class of functions is indeed optimal for the estimation of fractal-like processes. 
The result also yields the optimal [minimum mean-square error (MMSE)] parameters for the smoothing spline estimator, as well as the connection with kriging and Wiener filtering", "We introduce an extended family of continuous-domain stochastic models for sparse, piecewise-smooth signals. These are specified as solutions of stochastic differential equations, or, equivalently, in terms of a suitable innovation model; the latter is analogous conceptually to the classical interpretation of a Gaussian stationary process as filtered white noise. The two specific features of our approach are 1) signal generation is driven by a random stream of Dirac impulses (Poisson noise) instead of Gaussian white noise, and 2) the class of admissible whitening operators is considerably larger than what is allowed in the conventional theory of stationary processes. We provide a complete characterization of these finite-rate-of-innovation signals within Gelfand's framework of generalized stochastic processes. We then focus on the class of scale-invariant whitening operators which correspond to unstable systems. We show that these can be solved by introducing proper boundary conditions, which leads to the specification of random, spline-type signals that are piecewise-smooth. These processes are the Poisson counterpart of fractional Brownian motion; they are nonstationary and have the same 1 ω-type spectral signature. We prove that the generalized Poisson processes have a sparse representation in a wavelet-like basis subject to some mild matching condition. 
We also present a limit example of sparse process that yields a MAP signal estimator that is equivalent to the popular TV-denoising algorithm.", "\"...a blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) ...and the illustrations include many superb examples of computer graphics that are works of art in their own right.\" Nature", "", "", "Statistical analysis of images reveals two interesting properties: (i) invariance of image statistics to scaling of images, and (ii) non-Gaussian behavior of image statistics, i.e. high kurtosis, heavy tails, and sharp central cusps. In this paper we review some recent results in statistical modeling of natural images that attempt to explain these patterns. Two categories of results are considered: (i) studies of probability models of images or image decompositions (such as Fourier or wavelet decompositions), and (ii) discoveries of underlying image manifolds while restricting to natural images. Applications of these models in areas such as texture analysis, image classification, compression, and denoising are also considered.", "Preface Notation What Is Pattern Theory? 
The Manifesto of Pattern Theory The Basic Types of Patterns Bayesian Probability Theory: Pattern Analysis and Pattern Synthesis English Text and Markov Chains Basics I: Entropy and Information Measuring the n-gram Approximation with Entropy Markov Chains and the n-gram Models Words Word Boundaries via Dynamic Programming and Maximum Likelihood Machine Translation via Bayes' Theorem Exercises Music and Piece wise Gaussian Models Basics III: Gaussian Distributions Basics IV: Fourier Analysis Gaussian Models for Single Musical Notes Discontinuities in One-Dimensional Signals The Geometric Model for Notes via Poisson Processes Related Models Exercises Character Recognition and Syntactic Grouping Finding Salient Contours in Images Stochastic Models of Contours The Medial Axis for Planar Shapes Gestalt Laws and Grouping Principles Grammatical Formalisms Exercises Contents Image Texture, Segmentation and Gibbs Models Basics IX: Gibbs Fields (u + v)-Models for Image Segmentation Sampling Gibbs Fields Deterministic Algorithms to Approximate the Mode of a Gibbs Field Texture Models Synthesizing Texture via Exponential Models Texture Segmentation Exercises Faces and Flexible Templates Modeling Lighting Variations Modeling Geometric Variations by Elasticity Basics XI: Manifolds, Lie Groups, and Lie Algebras Modeling Geometric Variations by Metrics on Diff Comparing Elastic and Riemannian Energies Empirical Data on Deformations of Faces The Full Face Model Appendix: Geodesics in Diff and Landmark Space Exercises Natural Scenes and their Multiscale Analysis High Kurtosis in the Image Domain Scale Invariance in the Discrete and Continuous Setting The Continuous and Discrete Gaussian Pyramids Wavelets and the \"Local\" Structure of Images Distributions Are Needed Basics XIII: Gaussian Measures on Function Spaces The Scale -Rotation- and Translation-Invariant Gaussian Distribution Mode lII: Images Made Up of Independent Objects Further Models Appendix: A Stability Property of the 
Discrete Gaussian Pyramid Exercises Bibliography Index", "1. Introduction 2. Roadmap to the book 3. Mathematical context and background 4. Continuous-domain innovation models 5. Operators and their inverses 6. Splines and wavelets 7. Sparse stochastic processes 8. Sparse representations 9. Infinite divisibility and transform-domain statistics 10. Recovery of sparse signals 11. Wavelet-domain methods 12. Conclusion Appendix A. Singular integrals Appendix B. Positive definiteness Appendix C. Special functions.", "Our study of fractal landscapes departs from the simplest but yet effective model of fractional Brownian motion and explores its two-dimensional (2-D) extensions. We focus on the ability to introduce anisotropy in this model, and we are also interested in considering its discrete-space counterparts. We then move towards other multifractional and multifractal models providing more degrees of freedom for fitting complex 2-D fields. We note that many of the models and processing are implemented in FracLab, a software MATLAB Scilab toolbox for fractal processing of signals and images.", "The idea of using statistical inference for analyzing and understanding images has been used for at least 20 years, going back, for instance, to the work of Grenander [Gr] and Cooper [Co]. To apply these techniques, one needs, of course, a probabilistic model for some class of images or some class of structures present in images. Many models of this type have been introduced. There are stochastic models for image textures [GGGD], [ZMW], for contours in images [Mu], [GCK], for the decomposition of an image into regions [G-G], [M-S], for disparity maps, for grammatical parsing of shapes [Fu], for template matching, for speci c tasks such as face recognition [HGYGM]. The common framework for all these studies is to describe some class of images I(x; y) by means of a set of auxiliary variables fx g representing the salient structures in the images, e.g. 
edges, texture statistics, inferred depth values or relations, illumination features, medial axes or shape features, locations of key points such as eyes in a face, labels (as in character recognition), etc. Then i) a prior probability model for the hidden' variables p(fx g) and ii) an imagingmodel p(Ijfx g) for I, given the hidden variables, are de ned. Finally, an image is analyzed using Bayes's rule p(fx gjI) p(Ijfx g)p(fx g)" ] }
1702.05003
2953215316
The theory of sparse stochastic processes offers a broad class of statistical models to study signals. In this framework, signals are represented as realizations of random processes that are solutions of linear stochastic differential equations driven by white Lévy noises. Among these processes, generalized Poisson processes based on compound-Poisson noises admit an interpretation as random L-splines with random knots and weights. We demonstrate that every generalized Lévy process, from Gaussian to sparse, can be understood as the limit in law of a sequence of generalized Poisson processes. This enables a new conceptual understanding of sparse processes and suggests simple algorithms for the numerical generation of such objects.
Several behaviors can be observed within this extended family of random processes. For instance, self-similar Gaussian processes exhibit fractal behaviors. In one dimension, they include the fractional Brownian motion @cite_32 and its higher-order extensions @cite_10 . In higher dimensions, our framework covers the family of fractional Gaussian fields @cite_21 @cite_38 @cite_36 and finite-variance self-similar fields that appear to converge to fractional Gaussian fields at large scales @cite_16 . The self-similarity property is also compatible with the family of @math -stable processes @cite_37 , which have the particularity of having unbounded variances or second-order moments (when non-Gaussian). More generally, every process considered in our framework is part of the Lévy family, including Laplace processes @cite_42 and Student's processes @cite_14 . Upon varying the operator @math , one recovers Lévy processes @cite_1 , CARMA processes @cite_30 @cite_15 , and their multivariate generalizations @cite_23 @cite_47 . Unlike those examples, the compound-Poisson processes, although members of the Lévy family, are piecewise-constant and have a finite rate of innovation (FRI) in the sense of @cite_26 . For a signal, being FRI means that a finite quantity of information is sufficient to reconstruct it over a bounded domain.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_37", "@cite_14", "@cite_47", "@cite_26", "@cite_36", "@cite_21", "@cite_42", "@cite_32", "@cite_1", "@cite_23", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "2171440547", "", "", "581621380", "", "", "", "1533055209", "41409100", "2031753087", "", "606515514", "", "2190195640", "2145402794" ], "abstract": [ "Properties and examples of continuous-time ARMA (CARMA) processes driven by Levy processes are examined. By allowing Levy processes to replace Brownian motion in the definition of a Gaussian CARMA process, we obtain a much richer class of possibly heavy-tailed continuous-time stationary processes with many potential applications in finance, where such heavy tails are frequently observed in practice. If the Levy process has finite second moments, the correlation structure of the CARMA process is the same as that of a corresponding Gaussian CARMA process. In this paper we make use of the properties of general Levy processes to investigate CARMA processes driven by Levy processes W(t) without the restriction to finite second moments. We assume only that W (1) has finite r-th absolute moment for some strictly positive r. 
The processes so obtained include CARMA processes with marginal symmetric stable distributions.", "", "", "Introduction.- Asymptotics.- Preliminaries of Levy Processes.- Student-Levy Processes.- Student OU-type Processes.- Student Diffusion Processes.- Miscellanea.- Bessel Functions.- References.- Index.", "", "", "", "", "A retroreflective film comprising a lamination of a first base layer having vacuum met allized hemispherical depressions on one surface thereof and a second layer having substantially hemispherical projections from the surface thereof formed from an optically transparent film wherein the first and second layers are arranged so that the hemispherical depressions of the base layer and the substantially hemispherical projections of the intermediate layer are concentrically arranged and the radius of the hemispherical depressions of the base layer is greater than the radius of the substantially hemispherical projections of the second layer. A laminates construction with matched opposing substantially hemispherical projections on the second layer is preferably provided with a third optically clear overlay film.", "", "", "1. Introduction 2. Roadmap to the book 3. Mathematical context and background 4. Continuous-domain innovation models 5. Operators and their inverses 6. Splines and wavelets 7. Sparse stochastic processes 8. Sparse representations 9. Infinite divisibility and transform-domain statistics 10. Recovery of sparse signals 11. Wavelet-domain methods 12. Conclusion Appendix A. Singular integrals Appendix B. Positive definiteness Appendix C. Special functions.", "", "It is well documented that natural images are compressible in wavelet bases and tend to exhibit fractal properties. In this paper, we investigate statistical models that mimic these behaviors. We then use our models to make predictions on the statistics of the wavelet coefficients. 
Following an innovation modeling approach, we identify a general class of finite-variance self-similar sparse random processes. We first prove that spatially dilated versions of self-similar sparse processes are asymptotically Gaussian as the dilation factor increases. Based on this fundamental result, we show that the coarse-scale wavelet coefficients of these processes are also asymptotically Gaussian, provided the wavelet has enough vanishing moments. Moreover, we quantify the degree of Gaussianity by deriving the theoretical evolution of the kurtosis of the wavelet coefficients across scales. Finally, we apply our analysis to one- and two-dimensional signals, including natural images, and show that the wavelet coefficients ...", "A generalization of fractional Brownian motion (fBm) of parameter H in ]0, 1[ is proposed. More precisely, this work leads to nth-order fBm (n-fBm) of H parameter in ]n-1, n[, where n is any strictly positive integer. They include fBm for the special case n=1. Properties of these new processes are investigated. Their covariance function are given, and it is shown that they are self similar. In addition, their spectral shape is assessed as 1 f sup spl alpha with spl alpha belonging to ]1; + spl infin [, providing a larger framework than classical fBm. Special interest is given to their nth-order stationary increments, which extend fractional Gaussian noises. The covariance function and power spectral densities are calculated. The properties and signal processing tasks such as a Cholesky-type synthesis technique and a maximum likelihood estimation method of the H parameter are presented. The results show that the estimator is efficient (unbiased and reaches the Cramer-Rao lower bound) for a large majority of tested values." ] }
1702.05241
2953078836
In industrial control systems, devices such as Programmable Logic Controllers (PLCs) are commonly used to directly interact with sensors and actuators, and perform local automatic control. PLCs run software on two different layers: a) firmware (i.e. the OS) and b) control logic (processing sensor readings to determine control actions). In this work, we discuss ladder logic bombs, i.e. malware written in ladder logic (or one of the other IEC 61131-3-compatible languages). Such malware would be inserted by an attacker into existing control logic on a PLC, and either persistently change the behavior, or wait for specific trigger signals to activate malicious behaviour. For example, the LLB could replace legitimate sensor readings with manipulated values. We see the concept of LLBs as a generalization of attacks such as the Stuxnet attack. We introduce LLBs on an abstract level, and then demonstrate several designs based on real PLC devices in our lab. In particular, we also focus on stealthy LLBs, i.e. LLBs that are hard to detect by human operators manually validating the program running in PLCs. In addition to introducing vulnerabilities on the logic layer, we also discuss countermeasures and we propose two detection techniques.
General Threats to ICS: It has been observed over the years that process control systems are vulnerable to various exploits with potentially damaging physical consequences @cite_5 @cite_2 @cite_13 @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_13", "@cite_2" ], "mid": [ "", "207267598", "2130751706", "2038651258" ], "abstract": [ "", "In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.", "This article investigates the vulnerabilities of Supervisory Control and Data Acquisition (SCADA) systems which monitor and control the modern day irrigation canal systems. This type of monitoring and control infrastructure is also common for many other water distribution systems. We present a linearized shallow water partial differential equation (PDE) system that can model water flow in a network of canal pools which are equipped with lateral offtakes for water withdrawal and are connected by automated gates. The knowledge of the system dynamics enables us to develop a deception attack scheme based on switching the PDE parameters and proportional (P) boundary control actions, to withdraw water from the pools through offtakes. We briefly discuss the limits on detectability of such attacks. We use a known formulation based on low frequency approximation of the PDE model and an associated proportional integral (PI) controller, to create a stealthy deception scheme capable of compromising the performance of the closed-loop system. We test the proposed attack scheme in simulation, using a shallow water solver; and show that the attack is indeed realizable in practice by implementing it on a physical canal in Southern France: the Gignac canal. 
A successful field experiment shows that the attack scheme enables us to steal water stealthily from the canal until the end of the attack.", "A power grid is a complex system connecting electric power generators to consumers through power transmission and distribution networks across a large geographical area. System monitoring is necessary to ensure the reliable operation of power grids, and state estimation is used in system monitoring to best estimate the power grid state through analysis of meter measurements and power system models. Various techniques have been developed to detect and identify bad measurements, including interacting bad measurements introduced by arbitrary, nonrandom causes. At first glance, it seems that these techniques can also defeat malicious measurements injected by attackers. In this article, we expose an unknown vulnerability of existing bad measurement detection algorithms by presenting and analyzing a new class of attacks, called false data injection attacks, against state estimation in electric power grids. Under the assumption that the attacker can access the current power system configuration information and manipulate the measurements of meters at physically protected locations such as substations, such attacks can introduce arbitrary errors into certain state variables without being detected by existing algorithms. Moreover, we look at two scenarios, where the attacker is either constrained to specific meters or limited in the resources required to compromise meters. We show that the attacker can systematically and efficiently construct attack vectors in both scenarios to change the results of state estimation in arbitrary ways. We also extend these attacks to generalized false data injection attacks, which can further increase the impact by exploiting measurement errors typically tolerated in state estimation. 
We demonstrate the success of these attacks through simulation using IEEE test systems, and also discuss the practicality of these attacks and the real-world constraints that limit their effectiveness." ] }
1702.05241
2953078836
In industrial control systems, devices such as Programmable Logic Controllers (PLCs) are commonly used to directly interact with sensors and actuators, and perform local automatic control. PLCs run software on two different layers: a) firmware (i.e. the OS) and b) control logic (processing sensor readings to determine control actions). In this work, we discuss ladder logic bombs, i.e. malware written in ladder logic (or one of the other IEC 61131-3-compatible languages). Such malware would be inserted by an attacker into existing control logic on a PLC, and either persistently change the behavior, or wait for specific trigger signals to activate malicious behaviour. For example, the LLB could replace legitimate sensor readings with manipulated values. We see the concept of LLBs as a generalization of attacks such as the Stuxnet attack. We introduce LLBs on an abstract level, and then demonstrate several designs based on real PLC devices in our lab. In particular, we also focus on stealthy LLBs, i.e. LLBs that are hard to detect by human operators manually validating the program running in PLCs. In addition to introducing vulnerabilities on the logic layer, we also discuss countermeasures and we propose two detection techniques.
In @cite_22 , Morris discusses different attacks, such as measurement injection, command injection, and denial of service, on SCADA control systems that use the MODBUS communication protocol. Much like the rest, this study is restricted to exploiting the network layer to attack the PLCs. It is therefore necessary to analyze control-logic vulnerabilities, which can be manifested through malicious logic additions.
{ "cite_N": [ "@cite_22" ], "mid": [ "2182092823" ], "abstract": [ "This paper presents a set of attacks against SCADA control systems. The attacks are grouped into 4 classes; reconnaissance, response and measurement injection, command injection and denial of service. The 4 classes are defined and each attack is described in detail. The response and measurement injection and command injection classes are subdivided into sub-classes based on attack complexity. Each attack described in this paper has been exercised against industrial control systems in a laboratory setting." ] }
1702.05241
2953078836
In industrial control systems, devices such as Programmable Logic Controllers (PLCs) are commonly used to directly interact with sensors and actuators, and perform local automatic control. PLCs run software on two different layers: a) firmware (i.e. the OS) and b) control logic (processing sensor readings to determine control actions). In this work, we discuss ladder logic bombs, i.e. malware written in ladder logic (or one of the other IEC 61131-3-compatible languages). Such malware would be inserted by an attacker into existing control logic on a PLC, and either persistently change the behavior, or wait for specific trigger signals to activate malicious behaviour. For example, the LLB could replace legitimate sensor readings with manipulated values. We see the concept of LLBs as a generalization of attacks such as the Stuxnet attack. We introduce LLBs on an abstract level, and then demonstrate several designs based on real PLC devices in our lab. In particular, we also focus on stealthy LLBs, i.e. LLBs that are hard to detect by human operators manually validating the program running in PLCs. In addition to introducing vulnerabilities on the logic layer, we also discuss countermeasures and we propose two detection techniques.
In @cite_15 , Karnouskos discusses Stuxnet and how it managed to deviate from the expected behaviour of a PLC. In @cite_8 , Kim discusses the cyber-security issues in nuclear power plants, focusing on Stuxnet-inherited malware attacks on control systems, their future impact, and the corresponding countermeasures.
{ "cite_N": [ "@cite_15", "@cite_8" ], "mid": [ "2039148409", "1982645273" ], "abstract": [ "Industrial systems consider only partially security, mostly relying on the basis of “isolated” networks, and controlled access environments. Monitoring and control systems such as SCADA DCS are responsible for managing critical infrastructures operate in these environments, where a false sense of security assumptions is usually made. The Stuxnet worm attack demonstrated widely in mid 2010 that many of the security assumptions made about the operating environment, technological capabilities and potential threat risk analysis are far away from the reality and challenges modern industrial systems face. We investigate in this work the highly sophisticated aspects of Stuxnet, the impact that it may have on existing security considerations and pose some thoughts on the next generation SCADA DCS systems from a security perspective.", "Abstract With the introduction of new technology based on the increasing digitalization of control systems, the potential of cyber attacks has escalated into a serious threat for nuclear facilities, resulting in the advent of the Stuxnet. In this regard, the nuclear industry needs to consider several cyber security issues imposed on nuclear power plants, including regulatory guidelines and standards for cyber security, the possibility of Stuxnet-inherited malware attacks in the future, and countermeasures for protecting nuclear power plants against possible cyber attacks." ] }
1702.05241
2953078836
In industrial control systems, devices such as Programmable Logic Controllers (PLCs) are commonly used to directly interact with sensors and actuators, and perform local automatic control. PLCs run software on two different layers: a) firmware (i.e. the OS) and b) control logic (processing sensor readings to determine control actions). In this work, we discuss ladder logic bombs, i.e. malware written in ladder logic (or one of the other IEC 61131-3-compatible languages). Such malware would be inserted by an attacker into existing control logic on a PLC, and either persistently change the behavior, or wait for specific trigger signals to activate malicious behaviour. For example, the LLB could replace legitimate sensor readings with manipulated values. We see the concept of LLBs as a generalization of attacks such as the Stuxnet attack. We introduce LLBs on an abstract level, and then demonstrate several designs based on real PLC devices in our lab. In particular, we also focus on stealthy LLBs, i.e. LLBs that are hard to detect by human operators manually validating the program running in PLCs. In addition to introducing vulnerabilities on the logic layer, we also discuss countermeasures and we propose two detection techniques.
In @cite_10 , the authors investigate vulnerabilities of industrial PLCs at the firmware and network level, leaving out any analysis of logic-level exploits. In this work, we provide a consolidated study of logic-layer manipulations and propose logic-level safeguarding methods, unlike the network-based security methods (e.g., firewalls, VPN security and secured layered architectures) proposed in the majority of the papers above.
{ "cite_N": [ "@cite_10" ], "mid": [ "2074450875" ], "abstract": [ "In this paper we have shown that PLC devices are complex embedded systems often relying on some operating system. They are plagued by the same sorts of vulnerabilities and exploits as general purpose operating systems. In fact, the number of latent vulnerabilities in the typical microprocessor-based device can be surprisingly high. However we don't need bugs or vulnerabilities in order to attack the PLC. We can exploit its normal operation provided we have some access to the device. It is suggested that the one of effective ways to avoid expensive business losses or production disruption due to misuse of the PLC is to start protecting the system with defence-in-depth measures." ] }
1702.05147
2950414386
Current surveillance and control systems still require human supervision and intervention. This work presents a novel automatic handgun detection system in videos appropriate for both surveillance and control purposes. We reformulate this detection problem as the problem of minimizing false positives and solve it by building the key training data-set guided by the results of a deep Convolutional Neural Networks (CNN) classifier, then assessing the best classification model under two approaches, the sliding window approach and the region proposal approach. The most promising results are obtained by a Faster R-CNN based model trained on our new database. The best detector shows high potential even in low quality YouTube videos and provides satisfactory results as an automatic alarm system. Among 30 scenes, it successfully activates the alarm after five successive true positives in less than 0.2 seconds, in 27 scenes. We also define a new metric, Alarm Activation per Interval (AApI), to assess the performance of a detection model as an automatic detection system in videos.
The first and traditional sub-area in gun detection focuses on detecting concealed handguns in X-ray or millimetric wave images. The most representative application in this context is luggage control in airports. Existing methods achieve high accuracies by using different combinations of feature extractors and detectors, either using simple density descriptors @cite_28 , border detection and pattern matching @cite_1 , or using more complex methods such as cascade classifiers with boosting @cite_15 . The effectiveness of these methods made them essential in some specific places. However, they have several limitations. As these systems are based on metal detection, they cannot detect non-metallic guns. They are expensive to deploy in many places, as they need to be combined with X-ray scanners and conveyor belts. They are not precise because they react to all metallic objects.
{ "cite_N": [ "@cite_28", "@cite_15", "@cite_1" ], "mid": [ "", "1646190092", "2016107543" ], "abstract": [ "", "A method is proposed for automatic detection of the concealed pistols detected by passive millimeter in security applications. In this paper, we extend four half-surrounded Haar-like features and use integral image to rapidly calculate the rectangle features. Then we obtain a multi-layer classifier cascaded by several strong classifiers using AdaBoost algorithm to detect the contraband. Various passive millimeter images from both published literatures and our own measurements are used for training and testing. The experimental results show that the met allic pistols in different sizes, shapes, and angles can be accurately detected, so this method is useful for automatic detection of pistols.", "The goal of this research is to develop a process, using current imaging hardware and without human intervention, that provides an accurate and timely detection alert of a concealed weapon and its location in the image of the luggage. There are several processes in existence that are able to highlight or otherwise outline a concealed weapon in baggage but so far those processes still require a highly trained operator to observe the resulting image and draw the correct conclusions. We attempted three different approaches in this project. The first approach uses edge detection combined with pattern matching to determine the existence of a concealed pistol. Rather than use the whole body of the weapon which varies significantly, the trigger guard was used since it is fairly consistent in dimensions. While the processes were reliable in detecting a pistol's presence, on any but the simplest of images, the computational time was excessive and a substantial number of false positives were generated. The second approach employed Daubechie wavelet transforms but the results have so far been inconclusive. 
A third approach involving an algorithm based on the scale invariant feature transform (SIFT) is proposed." ] }
1702.05147
2950414386
Current surveillance and control systems still require human supervision and intervention. This work presents a novel automatic handgun detection system in videos appropriate for both surveillance and control purposes. We reformulate this detection problem as the problem of minimizing false positives and solve it by building the key training data-set guided by the results of a deep Convolutional Neural Networks (CNN) classifier, then assessing the best classification model under two approaches, the sliding window approach and the region proposal approach. The most promising results are obtained by a Faster R-CNN based model trained on our new database. The best detector shows high potential even in low quality YouTube videos and provides satisfactory results as an automatic alarm system. Among 30 scenes, it successfully activates the alarm after five successive true positives in less than 0.2 seconds, in 27 scenes. We also define a new metric, Alarm Activation per Interval (AApI), to assess the performance of a detection model as an automatic detection system in videos.
Sliding window approach: it is an exhaustive method that considers a large number of candidate windows, in the order of @math , from the input image. It scans the input image, at all locations and multiple scales, with a window and runs the classifier on each one of the windows. The most relevant works in this context improve detection performance by building more sophisticated classifiers. The Histogram of Oriented Gradients (HOG) based model @cite_23 uses the HOG descriptor for feature extraction to predict the object class in each window. The Deformable Parts Model (DPM) @cite_8 , an extension of the HOG-based model, uses (1) the HOG descriptor to calculate low-level features, (2) a matching algorithm for deformable part-based models that uses pictorial structures @cite_21 and (3) discriminative learning with latent variables (latent SVM). This model provides very good accuracies for pedestrian detection at a speed of around 0.07 fps, i.e., 14 s per image.
{ "cite_N": [ "@cite_21", "@cite_23", "@cite_8" ], "mid": [ "2030536784", "2161969291", "" ], "abstract": [ "In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "" ] }
1702.04811
2950004938
Interpreting the performance of deep learning models beyond test set accuracy is challenging. Characteristics of individual data points are often not considered during evaluation, and each data point is treated equally. We examine the impact of a test set question's difficulty to determine if there is a relationship between difficulty and performance. We model difficulty using well-studied psychometric methods on human response patterns. Experiments on Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the likelihood of answering a question correctly is impacted by the question's difficulty. As DNNs are trained with more data, easy examples are learned more quickly than hard examples.
There has been work in the NLP community on modeling latent characteristics of data @cite_5 and of annotators, but none that applies the resulting metrics to interpret DNN models. That line of work models the probability that a label is correct via the probability that an annotator labels an item correctly according to the model, but does not consider the difficulty or discriminatory ability of the data points.
{ "cite_N": [ "@cite_5" ], "mid": [ "2128669672" ], "abstract": [ "In this paper, we describe a case study of a sentence-level categorization in which tagging instructions are developed and used by four judges to classify clauses from the Wall Street Journal as either subjective or objective. Agreement among the four judges is analyzed, and based on that analysis, each clause is given a final classification. To provide empirical support for the classifications, correlations are assessed in the data between the subjective category and a basic semantic class posited by Quirk, Greenbaum, Leech and Svartvik (1985)." ] }
1702.04510
2592711854
In machine translation (MT) that involves translating between two languages with significant differences in word order, determining the correct word order of translated words is a major challenge. The dependency parse tree of a source sentence can help to determine the correct word order of the translated words. In this paper, we present a novel reordering approach utilizing a neural network and dependency-based embeddings to predict whether the translations of two source words linked by a dependency relation should remain in the same order or should be swapped in the translated sentence. Experiments on Chinese-to-English translation show that our approach yields a statistically significant improvement of 0.57 BLEU point on benchmark NIST test sets, compared to our prior state-of-the-art statistical MT system that uses sparse dependency-based reordering features.
Our neural reordering classifier serves as a feature function in SMT decoding, guiding the decoding process. This is similar to prior work on neural decoding features, i.e., the neural language model @cite_36 and the neural joint model @cite_10 , a source-augmented language model. However, these features do not address word reordering.
{ "cite_N": [ "@cite_36", "@cite_10" ], "mid": [ "932413789", "2251682575" ], "abstract": [ "We explore the application of neural language models to machine translation. We develop a new model that combines the neural probabilistic language model of , rectified linear units, and noise-contrastive estimation, and we incorporate it into a machine translation system both by reranking k-best lists and by direct integration into the decoder. Our large-scale, large-vocabulary experiments across four language pairs show that our neural language model improves translation quality by up to 1.1 Bleu.", "Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into any MT decoder. We also present several variations of the NNJM which provide significant additive improvements." ] }
1702.04510
2592711854
In machine translation (MT) that involves translating between two languages with significant differences in word order, determining the correct word order of translated words is a major challenge. The dependency parse tree of a source sentence can help to determine the correct word order of the translated words. In this paper, we present a novel reordering approach utilizing a neural network and dependency-based embeddings to predict whether the translations of two source words linked by a dependency relation should remain in the same order or should be swapped in the translated sentence. Experiments on Chinese-to-English translation show that our approach yields a statistically significant improvement of 0.57 BLEU point on benchmark NIST test sets, compared to our prior state-of-the-art statistical MT system that uses sparse dependency-based reordering features.
While continuous representations were originally defined for words @cite_35 , we also define continuous representations for POS tags, dependency labels, and indicator features. Extending continuous representations to non-word features has also been done in neural dependency parsing @cite_9 @cite_19 , which shows better performance using continuous feature representations over the traditional discrete representation.
{ "cite_N": [ "@cite_19", "@cite_35", "@cite_9" ], "mid": [ "2311132329", "", "2250861254" ], "abstract": [ "We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.", "", "Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2 improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2 unlabeled attachment score on the English Penn Treebank." ] }
1702.04739
2590392595
We propose a parallel graph-based data clustering algorithm using CUDA GPU, based on exact clustering of the minimum spanning tree in terms of a minimum isoperimetric criteria. We also provide a comparative performance analysis of our algorithm with other related ones which demonstrates the general superiority of this parallel algorithm over other competing algorithms in terms of accuracy and speed.
Various algorithms and strategies have been proposed to improve the computational performance of data clustering and its applications to image segmentation, signal processing, etc. To this end, some attempts have been devoted to developing parallel adaptations of well-known methods. K-means is one of the most popular algorithms in the data clustering literature. Stoffel and Belkoniene @cite_15 first developed parallel techniques for this algorithm. Afterwards, many attempts were devoted to designing and implementing parallel versions of k-means for large data sets (e.g. see @cite_5 and @cite_4 ).
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_4" ], "mid": [ "1870625491", "199449237", "" ], "abstract": [ "To cluster increasingly massive data sets that are common today in data and text mining, we propose a parallel implementation of the k-means clustering algorithm based on the message passing model. The proposed algorithm exploits the inherent data-parallelism in the kmeans algorithm. We analytically show that the speedup and the scaleup of our algorithm approach the optimal as the number of data points increases. We implemented our algorithm on an IBM POWERparallel SP2 with a maximum of 16 nodes. On typical test data sets, we observe nearly linear relative speedups, for example, 15.62 on 16 nodes, and essentially linear scaleup in the size of the data set and in the number of clusters desired. For a 2 gigabyte test data set, our implementation drives the 16 node SP2 at more than 1.8 gigaflops.", "This paper describes the realization of a parallel version of the k h-means clustering algorithm. This is one of the basic algorithms used in a wide range of data mining tasks. We show how a database can be distributed and how the algorithm can be applied to this distributed database. The tests conducted on a network of 32 PCs showed for large data sets a nearly ideal speedup.", "" ] }
1702.04739
2590392595
We propose a parallel graph-based data clustering algorithm using CUDA GPU, based on exact clustering of the minimum spanning tree in terms of a minimum isoperimetric criteria. We also provide a comparative performance analysis of our algorithm with other related ones which demonstrates the general superiority of this parallel algorithm over other competing algorithms in terms of accuracy and speed.
Normalized cut @cite_13 is another popular graph-based data clustering algorithm, for which XianLou and ShuangYuan @cite_1 developed a parallel version and gained about a 2.34 times speed-up in the best case. In the context of hierarchical clustering, Olson @cite_14 first introduced a parallel algorithm. Dahlhaus @cite_2 presented another parallel algorithm and used it to solve the undirected split decomposition problem in graphs. Amal Elsayed @cite_18 designed and implemented a parallel document clustering algorithm based on a hierarchical approach and gained a 5 times speed-up in the best case. On the other hand, some other parallel algorithms have been developed on various architectures and models. In @cite_0 , a parallel version of k-means clustering was implemented on the MapReduce model. Middelmann and Sanders @cite_10 proposed a heuristic graph-based image segmentation algorithm and implemented a parallel version for shared-memory machines.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_10", "@cite_1", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "2327628302", "2049631158", "", "2063149017", "2116762767", "2103191280", "2121947440" ], "abstract": [ "As the amount of internet documents has been growing, document clustering has become practically important. This has led the interest in developing document clustering algorithms. Exploiting parallelism plays an important role in achieving fast and high quality clustering. In this paper, we propose a parallel algorithm that adopts a hierarchical document clustering approach. Our focus is to exploit the sources of parallelism to improve performance and decrease clustering time. The proposed parallel algorithm is tested using a test-bed collection of 749 documents from CACM. A multiprocessor system based on message-passing is used. Various parameters are considered for evaluating performance including average inter-cluster similarity, speedup and processors' utilization. Simulation results show that the proposed algorithm improves performance, decreases the clustering time, and increases the overall speedup while still keeping a high clustering quality. By increasing the number of processors, the clustering time decreases till a certain point where any more processors will no longer be effective. Moreover, the algorithm is applicable for different domains for other document collections.", "Hierarchical clustering is common method used to determine clusters of similar data points in multi-dimensional spaces. @math algorithms, where @math is the number of points to cluster, have long been known for this problem. This paper discusses parallel algorithms to perform hierarchical clustering using various distance metrics. 
I describe @math time algorithms for clustering using the single link, average link, complete link, centroid, median, and minimum variance metrics on an @math node CRCW PRAM and @math algorithms for these metrics (except average link and complete link) on @math node butterfly networks or trees. Thus, optimal efficiency is achieved for a significant number of processors using these distance metrics. A general algorithm is given that can be used to perform clustering with the complete link and average link metrics on a butterfly. While this algorithm achieves optimal efficiency for the general class of metrics, it is not optimal for the specific cases of complete link and average link clustering.", "", "This paper proposes a parallel algorithm using CUDA GPU to accelerate the process of image segmentation algorithm based on Normalized Cut. After giving a summary of the key concepts and theory of normalized cut and CUDA, detailed implementation issues are discussed including the calculation of affinity matrix, transforming symmetric matrices to symmetric tridiagonal matrices, calculation of generalized eigenvalue value and its associated eigenvetor, the choice of splitting point, stopping criterion etc. This algorithm doesn't sparse the similarity matrix, so there is no information loss in transforming, which will lead to a more real and reliable segmentation. The experiment shows that the parallel algorithm using CUDA not only segment the image reliably but also have a great performance speed-up.", "Data clustering has been received considerable attention in many applications, such as data mining, document retrieval, image segmentation and pattern classification. The enlarging volumes of information emerging by the progress of technology, makes clustering of very large scale of data a challenging task. In order to deal with the problem, many researchers try to design efficient parallel clustering algorithms. 
In this paper, we propose a parallel k -means clustering algorithm based on MapReduce, which is a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm can scale well and efficiently process large datasets on commodity hardware.", "We present efficient (parallel) algorithms for two hierarchical clustering heuristics. We point out that these heuristics can also be applied to solving some algorithmic problems in graphs, including split decomposition. We show that efficient parallel split decomposition induces an efficient parallel parity graph recognition algorithm. This is a consequence of the result of S. Cicerone and D. Di Stefano 7] that parity graphs are exactly those graphs that can be split decomposed into cliques and bipartite graphs.", "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging." ] }
1702.04563
2952978987
We consider a basic caching system, where a single server with a database of @math files (e.g. movies) is connected to a set of @math users through a shared bottleneck link. Each user has a local cache memory with a size of @math files. The system operates in two phases: a placement phase, where each cache memory is populated up to its size from the database, and a following delivery phase, where each user requests a file from the database, and the server is responsible for delivering the requested contents. The objective is to design the two phases to minimize the load (peak or average) of the bottleneck link. We characterize the rate-memory tradeoff of the above caching system within a factor of @math for both the peak rate and the average rate (under uniform file popularity), improving state of the arts that are within a factor of @math and @math respectively. Moreover, in a practically important case where the number of files ( @math ) is large, we exactly characterize the tradeoff for systems with no more than @math users, and characterize the tradeoff within a factor of @math otherwise. To establish these results, we develop two new converse bounds that improve over the state of the art.
for @math , where @math is uniformly random in @math , and @math denotes the number of distinct requests in @math . Here the letter "e" in the subscript stands for "effective", given that the function @math can also be interpreted as the "effective" number of files for any demand @math . Specifically, for any demand @math , the needed communication rate stated in equation ) is exactly the peak communication rate stated in equation ) for a caching system with @math files. Furthermore, for general (non-integer) @math , @math and @math are defined as the lower convex envelopes of their values at @math , respectively. Specifically, for any non-integer @math , we have Rigorously, the fact that equations ) and ) define lower convex envelopes is due to the convexity of @math and @math on @math . This convexity was observed in @cite_4 and can be proved using elementary combinatorics. A short proof of the convexity of @math and @math can be found in Appendix . Given the above upper bounds, we develop improved converse bounds in this paper, which provide better characterizations of both the peak rate and the average rate.
{ "cite_N": [ "@cite_4" ], "mid": [ "2527594325" ], "abstract": [ "We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination." ] }
1702.04521
2952136670
Neural language models predict the next token using a latent representation of the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from a memory of the recent history which can facilitate learning mid- and long-range dependencies. However, conventional attention mechanisms used in memory-augmented neural language models produce a single output vector per time step. This vector is used both for predicting the next token as well as for the key and value of a differentiable memory of a token history. In this paper, we propose a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution. This model outperforms existing memory-augmented neural language models on two corpora. Yet, we found that our method mainly utilizes a memory of the five most recent output representations. This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par with more sophisticated memory-augmented neural language models.
Early attempts at using memory in neural networks were undertaken by @cite_2 and @cite_10 , which performed nearest-neighbor operations on input vectors and fit parametric models to the retrieved sets. The dedicated use of external memory in neural architectures has more recently witnessed increased interest. @cite_13 introduced Memory Networks to explicitly segregate memory storage from the computation of the neural network, and @cite_0 trained this model end-to-end with an attention-based memory addressing mechanism. The Neural Turing Machines by @cite_7 add an external differentiable memory with read-write functions to a controller recurrent neural network, and have shown promising results in simple sequence tasks such as copying and sorting. These models make use of external memory, whereas our model directly uses a short sequence from the history of tokens to dynamically populate an addressable memory.
{ "cite_N": [ "@cite_7", "@cite_10", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "2950527759", "2024585065", "2150355110", "2178931739", "" ], "abstract": [ "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.", "The paper gives a survey of the learning circuits which became known as learning matrices and some of their possible technological applications. The first section describes the principle of learning matrices. So-called conditioned connections between the characteristics of an object and the meaning of an object are formed in the learning phase. During the operation of connecting the characteristics of an object with its meaning (EB operation of the knowing phase) upon presenting the object characteristics, the associated most similar meaning is realized in the form of a signal by maximum likelihood decoding. Conversely, in operation from the meaning of an object to its characteristics (BE operation) the associated object characteristics are obtained as signals by parallel reading upon application of an object meaning. According to the characteristic signals processed (binary or analog signals) discrimination must be made between binary and nonbinary learning matrices. In the case of the binary learning matrix the conditioned connections are a statistical measure for the frequency of the coordination of object characteristics and object meaning, in the case of the nonbinary learning matrix they are a measure for an analog value proportional to a characteristic. 
Both types of matrices allow for the characteristic sets applied during EB operation to be unsystematically disturbed within limits. Moreover, the nonbinary learning matrix is invariant to systematic deviations between presented and learned characteristic sets (invariance to affine transformation, translation and rotated skewness).", "Basic backpropagation, which is a simple method now being widely used in areas like pattern recognition and fault diagnosis, is reviewed. The basic equations for backpropagation through time, and applications to areas like pattern recognition involving dynamic systems, systems identification, and control are discussed. Further extensions of this method, to deal with systems other than neural networks, systems involving simultaneous equations, or true recurrent networks, and other practical issues arising with the method are described. Pseudocode is provided to clarify the algorithms. The chain rule for ordered derivatives-the theorem which underlies backpropagation-is briefly discussed. The focus is on designing a simpler version of backpropagation which can be translated into computer code and applied directly by neural network users.", "The problem of synthesizing apparatus that will automatically simulate man's ability to recognize and to learn to recognize patterns is discussed and it is concluded that analogue circuits, rather than the digital switching circuits that have been employed in the past, provide the simpler solution. A new circuit unit that possesses many of the essential functional characteristics exhibited by nerve cells in the brain is derived from earlier work on the electrical simulation of nervous-system functional activity and forms the basic element of the circuits. The new analogue apparatus consists of a number of distinct functional circuits arranged in a definite sequence, through which signals derived from the patterns to be recognized pass simultaneously on their way to the final output terminals.
Classification information may be built into the apparatus initially if it is available, but if not, it can be stored automatically in a special unit during a setting-up procedure in which samples of the pattern types that the apparatus will be required to recognize are presented, together with identification signals. Low-resolution automatic pattern-recognition apparatus is described, and examples illustrate the setting-up procedure and subsequent performance of the apparatus.", "" ] }
1702.04307
2594912618
We give a nearly linear time randomized approximation scheme for the Held-Karp bound [Held and Karp, 1970] for metric TSP. Formally, given an undirected edge-weighted graph @math on @math edges and @math , the algorithm outputs in @math time, with high probability, a @math -approximation to the Held-Karp bound on the metric TSP instance induced by the shortest path metric on @math . The algorithm can also be used to output a corresponding solution to the Subtour Elimination LP. We substantially improve upon the @math running time achieved previously by Garg and Khandekar. The LP solution can be used to obtain a fast randomized @math -approximation for metric TSP which improves upon the running time of previous implementations of Christofides' algorithm.
Our work here builds extensively on Karger's randomized nearly linear time mincut algorithm @cite_0 . As mentioned already, we adapt his algorithm to a partially dynamic setting informed by the MWU framework. Thorup also builds on Karger's tree packing ideas to develop a dynamic mincut algorithm. Thorup's algorithm is rather involved and is slower than what we are able to achieve for the partially dynamic setting; he achieves an update time of @math while we achieve polylogarithmic time. There are other obstacles to integrating his ideas and data structure with the needs of the MWU framework, as we already remarked. For unweighted incremental mincut, a deterministic data structure with poly-logarithmic amortized update time was recently developed.
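The MWU framework invoked above can be illustrated in its generic "experts" form. This is a textbook sketch under illustrative parameters (learning rate `eta`, a small loss matrix), not the paper's LP-solving instantiation for the Held-Karp bound.

```python
import numpy as np

def mwu_experts(loss_matrix, eta=0.1):
    """Multiplicative weights update over n experts for T rounds.

    loss_matrix: shape (T, n), losses in [0, 1].
    Returns the regret of the weighted strategy against the best fixed expert.
    """
    T, n = loss_matrix.shape
    w = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        p = w / w.sum()                     # play the normalized weights
        total_loss += p @ loss_matrix[t]    # incur expected loss
        w *= np.exp(-eta * loss_matrix[t])  # multiplicative penalty
    best_fixed = loss_matrix.sum(axis=0).min()
    return total_loss - best_fixed
```

The classical guarantee is that this regret grows like eta*T + ln(n)/eta, i.e. sublinearly in T for a suitable eta; MWU-based LP solvers exploit exactly this to converge to near-feasible fractional solutions.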
{ "cite_N": [ "@cite_0" ], "mid": [ "1964510837" ], "abstract": [ "We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a \"semiduality\" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m -edge, n -vertex graph with high probability in O (m log 3 n ) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O( m log 3 n ) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O ( n 2 log 3 n ). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner." ] }
1702.04376
2781545259
In a recent paper we analyzed the space complexity of streaming algorithms whose goal is to decide membership of a sliding window in a fixed language. For the class of regular languages we proved a space trichotomy theorem: for every regular language the optimal space bound is either constant, logarithmic or linear. In this paper we continue this line of research: we present natural characterizations of the constant- and logarithmic-space classes and establish tight relationships to the concept of language growth. We also analyze the space complexity with respect to automaton size and prove almost matching lower and upper bounds. Finally, we consider the decision problem of whether a language given by a DFA/NFA admits a sliding window algorithm using logarithmic/constant space.
In @cite_23 Fijalkow defines the online space complexity of a language @math . His definition is equivalent to the space complexity of the language @math in the standard streaming model described above. Among other results, Fijalkow presents a probabilistic automaton @math such that the language accepted by @math (with threshold @math ) needs space @math in the streaming model. Streaming a language @math in the standard model is also related to the concept of automaticity @cite_34 . For a language @math , the automaticity @math of @math is the function @math , where @math is the minimal number of states of a DFA @math such that for all words @math of length at most @math : @math if and only if @math . Clearly, every regular language @math has constant automaticity. Karp @cite_1 proved that for every non-regular language @math , @math for infinitely many @math . This implies that for every non-regular language @math , membership checking in the standard streaming model is not possible in space @math .
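In the standard streaming model referenced above, constant space for a regular language simply means simulating a DFA one symbol at a time, keeping only the current state. A minimal sketch (the even-number-of-a's automaton below is an illustrative example, not one from the paper):

```python
def stream_membership(delta, start, accepting, stream):
    """Constant-space streaming membership test for a regular language.

    delta:     transition function as a dict (state, symbol) -> state
    accepting: set of accepting states
    Returns, after each symbol, whether the prefix read so far is accepted.
    """
    q = start
    answers = []
    for a in stream:
        q = delta[(q, a)]          # only the current state is stored: O(1) space
        answers.append(q in accepting)
    return answers
```

For example, with states {0, 1} tracking the parity of a's and 0 accepting, the stream "aaba" yields accept/reject answers after every prefix. For non-regular languages, by Karp's result quoted above, no such bounded-state simulation exists.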
{ "cite_N": [ "@cite_34", "@cite_1", "@cite_23" ], "mid": [ "", "2000920814", "2293632979" ], "abstract": [ "", "Any sequential machine M represents a function f M from input sequences to output symbols. A function f is representable if some finite-state sequential machine represents it. The function f M is called an n-th order approximation to a given function f if f M is equal to f for all input sequences of length less than or equal to n . It is proved that, for an arbitrary nonrepresentable function f , there are infinitely many n such that any sequential machine representing an n th order approximation to f has more than n 2 + 1 states. An analogous result is obtained for two-way sequential machines and, using these and related results, lower bounds are obtained for two-way sequential machines and, using these and related results, lower bounds are obtained on the amount of work tape required online and offline Turing machines that compute nonrepresentable functions.", "In this paper, we define the online space complexity of languages, as the size of the smallest abstract machine processing words sequentially and able to determine at every point whether the word read so far belongs to the language or not. The first part of this paper motivates this model and provides examples and preliminary results." ] }
1702.04250
2951280619
Molecular dynamics is an important tool for computational biologists, chemists, and materials scientists, consuming a sizable amount of supercomputing resources. Many of the investigated systems contain charged particles, which can only be simulated accurately using a long-range solver, such as PPPM. We extend the popular LAMMPS molecular dynamics code with an implementation of PPPM particularly suitable for the second-generation Intel Xeon Phi. Our main target is the optimization of computational kernels by means of vectorization, and we observe speedups in these kernels of up to 12x. These improvements carry over to LAMMPS users, with overall speedups ranging from 2x to 3x, without requiring users to retune input parameters. Furthermore, our optimizations make it easier for users to determine optimal input parameters for attaining top performance.
Besides LAMMPS, many other popular molecular dynamics codes contain long-ranged solvers. Examples include, but are not limited to, Gromacs @cite_18 , DL @cite_13 , AMBER @cite_14 , Desmond @cite_9 , and NAMD @cite_4 . These codes tend not to implement PPPM itself, in favor of related schemes such as PME @cite_6 , SPME @cite_2 , and @math -GSE @cite_21 . The main differences with respect to PPPM lie in the function used to interpolate atom charges onto the grid and back, and in the corresponding Green's function used to solve for the smooth part of the potential. There also exist schemes for long-ranged force evaluation that are not based on Fourier transforms, such as lattice Gaussian multigrid @cite_17 , Multilevel Summation @cite_22 , and @math -GSE @cite_21 .
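The charge-interpolation step that distinguishes these schemes can be illustrated with the simplest kernel, cloud-in-cell (linear) assignment, in 1D. This is a hypothetical sketch for intuition only: PPPM and (S)PME use higher-order stencils or B-spline kernels in 3D, but the structure — spreading each point charge onto nearby grid points with weights that sum to one — is the same.

```python
import numpy as np

def assign_charges_cic(positions, charges, n_grid, box_len):
    """Cloud-in-cell assignment of point charges onto a periodic 1D grid.

    Each charge is split linearly between its two nearest grid points,
    so total charge on the grid equals the total particle charge.
    """
    rho = np.zeros(n_grid)
    h = box_len / n_grid                      # grid spacing
    for x, q in zip(positions, charges):
        s = x / h
        i = int(np.floor(s))
        frac = s - i                          # fractional position in the cell
        rho[i % n_grid] += q * (1.0 - frac)   # share to left grid point
        rho[(i + 1) % n_grid] += q * frac     # share to right grid point
    return rho
```

After this step, the smooth part of the potential is solved on the grid (via FFTs in PPPM/PME, or in real space in multigrid-type methods), and forces are interpolated back with the same kernel to keep the scheme momentum-consistent.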
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_9", "@cite_21", "@cite_6", "@cite_2", "@cite_13", "@cite_17" ], "mid": [ "1981021420", "2103325328", "2150981663", "2328012222", "2153228625", "2058874255", "2067174909", "2035687084", "", "2020036580" ], "abstract": [ "Abstract A parallel message-passing implementation of a molecular dynamics (MD) program that is useful for bio(macro)molecules in aqueous environment is described. The software has been developed for a custom-designed 32-processor ring GROMACS (GROningen MAchine for Chemical Simulation) with communication to and from left and right neighbours, but can run on any parallel system onto which a a ring of processors can be mapped and which supports PVM-like block send and receive calls. The GROMACS software consists of a preprocessor, a parallel MD and energy minimization program that can use an arbitrary number of processors (including one), an optional monitor, and several analysis tools. The programs are written in ANSI C and available by ftp (information: gromacs@chem.rug.nl). The functionality is based on the GROMOS (GROningen MOlecular Simulation) package (van Gunsteren and Berendsen, 1987; BIOMOS B.V., Nijenborgh 4, 9747 AG Groningen). Conversion programs between GROMOS and GROMACS formats are included. The MD program can handle rectangular periodic boundary conditions with temperature and pressure scaling. The interactions that can be handled without modification are variable non-bonded pair interactions with Coulomb and Lennard-Jones or Buckingham potentials, using a twin-range cut-off based on charge groups, and fixed bonded interactions of either harmonic or constraint type for bonds and bond angles and either periodic or cosine power series interactions for dihedral angles. Special forces can be added to groups of particles (for non-equilibrium dynamics or for position restraining) or between particles (for distance restraints). 
The parallelism is based on particle decomposition. Interprocessor communication is largely limited to position and force distribution over the ring once per time step.", "Molecular dynamics (MD) allows the study of biological and chemical systems at the atomistic level on timescales from femtoseconds to milliseconds. It complements experiment while also offering a way to follow processes difficult to discern with experimental techniques. Numerous software packages exist for conducting MD simulations of which one of the widest used is termed Amber. Here, we outline the most recent developments, since version 9 was released in April 2006, of the Amber and AmberTools MD software packages, referred to here as simply the Amber package. The latest release represents six years of continued development, since version 9, by multiple research groups and the culmination of over 33 years of work beginning with the first version in 1979. The latest release of the Amber package, version 12 released in April 2012, includes a substantial number of important developments in both the scientific and computer science arenas. We present here a condensed vision of what Amber currently supports and where things are likely to head over the coming years. Figure 1 shows the performance in ns day of the Amber package version 12 on a single-core AMD FX-8120 8-Core 3.6GHz CPU, the Cray XT5 system, and a single GPU GTX680. © 2012 John Wiley & Sons, Ltd.", "NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomo- lecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. 
This article, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Finally, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, for example, the Tcl scripting language. The article also provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu.
The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications demonstrate also the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation.", "Although molecular dynamics (MD) simulations of biomolecular systems often run for days to months, many events of great scientific interest and pharmaceutical relevance occur on long time scales that remain beyond reach. We present several new algorithms and implementation techniques that significantly accelerate parallel MD simulations compared with current state-of-the-art codes. These include a novel parallel decomposition method and message-passing techniques that reduce communication requirements, as well as novel communication primitives that further reduce communication time.
We have also developed numerical techniques that maintain high accuracy while using single precision computation in order to exploit processor-level vector instructions. These methods are embodied in a newly developed MD code called Desmond that achieves unprecedented simulation throughput and parallel scalability on commodity clusters. Our results suggest that Desmond's parallel performance substantially surpasses that of any previously described code. For example, on a standard benchmark, Desmond's performance on a conventional Opteron cluster with 2K processors slightly exceeded the reported performance of IBM's Blue Gene L machine with 32K processors running its Blue Matter MD code.", "Gaussian split Ewald (GSE) is a versatile Ewald mesh method that is fast and accurate when used with both real-space and k-space Poisson solvers. While real-space methods are known to be asymptotically superior to k-space methods in terms of both computational cost and parallelization efficiency, k-space methods such as smooth particle-mesh Ewald (SPME) have thus far remained dominant because they have been more efficient than existing real-space methods for simulations of typical systems in the size range of current practical interest. Real-space GSE, however, is approximately a factor of 2 faster than previously described real-space Ewald methods for the level of force accuracy typically required in biomolecular simulations, and is competitive with leading k-space methods even for systems of moderate size. Alternatively, GSE may be combined with a k-space Poisson solver, providing a conveniently tunable k-space method that performs comparably to SPME. The GSE method follows naturally from a uniform framework that we introduce to concisely describe the differences between existing Ewald mesh methods.", "An N⋅log(N) method for evaluating electrostatic energies and forces of large periodic systems is presented. The method is based on interpolation of the reciprocal space Ewald sums and evaluation of the resulting convolutions using fast Fourier transforms. Timings and accuracies are presented for three large crystalline ionic systems.", "The previously developed particle mesh Ewald method is reformulated in terms of efficient B‐spline interpolation of the structure factors. This reformulation allows a natural extension of the method to potentials of the form 1/r^p with p≥1. Furthermore, efficient calculation of the virial tensor follows. Use of B‐splines in place of Lagrange interpolation leads to analytic gradients as well as a significant improvement in the accuracy. We demonstrate that arbitrary accuracy can be achieved, independent of system size N, at a cost that scales as N log(N).
For biomolecular systems with many thousands of atoms this method permits the use of Ewald summation at a computational cost comparable to that of a simple truncation method of 10 Å or less.", "", "We present an O(N) multigrid-based method for the efficient calculation of the long-range electrostatic forces needed for biomolecular simulations, that is suitable for implementation on massively parallel architectures. Along general lines, the method consists of: (i) a charge assignment scheme, which both interpolates and smoothly assigns the charges onto a grid; (ii) the solution of Poisson's equation on the grid via multigrid methods; and (iii) the back interpolation of the forces and energy from the grid to the particle space. Careful approaches for the charge assignment and the force interpolation, and a Hermitian approximation of Poisson's equation on the grid allow for the generation of the high-accuracy solutions required for high-quality molecular dynamics simulations. Parallel versions of the method scale linearly with the number of particles for a fixed number of processors, and with the number of processors, for a fixed number of particles." ] }
1702.04179
2587804601
Given a pedestrian image as a query, the purpose of person re-identification is to identify the correct match from a large collection of gallery images depicting the same person captured by disjoint camera views. The critical challenge is how to construct a robust yet discriminative feature representation to capture the compounded variations in pedestrian appearance. To this end, deep learning methods have been proposed to extract hierarchical features against extreme variability of appearance. However, existing methods in this category generally neglect efficiency in the matching stage, whereas the search speed of a re-identification system is crucial in real-world applications. In this paper, we present a novel deep hashing framework with Convolutional Neural Networks (CNNs) for fast person re-identification. Technically, we simultaneously learn both CNN features and hash functions to obtain robust yet discriminative features and similarity-preserving hash codes. Thereby, person re-identification can be resolved by efficiently computing and ranking the Hamming distances between images. A structured loss function defined over positive pairs and hard negatives is proposed to formulate a novel optimization problem, so that fast convergence and a more stable optimized solution can be attained. Extensive experiments on the CUHK03 (2014) and Market-1501 (2015) benchmarks show that the proposed deep architecture is effective compared with the state-of-the-art.
In the person re-identification literature, many studies address this challenging problem either by seeking a robust feature representation @cite_17 @cite_18 @cite_10 @cite_25 @cite_48 @cite_3 @cite_2 @cite_52 @cite_49 @cite_20 @cite_31 @cite_19 or by casting it as a metric learning problem, in which more discriminative distance metrics are learned to handle features extracted from person images across camera views @cite_35 @cite_5 @cite_13 @cite_6 @cite_64 @cite_45 . The first line of work seeks features that are robust to challenging factors while preserving identity information. The second stream generally tries to minimize the intra-class distance while maximizing the inter-class distance. Person re-identification can also be approached through an image search pipeline, where a Bag-of-Words model @cite_27 is constructed to represent each pedestrian image and visual matching refinement strategies are applied to improve matching precision. Readers are referred to @cite_0 for a more comprehensive review.
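The fast matching stage that motivates hashing-based re-identification reduces to ranking gallery images by Hamming distance to the query code. A minimal NumPy sketch (codes kept as unpacked 0/1 vectors for clarity; real systems pack bits and use XOR plus popcount, which is what makes the lookup fast):

```python
import numpy as np

def hamming_rank(query_code, gallery_codes):
    """Rank gallery items by Hamming distance to a query hash code.

    query_code:    binary vector of shape (b,)
    gallery_codes: binary matrix of shape (n, b)
    Returns (ranking of gallery indices, per-item Hamming distances).
    """
    # Hamming distance = number of differing bits.
    dists = np.count_nonzero(gallery_codes != query_code, axis=1)
    order = np.argsort(dists, kind="stable")  # best match first
    return order, dists
```

With b-bit codes this costs O(n·b) bit operations per query, independent of the dimensionality of the original CNN features — the efficiency argument the paragraph above appeals to.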
{ "cite_N": [ "@cite_35", "@cite_64", "@cite_3", "@cite_2", "@cite_5", "@cite_10", "@cite_20", "@cite_18", "@cite_48", "@cite_52", "@cite_49", "@cite_17", "@cite_6", "@cite_19", "@cite_27", "@cite_25", "@cite_0", "@cite_45", "@cite_31", "@cite_13" ], "mid": [ "2151873133", "2950596073", "", "2551647111", "2068042582", "", "", "", "", "", "", "1979260620", "166429404", "2046835352", "2204750386", "2098416578", "", "2950240960", "1518138188", "2115669554" ], "abstract": [ "This paper considers the person verification problem in modern surveillance and video retrieval systems. The problem is to identify whether a pair of face or human body images is about the same person, even if the person is not seen before. Traditional methods usually look for a distance (or similarity) measure between images (e.g., by metric learning algorithms), and make decisions based on a fixed threshold. We show that this is nevertheless insufficient and sub-optimal for the verification problem. This paper proposes to learn a decision function for verification that can be viewed as a joint model of a distance metric and a locally adaptive thresholding rule. We further formulate the inference on our decision function as a second-order large-margin regularization problem, and provide an efficient algorithm in its dual from. We evaluate our algorithm on both human body verification and face verification problems. Our method outperforms not only the classical metric learning algorithm including LMNN and ITML, but also the state-of-the-art in the computer vision community.", "Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. 
In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2 , 4.88 , 28.91 , and 31.55 on the four databases, respectively.", "", "Learning hash functions codes for similarity search over multi-view data is attracting increasing attention, where similar hash codes are assigned to the data objects characterizing consistently neighborhood relationship across views. Traditional methods in this category inherently suffer three limitations: 1) they commonly adopt a two-stage scheme where similarity matrix is first constructed, followed by a subsequent hash function learning; 2) these methods are commonly developed on the assumption that data samples with multiple representations are noise-free,which is not practical in real-life applications; and 3) they often incur cumbersome training model caused by the neighborhood graph construction using all N points in the database (O(N)). In this paper, we motivate the problem of jointly and efficiently training the robust hash functions over data objects with multi-feature representations which may be noise corrupted. 
To achieve both the robustness and training efficiency, we propose an approach to effectively and efficiently learning low-rank kernelized11We use kernelized similarity rather than kernel, as it is not a squared symmetric matrix for data-landmark affinity matrix. hash functions shared across views. Specifically, we utilize landmark graphs to construct tractable similarity matrices in multi-views to automatically discover neighborhood structure in the data. To learn robust hash functions, a latent low-rank kernel function is used to construct hash functions in order to accommodate linearly inseparable data. In particular, a latent kernelized similarity matrix is recovered by rank minimization on multiple kernel-based similarity matrices. Extensive experiments on real-world multi-view datasets validate the efficacy of our method in the presence of error corruptions.We use kernelized similarity rather than kernel, as it is not a squared symmetric matrix for data-landmark affinity matrix. A robust hashing method for multi-view data with noise corruptions is presented.It is to jointly learn a low-rank kernelized similarity consensus and hash functions.Approximate landmark graph is employed to make training fast.Extensive experiments are conducted on benchmarks to show the efficacy of our model.", "In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. 
In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art.", "", "", "", "", "", "", "In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "Re-identification of individuals across camera networks with limited or no overlapping fields of view remains challenging in spite of significant research efforts. 
In this paper, we propose the use, and extensively evaluate the performance, of four alternatives for re-ID classification: regularized Pairwise Constrained Component Analysis, kernel Local Fisher Discriminant Analysis, Marginal Fisher Analysis and a ranking ensemble voting scheme, used in conjunction with different sizes of sets of histogram-based features and linear, χ² and RBF-χ² kernels. Comparisons against the state-of-the-art show significant improvements in performance measured both in terms of Cumulative Match Characteristic curves (CMC) and Proportion of Uncertainty Removed (PUR) scores on the challenging VIPeR, iLIDS, CAVIAR and 3DPeS datasets.", "Human eyes can recognize person identities based on some small salient regions. However, such valuable salient information is often hidden when computing similarities of images with existing approaches. Moreover, many existing approaches learn discriminative features and handle drastic viewpoint change in a supervised way and require labeling new training data for a different pair of camera views. In this paper, we propose a novel perspective for person re-identification based on unsupervised salience learning. Distinctive features are extracted without requiring identity labels in the training procedure. First, we apply adjacency constrained patch matching to build dense correspondence between image pairs, which shows effectiveness in handling misalignment caused by large viewpoint and pose variations. Second, we learn human salience in an unsupervised manner. To improve the performance of person re-identification, human salience is incorporated in patch matching to find reliable and discriminative matched patches. The effectiveness of our approach is validated on the widely used VIPeR dataset and ETHZ dataset.", "This paper contributes a new high quality dataset for person re-identification, named \"Market-1501\".
Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in the Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiments, we show that the proposed descriptor yields competitive accuracy on the VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.", "Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive, and explore a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly.
Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.", "", "Most existing person re-identification (re-id) methods focus on learning the optimal distance metrics across camera views. Typically a person's appearance is represented using features of thousands of dimensions, whilst only hundreds of training samples are available due to the difficulties in collecting matched training images. With the number of training samples much smaller than the feature dimension, the existing methods thus face the classic small sample size (SSS) problem and have to resort to dimensionality reduction techniques and or matrix regularisation, which lead to loss of discriminative power. In this work, we propose to overcome the SSS problem in re-id distance metric learning by matching people in a discriminative null space of the training data. In this null space, images of the same person are collapsed into a single point thus minimising the within-class scatter to the extreme and maximising the relative between-class separation simultaneously. Importantly, it has a fixed dimension, a closed-form solution and is very efficient to compute. Extensive experiments carried out on five person re-identification benchmarks including VIPeR, PRID2011, CUHK01, CUHK03 and Market1501 show that such a simple approach beats the state-of-the-art alternatives, often by a big margin.", "Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. 
This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians.", "Metric learning methods, for person re-identification, estimate a scaling for distances in a vector space that is optimized for picking out observations of the same individual. This paper presents a novel approach to the pedestrian re-identification problem that uses metric learning to improve the state-of-the-art performance on standard public datasets. Very high dimensional features are extracted from the source color image. A first processing stage performs unsupervised PCA dimensionality reduction, constrained to maintain the redundancy in color-space representation. A second stage further reduces the dimensionality, using a Local Fisher Discriminant Analysis defined by a training set. A regularization step is introduced to avoid singular matrices during this stage. The experiments conducted on three publicly available datasets confirm that the proposed method outperforms the state-of-the-art performance, including all other known metric learning methods. Furthermore, the method is an effective way to process observations comprising multiple shots, and is non-iterative: the computation times are relatively modest.
Finally, a novel statistic is derived to characterize the Match Characteristic: the normalized entropy reduction can be used to define the 'Proportion of Uncertainty Removed' (PUR). This measure is invariant to test set size and provides an intuitive indication of performance." ] }
1702.04179
2587804601
Given a pedestrian image as a query, the purpose of person re-identification is to identify the correct match from a large collection of gallery images depicting the same person captured by disjoint camera views. The critical challenge is how to construct a robust yet discriminative feature representation to capture the compounded variations in pedestrian appearance. To this end, deep learning methods have been proposed to extract hierarchical features against extreme variability of appearance. However, existing methods in this category generally neglect efficiency in the matching stage, whereas the searching speed of a re-identification system is crucial in real-world applications. In this paper, we present a novel deep hashing framework with Convolutional Neural Networks (CNNs) for fast person re-identification. Technically, we simultaneously learn both CNN features and hash functions to get robust yet discriminative features and similarity-preserving hash codes. Thereby, person re-identification can be resolved by efficiently computing and ranking the Hamming distances between images. A structured loss function defined over positive pairs and hard negatives is proposed to formulate a novel optimization problem so that fast convergence and a more stable optimized solution can be attained. Extensive experiments on two benchmarks, CUHK03 (2014) and Market-1501 (2015), show that the proposed deep architecture is effective compared with the state-of-the-art.
A notable improvement in person re-identification has been achieved by using Convolutional Neural Networks (CNNs) @cite_62 @cite_40 @cite_29 @cite_24 @cite_63 @cite_53 @cite_46 @cite_60 @cite_4 , which can jointly learn a robust yet discriminative feature representation and its corresponding similarity value in an end-to-end fashion. However, existing deep learning methods for person re-identification face a major efficiency challenge: the computational time required to process an input image is very high due to the convolution operations over the entire input through deep networks. Thus, from a pragmatic perspective, an advanced yet fast neural network-based architecture is highly desirable. This motivated us to develop an efficient deep learning model that alleviates the computational burden in person re-identification.
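The "jointly learn features and a similarity value" idea above can be sketched with a toy siamese setup in plain NumPy. The one-layer "network", its dimensions, and the cosine-similarity head are illustrative assumptions, not the architecture of any cited paper:

```python
import numpy as np

def embed(x, W):
    """Toy shared 'network': one linear layer + ReLU (a stand-in for a CNN branch)."""
    return np.maximum(W @ x, 0.0)

def similarity(xa, xb, W):
    """Cosine similarity of the two branch embeddings, as a siamese head would output."""
    fa, fb = embed(xa, W), embed(xb, W)
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12))

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))                     # weights shared by both branches
anchor = rng.standard_normal(64)
positive = anchor + 0.05 * rng.standard_normal(64)    # near-duplicate view, same person
negative = rng.standard_normal(64)                    # unrelated image, different person

s_pos = similarity(anchor, positive, W)
s_neg = similarity(anchor, negative, W)
assert s_pos > s_neg   # the matched pair scores higher than the mismatched pair
```

In a real system the linear layer would be replaced by a trained CNN and the similarity head learned jointly with it, but the input/output contract is the same: a pair of images in, one similarity score out.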
{ "cite_N": [ "@cite_62", "@cite_4", "@cite_60", "@cite_29", "@cite_53", "@cite_24", "@cite_40", "@cite_63", "@cite_46" ], "mid": [ "1982925187", "2625961748", "", "2135442311", "2414767909", "1971955426", "1928419358", "2259687230", "2253171278" ], "abstract": [ "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13,164 images of 1,360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "Person re-identification (re-id) aims to match pedestrians observed by disjoint camera views. It attracts increasing attention in computer vision due to its importance to surveillance systems.
To combat the major challenge of cross-view visual variations, deep embedding approaches are proposed by learning a compact feature space from images such that the Euclidean distances correspond to their cross-view similarity metric. However, the global Euclidean distance cannot faithfully characterize the ideal similarity in a complex visual feature space because features of pedestrian images exhibit unknown distributions due to large variations in poses, illumination and occlusion. Moreover, intra-personal training samples within a local range which are robust to guide deep embedding against uncontrolled variations cannot be captured by a global Euclidean distance. In this paper, we study the problem of person re-id by proposing a novel sampling to mine suitable positives (i.e., intra-class) within a local range to improve the deep embedding in the context of large intra-class variations. Our method is capable of learning a deep similarity metric adaptive to local sample structure by minimizing each sample's local distances while propagating through the relationship between samples to attain the whole intra-class minimization. To this end, a novel objective function is proposed to jointly optimize similarity metric learning, local positive mining and robust deep feature embedding. This attains local discriminations by selecting local-ranged positive samples, and the learned features are robust to dramatic intra-class variations. Experiments on benchmarks show state-of-the-art results achieved by our method.", "", "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a \"siamese\" deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework.
The network has a symmetric structure with two sub-networks which are connected by a cosine layer. Each sub-network includes two convolutional layers and a fully connected layer. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Experiments on VIPeR illustrate the superior performance of our method, and a cross-database experiment also shows its good generalization.", "Person re-identification is to seek a correct match for a person of interest across different camera views among a large number of impostors. It typically involves two procedures of non-linear feature extractions against dramatic appearance changes, and subsequent discriminative analysis in order to reduce intra-personal variations while enlarging inter-personal differences. In this paper, we introduce a hybrid deep architecture which combines Fisher vectors and deep neural networks to learn non-linear transformations of pedestrian images to a deep space where data can be linearly separable. The proposed method starts from Fisher vector encoding, which computes a sequence of local feature extraction, aggregation, and encoding. The resulting Fisher vector outputs are fed into stacked supervised layers to seek a non-linear transformation into a deep space. On top of the deep neural network, Linear Discriminant Analysis (LDA) is reinforced such that linearly separable latent representations can be learned in an end-to-end fashion. By optimizing an objective function modified from LDA, the network is enforced to produce feature distributions which have a low variance within the same class and high variance between classes. The objective is essentially derived from the general LDA eigenvalue problem and allows training the network with Stochastic Gradient Descent and back-propagating LDA gradients to compute Gaussian Mixture Model (GMM) gradients in Fisher vector encoding.
For empirical evaluations, we test our approach on four benchmark data sets in person re-identification (VIPeR, CUHK03, CUHK01, and Market-1501). Extensive experiments on these benchmarks show that our method can achieve state-of-the-art results. Highlights: a hybrid architecture that combines Fisher vectors and deep neural networks; an end-to-end training with linear discriminant analysis as the objective; deep features are linearly separable and class separability is maximally preserved.", "Identifying the same individual across different scenes is an important yet difficult task in intelligent video surveillance. Its main difficulty lies in how to preserve similarity of the same person against large appearance and structure variation while discriminating different individuals. In this paper, we present a scalable distance driven feature learning framework based on the deep neural network for person re-identification, and demonstrate its effectiveness to handle the existing challenges. Specifically, given the training images with the class labels (person IDs), we first produce a large number of triplet units, each of which contains three images, i.e. one person with a matched reference and a mismatched reference. Treating the units as the input, we build the convolutional neural network to generate the layered representations, and follow with the L2 distance metric. By means of parameter optimization, our framework tends to maximize the relative distance between the matched pair and the mismatched pair for each triplet unit. Moreover, a nontrivial issue arising with the framework is that the triplet organization cubically enlarges the number of training triplets, as one image can be involved in several triplet units. To overcome this problem, we develop an effective triplet generation scheme and an optimized gradient descent algorithm, making the computational load mainly depend on the number of original images instead of the number of triplets.
On several challenging databases, our approach achieves very promising results and outperforms other state-of-the-art approaches. Highlights: we present a novel feature learning framework for person re-identification; our framework is based on maximum relative distance comparison; the learning algorithm is scalable to process large amounts of data; we demonstrate superior performance over other state-of-the-art methods.", "In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).", "In this paper, we propose a deep end-to-end neural network to simultaneously learn high-level features and a corresponding similarity metric for person re-identification. The network takes a pair of raw RGB images as input, and outputs a similarity value indicating whether the two input images depict the same person.
A layer computing neighborhood range differences across two input images is employed to capture local relationships between patches. This operation is to seek a robust feature from input images. By increasing the depth to 10 weight layers and using very small (3×3) convolution filters, our architecture achieves a remarkable improvement on the prior-art configurations. Meanwhile, an adaptive Root-Mean-Square (RMSProp) gradient descent algorithm is integrated into our architecture, which is beneficial to deep nets. Our method consistently outperforms the state-of-the-art on two large datasets (CUHK03 and Market-1501), and a medium-sized data set (CUHK01).", "This paper proposes a novel approach to person re-identification, a fundamental task in distributed multi-camera surveillance systems. Although a variety of powerful algorithms have been presented in the past few years, most of them usually focus on designing hand-crafted features and learning metrics either individually or sequentially. Different from previous works, we formulate a unified deep ranking framework that jointly tackles both of these key components to maximize their strengths. We start from the principle that the correct match of the probe image should be positioned in the top rank within the whole gallery set. An effective learning-to-rank algorithm is proposed to minimize the cost corresponding to the ranking disorders of the gallery. The ranking model is solved with a deep convolutional neural network (CNN) that builds the relation between input image pairs and their similarity scores through joint representation learning directly from raw image pixels. The proposed framework allows us to get rid of feature engineering and does not rely on any assumption.
An extensive comparative evaluation is given, demonstrating that our approach significantly outperforms all the state-of-the-art approaches, including both traditional and CNN-based methods on the challenging VIPeR, CUHK-01, and CAVIAR4REID datasets. In addition, our approach has better ability to generalize across datasets without fine-tuning." ] }
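The "cross-input neighborhood difference" layer described above compares each location in one feature map against a small neighborhood around the same location in the other map. A toy NumPy version (single channel, zero padding, and a 3×3 window rather than the paper's exact configuration, purely for illustration):

```python
import numpy as np

def neighborhood_differences(f, g, k=3):
    """K[x, y] = f(x, y) - g(x+dx, y+dy) over a k x k neighborhood of (x, y),
    with zero padding at the borders (single-channel toy version)."""
    h, w = f.shape
    r = k // 2
    gp = np.pad(g, r)                       # zero-pad so the window always fits
    out = np.empty((h, w, k, k))
    for x in range(h):
        for y in range(w):
            out[x, y] = f[x, y] - gp[x:x + k, y:y + k]
    return out

f = np.arange(16, dtype=float).reshape(4, 4)   # feature map from image 1
g = np.ones((4, 4))                            # feature map from image 2
K = neighborhood_differences(f, g)
assert K.shape == (4, 4, 3, 3)
assert K[1, 1, 1, 1] == f[1, 1] - g[1, 1]      # zero offset is a plain difference
```

The resulting 4-D difference map is what subsequent "patch summary" layers would then aggregate; a trained network applies this per channel on learned mid-level features rather than raw values.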
1702.04179
2587804601
Given a pedestrian image as a query, the purpose of person re-identification is to identify the correct match from a large collection of gallery images depicting the same person captured by disjoint camera views. The critical challenge is how to construct a robust yet discriminative feature representation to capture the compounded variations in pedestrian appearance. To this end, deep learning methods have been proposed to extract hierarchical features against extreme variability of appearance. However, existing methods in this category generally neglect efficiency in the matching stage, whereas the searching speed of a re-identification system is crucial in real-world applications. In this paper, we present a novel deep hashing framework with Convolutional Neural Networks (CNNs) for fast person re-identification. Technically, we simultaneously learn both CNN features and hash functions to get robust yet discriminative features and similarity-preserving hash codes. Thereby, person re-identification can be resolved by efficiently computing and ranking the Hamming distances between images. A structured loss function defined over positive pairs and hard negatives is proposed to formulate a novel optimization problem so that fast convergence and a more stable optimized solution can be attained. Extensive experiments on two benchmarks, CUHK03 (2014) and Market-1501 (2015), show that the proposed deep architecture is effective compared with the state-of-the-art.
Hashing is an efficient technique for approximate nearest neighbor search, with low storage cost for the learned hash codes. Learning-based hashing methods can be roughly divided into two categories: unsupervised methods and supervised methods. Unsupervised methods, including Spectral Hashing @cite_14 @cite_21 and Iterative Quantization @cite_61 , use only unlabeled training data to learn hash functions. Supervised methods leverage supervised information to learn compact binary codes. Representative methods include Binary Reconstructive Embedding (BRE) @cite_28 , Minimal Loss Hashing (MLH) @cite_7 , and Supervised Hashing with Kernels (KSH) @cite_33 .
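The retrieval step all of these methods enable, ranking a gallery by Hamming distance to a query code, is cheap to implement. A minimal sketch (the binary codes here are random stand-ins, not outputs of any of the cited methods):

```python
import numpy as np

def hamming_rank(query_bits, gallery_bits):
    """Rank gallery rows by Hamming distance to the query (0/1 bit arrays)."""
    dists = np.count_nonzero(gallery_bits != query_bits, axis=1)
    order = np.argsort(dists, kind="stable")
    return order, dists

rng = np.random.default_rng(1)
gallery = rng.integers(0, 2, size=(1000, 64))   # 1000 gallery codes, 64 bits each
query = gallery[42].copy()
query[:3] ^= 1                                  # flip 3 bits: distance 3 to item 42

order, dists = hamming_rank(query, gallery)
assert dists[42] == 3                           # exactly the 3 flipped bits differ
assert dists[order[0]] == dists.min()           # ranking starts at the nearest code
```

In production systems the codes are packed into machine words and the per-pair cost becomes a handful of XOR/popcount instructions, which is what makes hashing-based re-identification fast at scale.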
{ "cite_N": [ "@cite_61", "@cite_14", "@cite_33", "@cite_7", "@cite_28", "@cite_21" ], "mid": [ "2084363474", "", "1992371516", "", "2164338181", "1969752030" ], "abstract": [ "This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.", "", "Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. 
We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms state-of-the-art methods in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13% to 46%.", "", "Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.", "Hashing has gained considerable attention on large-scale similarity search, due to its enjoyable efficiency and low storage cost. In this paper, we study the problem of learning hash functions in the context of multi-modal data for cross-modal similarity search. Notwithstanding the progress achieved by existing methods, they essentially learn only one common hamming space, where data objects from all modalities are mapped to conduct similarity search. However, such methods are unable to well characterize the flexible and discriminative local (neighborhood) structure in all modalities simultaneously, hindering them from achieving better performance.
Bearing such stand-out limitation, we propose to learn heterogeneous hamming spaces with each preserving the local structure of data objects from an individual modality. Then, a novel method to learning bridging mapping for cross-modal hashing, named LBMCH, is proposed to characterize the cross-modal semantic correspondence by seamlessly connecting these distinct hamming spaces. Meanwhile, the local structure of each data object in a modality is preserved by constructing an anchor based representation, enabling LBMCH to characterize a linear complexity w.r.t the size of training set. The efficacy of LBMCH is experimentally validated against real-world cross-modal datasets." ] }
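On the unsupervised side, the alternating minimization at the heart of Iterative Quantization (summarized in the ITQ abstract above) fits in a few lines of NumPy. This is a simplified sketch on synthetic data, not the reference implementation:

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Iterative Quantization: find a rotation R minimizing ||sign(XR) - XR||_F.
    X is zero-centered, dimensionality-reduced data of shape (n_samples, n_bits)."""
    rng = np.random.default_rng(seed)
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((X.shape[1], X.shape[1])))
    for _ in range(n_iter):
        B = np.sign(X @ R)                 # fix R, update the binary codes
        B[B == 0] = 1
        # Fix B, update R: orthogonal Procrustes problem, solved via SVD.
        U, _, Vt = np.linalg.svd(B.T @ X)
        R = (U @ Vt).T
    B = np.sign(X @ R)
    B[B == 0] = 1
    return B, R

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 16))
X -= X.mean(axis=0)                        # ITQ assumes zero-centered input
B, R = itq(X)
assert np.allclose(R.T @ R, np.eye(16), atol=1e-8)   # R stays orthogonal
```

In the full method, X would be the PCA (unsupervised) or CCA (supervised) projection of image features; the rotation update is exactly the orthogonal Procrustes solution the abstract alludes to.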
1702.04179
2587804601
Given a pedestrian image as a query, the purpose of person re-identification is to identify the correct match from a large collection of gallery images depicting the same person captured by disjoint camera views. The critical challenge is how to construct a robust yet discriminative feature representation to capture the compounded variations in pedestrian appearance. To this end, deep learning methods have been proposed to extract hierarchical features against extreme variability of appearance. However, existing methods in this category generally neglect efficiency in the matching stage, whereas the searching speed of a re-identification system is crucial in real-world applications. In this paper, we present a novel deep hashing framework with Convolutional Neural Networks (CNNs) for fast person re-identification. Technically, we simultaneously learn both CNN features and hash functions to get robust yet discriminative features and similarity-preserving hash codes. Thereby, person re-identification can be resolved by efficiently computing and ranking the Hamming distances between images. A structured loss function defined over positive pairs and hard negatives is proposed to formulate a novel optimization problem so that fast convergence and a more stable optimized solution can be attained. Extensive experiments on two benchmarks, CUHK03 (2014) and Market-1501 (2015), show that the proposed deep architecture is effective compared with the state-of-the-art.
More recently, to generate binary hash codes directly from raw images, deep CNNs have been utilized to train models in an end-to-end manner, where discriminative features and hash functions are simultaneously optimized @cite_32 @cite_1 @cite_12 . However, in the training stage, these methods commonly take mini-batches of randomly sampled triplets as inputs, which may lead to a local optimum or an unstable optimized solution.
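The contrast drawn above, random triplets versus a loss built over positive pairs and hard negatives, can be illustrated with a toy hard-negative hinge objective. This is only a sketch of the "hardest negative per anchor" idea on synthetic embeddings; the structured loss in the paper is more involved:

```python
import numpy as np

def hard_negative_triplet_loss(emb, labels, margin=0.5):
    """For each anchor, take its farthest positive and its hardest (closest)
    negative, and apply a hinge on the resulting distance gap."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise dists
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(emb)):
        pos = same[i].copy()
        pos[i] = False                       # exclude the anchor itself
        neg = ~same[i]
        if not pos.any() or not neg.any():
            continue
        hardest_pos = d[i][pos].max()        # worst intra-class distance
        hardest_neg = d[i][neg].min()        # closest impostor
        losses.append(max(0.0, margin + hardest_pos - hardest_neg))
    return float(np.mean(losses))

rng = np.random.default_rng(7)
centers = rng.standard_normal((3, 8)) * 5.0            # 3 well-separated identities
labels = np.repeat(np.arange(3), 4)                    # 4 samples per identity
emb = centers[labels] + 0.1 * rng.standard_normal((12, 8))
loss = hard_negative_triplet_loss(emb, labels)
assert loss >= 0.0
```

Mining the hardest negative inside each batch, rather than sampling triplets uniformly, is what gives such structured objectives their faster, more stable convergence.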
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_12" ], "mid": [ "2949235290", "1939575207", "1951304353" ], "abstract": [ "With the rapid growth of web images, hashing has received increasing interest in large-scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels has not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, a deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in terms of ranking evaluation metrics when tested on multi-label image datasets.", "Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineered visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. 
In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.", "Extracting informative image features and learning effective approximate hashing functions are two crucial steps in image retrieval. Conventional methods often study these two steps separately, e.g., learning hash functions from a predefined hand-crafted feature space. Meanwhile, the bit lengths of output hashing codes are preset in the most previous methods, neglecting the significance level of different bits and restricting their practical flexibility. To address these issues, we propose a supervised learning framework to generate compact and bit-scalable hashing codes directly from raw images. We pose hashing learning as a problem of regularized similarity learning. In particular, we organize the training images into a batch of triplet samples, each sample containing two images with the same label and one with a different label. With these triplet samples, we maximize the margin between the matched pairs and the mismatched pairs in the Hamming space. In addition, a regularization term is introduced to enforce the adjacency consistency, i.e., images of similar appearances should have similar codes. 
The deep convolutional neural network is utilized to train the model in an end-to-end fashion, where discriminative image features and hash functions are simultaneously optimized. Furthermore, each bit of our hashing codes is unequally weighted, so that we can manipulate the code lengths by truncating the insignificant bits. Our framework outperforms state-of-the-art methods on public benchmarks of similar image search and also achieves promising results in the application of person re-identification in surveillance. It is also shown that the generated bit-scalable hashing codes well preserve the discriminative power with shorter code lengths." ] }
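The Hamming-distance matching step described in the abstracts above is easy to illustrate. The following is a minimal Python sketch under the assumption that a trained network has already produced packed binary codes; the codes and image ids below are toy values, not the output of any actual model:

```python
# Toy illustration of hash-based matching for person re-identification:
# gallery images are ranked by Hamming distance to the query's binary code.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length binary codes packed as ints."""
    return bin(a ^ b).count("1")

def rank_gallery(query_code: int, gallery: dict) -> list:
    """Return (image_id, distance) pairs sorted by ascending Hamming distance."""
    return sorted(((gid, hamming(query_code, code)) for gid, code in gallery.items()),
                  key=lambda pair: pair[1])

if __name__ == "__main__":
    query = 0b10110010
    gallery = {"img_a": 0b10110011, "img_b": 0b01001101, "img_c": 0b10100010}
    print(rank_gallery(query, gallery))
```

Because each comparison is a single XOR plus a popcount, ranking scales to large galleries far more cheaply than comparing real-valued feature vectors, which is the efficiency argument these papers make.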
1702.04263
2590534970
Okapi is a new causally consistent geo-replicated key-value store. Okapi leverages two key design choices to achieve high performance. First, it relies on hybrid logical/physical clocks to achieve low latency even in the presence of clock skew. Second, Okapi achieves higher resource efficiency and better availability, at the expense of a slight increase in update visibility latency. To this end, Okapi implements a new stabilization protocol that uses a combination of vector and scalar clocks and makes a remote update visible when its delivery has been acknowledged by every data center. We evaluate Okapi with different workloads on Amazon AWS, using three geographically distributed regions and 96 nodes. We compare Okapi with two recent approaches to causal consistency, Cure and GentleRain. We show that Okapi delivers up to two orders of magnitude better performance than GentleRain and that Okapi achieves up to 3.5x lower latency and a 60% reduction of the meta-data overhead with respect to Cure.
Dependency tracking. The systems based on logical clocks keep detailed dependency information, encoded as a dependency list @cite_18 @cite_32 @cite_21 @cite_19 or matrix @cite_33 . The techniques proposed to reduce the resulting overhead have downsides like per-update acknowledgement messages among replicas @cite_33 , call-backs to the client @cite_33 , or delaying the visibility of updates also in the local data center @cite_8 @cite_10 . GentleRain and Cure track dependencies at a coarser granularity. GentleRain uses a single timestamp to achieve minimal overhead but incurs high waiting times to serve read-only transactions. Cure uses dependency vectors to avoid this issue but incurs a dependency tracking overhead linear in the number of data centers. Okapi uses dependency vectors too but reduces the meta-data for remote updates at the cost of slightly delaying their visibility at remote sites.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_8", "@cite_21", "@cite_32", "@cite_19", "@cite_10" ], "mid": [ "2161730338", "2112612200", "1925953220", "1981851173", "12688243", "2098618284", "2195205682" ], "abstract": [ "Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an \"always-on\" experience where operations always complete with low latency. Today's systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. In this paper, we identify and define a consistency model---causal consistency with convergent conflict handling, or causal+---that is the strongest achieved under these constraints. We present the design and implementation of COPS, a key-value store that delivers this consistency model across the wide-area. A key contribution of COPS is its scalability, which can enforce causal dependencies between keys stored across an entire cluster, rather than a single server like previous systems. The central approach in COPS is tracking and explicitly checking whether causal dependencies between keys are satisfied in the local cluster before exposing writes. Further, in COPS-GT, we introduce get transactions in order to obtain a consistent view of multiple keys without locking or blocking. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads.", "We propose two protocols that provide scalable causal consistency for both partitioned and replicated data stores using dependency matrices (DM) and physical clocks. 
The DM protocol supports basic read and update operations and uses two-dimensional dependency matrices to track dependencies in a client session. It utilizes the transitivity of causality and sparse matrix encoding to keep dependency metadata small and bounded. The DM-Clock protocol extends the DM protocol to support read-only transactions using loosely synchronized physical clocks. We implement the two protocols in Orbe, a distributed key-value store, and evaluate them experimentally. Orbe scales out well, incurs relatively small overhead over an eventually consistent key-value store, and outperforms an existing system that uses explicit dependency tracking to provide scalable causal consistency.", "It is well known that causal consistency is more expensive to implement than eventual consistency due to its requirement of dependency tracking and checking for causality. To close the performance gap between the two consistency models, we propose a new protocol that implements causal consistency for both partitioned and replicated data stores. Our protocol trades the visibility latency of updates across different client sessions for higher throughput. An update, either from a local client or a remote replica, is only visible to other clients after it is replicated by all replicas. As a result, a read operation never introduces dependencies to its client session. Only update operations introduce dependencies. By exploiting the transitive property of causality and total order update propagation, an update always has at most one dependency. By reducing the number of tracked dependencies and the number of messages for dependency checking down to one, we believe our protocol can provide causal consistency with similar cost to eventual consistency.", "This paper proposes a Geo-distributed key-value datastore, named ChainReaction, that offers causal+ consistency, with high performance, fault-tolerance, and scalability. 
ChainReaction enforces causal+ consistency, which is stronger than eventual consistency, by leveraging a new variant of chain replication. We have experimentally evaluated the benefits of our approach by running the Yahoo! Cloud Serving Benchmark. Experimental results show that ChainReaction has better performance in read-intensive workloads while offering competitive performance for other workloads. We also show that our solution requires less metadata than previous work.", "We present the first scalable, geo-replicated storage system that guarantees low latency, offers a rich data model, and provides \"stronger\" semantics. Namely, all client requests are satisfied in the local datacenter in which they arise; the system efficiently supports useful data model abstractions such as column families and counter columns; and clients can access data in a causally-consistent fashion with read-only and write-only transactional support, even for keys spread across many servers. The primary contributions of this work are enabling scalable causal consistency for the complex column-family data model, as well as novel, non-blocking algorithms for both read-only and write-only transactions. Our evaluation shows that our system, Eiger, achieves low latency (single-ms), has throughput competitive with eventually-consistent and non-transactional Cassandra (less than 7% overhead for one of Facebook's real-world workloads), and scales out to large clusters almost linearly (averaging 96% increases up to 128 server clusters).", "We consider the problem of separating consistency-related safety properties from availability and durability in distributed data stores via the application of a \"bolt-on\" shim layer that upgrades the safety of an underlying general-purpose data store. This shim provides the same consistency guarantees atop a wide range of widely deployed but often inflexible stores. 
As causal consistency is one of the strongest consistency models that remain available during system partitions, we develop a shim layer that upgrades eventually consistent stores to provide convergent causal consistency. Accordingly, we leverage widely deployed eventually consistent infrastructure as a common substrate for providing causal guarantees. We describe algorithms and shim implementations that are suitable for a large class of application-level causality relationships and evaluate our techniques using an existing, production-ready data store and with real-world explicit causality relationships.", "Client-side apps (e.g., mobile or in-browser) need cloud data to be available in a local cache, for both reads and updates. For optimal user experience and developer support, the cache should be consistent and fault-tolerant. In order to scale to high numbers of unreliable and resource-poor clients, and large database, the system needs to use resources sparingly. The SwiftCloud distributed object database is the first to provide fast reads and writes via a causally-consistent client-side local cache backed by the cloud. It is thrifty in resources and scales well, thanks to consistent versioning provided by the cloud, using small and bounded metadata. It remains available during faults, switching to a different data centre when the current one is not responsive, while maintaining its consistency guarantees. This paper presents the SwiftCloud algorithms, design, and experimental evaluation. It shows that client-side apps enjoy the high performance and availability, under the same guarantees as a remote cloud data store, at a small cost." ] }
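The dependency-vector bookkeeping that Cure-style systems (and, per the abstract, Okapi) rely on can be sketched in a few lines of Python. The data-center names, the update format, and the buffer-and-retry policy below are illustrative assumptions for the sketch, not any system's actual protocol:

```python
# Hedged sketch of vector-clock dependency checking for causal consistency:
# a remote update carries a dependency vector, and a replica applies it only
# once its local vector clock dominates that vector (causal readiness).

def deps_satisfied(local_clock: dict, dep_vector: dict) -> bool:
    """True once the local clock has seen every dependency in the vector."""
    return all(local_clock.get(dc, 0) >= ts for dc, ts in dep_vector.items())

def apply_update(local_clock: dict, origin_dc: str, origin_ts: int,
                 dep_vector: dict) -> bool:
    """Apply a remote update if causally ready; return True on success."""
    if not deps_satisfied(local_clock, dep_vector):
        return False          # not ready: buffer the update and retry later
    local_clock[origin_dc] = max(local_clock.get(origin_dc, 0), origin_ts)
    return True

if __name__ == "__main__":
    clock = {"dc1": 3, "dc2": 1}
    print(apply_update(clock, "dc3", 1, {"dc2": 2}))   # blocked: dc2 too old
    print(apply_update(clock, "dc3", 1, {"dc1": 2}))   # applied
```

The vector's length grows with the number of data centers, which is exactly the linear meta-data overhead the related-work discussion attributes to Cure, and what single-timestamp schemes like GentleRain avoid at the price of extra waiting.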
1702.04432
2588668853
The tensor power of the clique on @math vertices (denoted by @math ) is the graph on vertex set @math such that two vertices @math are connected if and only if @math for all @math . Define the density of a subset @math of @math to be @math , and the vertex boundary of a set @math to be the set of vertices incident to some vertex of @math , perhaps including points of @math . We investigate two similar problems on such graphs. First, we study the vertex isoperimetry problem: given a density @math , what is the smallest possible density of the vertex boundary of a subset of @math of density @math ? Let @math be the infimum of these minimum densities as @math . We find a recursive relation that allows one to compute @math in time polynomial in the number of desired bits of precision. Second, we study, given an independent set @math of density @math , how close it is to a maximum-sized independent set @math of density @math . We show that this deviation (measured by @math ) is at most @math as long as @math . This substantially improves on results of Alon, Dinur, Friedgut, and Sudakov (2004) and of Ghandehari and Hatami (2008), which had an @math upper bound. We also show the exponent @math is optimal, assuming @math tends to infinity and @math tends to @math . The methods bear similarity to recent work by Ellis, Keller, and Lifshitz (2016) in the context of Kneser graphs and other settings. The author hopes that these results have potential applications in hardness of approximation, particularly in approximate graph coloring and independent set problems.
A result which also finds a "tight" super-constant exponent @math for independent set stability is proved in some very recent work @cite_24 @cite_10 @cite_4 @cite_6 @cite_18 @cite_16 on Kneser graphs and related structures. (See also @cite_14 and Proposition 4.3 of @cite_8 .) The techniques have high-level similarity to the ones adopted here, particularly in their use of compressions to prove an isoperimetric inequality which they then bootstrap to a combinatorial independent set stability result. (The author became aware of these similar proofs only after writing major portions of the manuscript.)
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_6", "@cite_24", "@cite_16", "@cite_10" ], "mid": [ "2896300682", "2606627981", "2531394623", "2529480917", "2339165222", "2336459687", "2609098951", "2757077271" ], "abstract": [ "A family @math of graphs on a fixed set of @math vertices is called triangle-intersecting if for any @math , the intersection @math contains a triangle. More generally, for a fixed graph @math , a family @math is @math -intersecting if the intersection of any two graphs in @math contains a sub-graph isomorphic to @math . In [D. Ellis, Y. Filmus, and E. Friedgut, Triangle-intersecting families of graphs, J. Eur. Math. Soc. 14 (2012), pp. 841--885], Ellis, Filmus and Friedgut proved a 36-year old conjecture of Simonovits and Sós stating that the maximal size of a triangle-intersecting family is @math . Furthermore, they proved a @math -biased generalization, stating that for any @math , we have @math , where @math is the probability that the random graph @math belongs to @math . In the same paper, they conjectured that the assertion of their biased theorem holds also for @math , and more generally, that for any non- @math -colorable graph @math and any @math -intersecting family @math , we have @math for all @math . In this note we construct, for any fixed @math and any @math , an @math -intersecting family @math of graphs such that @math , where @math depends only on @math and @math , thus disproving both conjectures.", "A family of sets is said to be symmetric if its automorphism group is transitive, and intersecting if any two sets in the family have nonempty intersection. Our purpose here is to study the following question: for @math with @math , how large can a symmetric intersecting family of @math -element subsets of @math be? As a first step towards a complete answer, we prove that such a family has size at most @math , where @math is a universal constant. 
We also describe various combinatorial and algebraic approaches to constructing such families.", "A family of sets is said to be intersecting if any two sets in the family have nonempty intersection. In 1973, Erdős raised the problem of determining the maximum possible size of a union of @math different intersecting families of @math -element subsets of an @math -element set, for each triple of integers @math . We make progress on this problem, proving that for any fixed integer @math and for any @math , if @math is an @math -element set, and @math , where each @math is an intersecting family of @math -element subsets of @math , then @math , with equality only if @math for some @math with @math . This is best possible up to the size of the @math term, and improves a 1987 result of Frankl and Füredi, who obtained the same conclusion under the stronger hypothesis @math , in the case @math . Our proof utilises an isoperimetric, influence-based method recently developed by Keller and the authors.", "The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of a @math -uniform @math -intersecting family on @math points, and describes all optimal families. We extend this theorem to several other settings: the weighted case, the case of infinitely many points, and the Hamming scheme. The weighted Ahlswede-Khachatrian theorem gives the maximal @math measure of a @math -intersecting family on @math points, where @math . As has been observed by Ahlswede and Khachatrian and by Dinur and Safra, this theorem can be derived from the classical one by a simple reduction. However, this reduction fails to identify the optimal families, and only works for @math , using a different technique of Ahlswede and Khachatrian (the case @math is Katona's intersection theorem). We then extend the weighted Ahlswede-Khachatrian theorem to the case of infinitely many points. 
The Ahlswede-Khachatrian theorem on the Hamming scheme gives the maximum cardinality of a subset of @math in which any two elements @math have @math positions @math such that @math . We show that this case corresponds to @math with @math , extending work of Ahlswede and Khachatrian, who considered the case @math . We also determine the maximum cardinality families. We obtain similar results for subsets of @math , though in this case we are not able to identify all maximum cardinality families.", "A set family F is said to be t-intersecting if any two sets in F share at least t elements. The Complete Intersection Theorem of Ahlswede and Khachatrian (1997) determines the maximal size f(n,k,t) of a t-intersecting family of k-element subsets of {1,2,...,n}, and gives a full characterisation of the extremal families. In this paper, we prove the following 'stability version' of the theorem: if k/n is bounded away from 0 and 1/2, and F is a t-intersecting family of k-element subsets of {1,2,...,n} such that @math , then there exists an extremal family G such that @math . For fixed t, this assertion is tight up to a constant factor. This proves a conjecture of Friedgut from 2008. Our proof combines classical shifting arguments with a 'bootstrapping' method based upon an isoperimetric argument.", "Erdős-Ko-Rado (EKR) type theorems yield upper bounds on the sizes of families of sets, subject to various intersection requirements on the sets in the family. In this paper, we present an approach to obtaining stability versions of EKR-type theorems, via isoperimetric inequalities for subsets of the hypercube. 
Our approach is rather general, and allows the leveraging of a wide variety of exact EKR-type results into strong stability versions of these results, without going into the proofs of the original results. We use this approach to obtain tight stability versions of the EKR theorem itself and of the Ahlswede-Khachatrian theorem on @math -intersecting families of @math -element subsets of @math (for @math ), and to show that, somewhat surprisingly, all these results hold when the intersection requirement is replaced by a much weaker requirement. Other examples include stability versions of Frankl's recent result on the Erdős matching conjecture, the Ellis-Filmus-Friedgut proof of the Simonovits-Sós conjecture, and various EKR-type results on @math -wise (cross)- @math -intersecting families.", "The 'full' edge isoperimetric inequality for the discrete cube (due to Harper, Bernstein, Lindsay and Hart) specifies the minimum size of the edge boundary @math of a set @math , as a function of @math . A weaker (but more widely-used) lower bound is @math , where equality holds iff @math is a subcube. In 2011, the first author obtained a sharp 'stability' version of the latter result, proving that if @math , then there exists a subcube @math such that @math . The 'weak' version of the edge isoperimetric inequality has the following well-known generalization for the '@math -biased' measure @math on the discrete cube: if @math , or if @math and @math is monotone increasing, then @math . In this paper, we prove a sharp stability version of the latter result, which generalizes the aforementioned result of the first author. Namely, we prove that if @math , then there exists a subcube @math such that @math , where @math . This result is a central component in recent work of the authors proving sharp stability versions of a number of Erdős-Ko-Rado type theorems in extremal combinatorics, including the seminal 'complete intersection theorem' of Ahlswede and Khachatrian. 
In addition, we prove a biased-measure analogue of the 'full' edge isoperimetric inequality, for monotone increasing sets, and we observe that such an analogue does not hold for arbitrary sets, hence answering a question of Kalai. We use this result to give a new proof of the 'full' edge isoperimetric inequality, one relying on the Kruskal-Katona theorem.", "The edge isoperimetric inequality in the discrete cube specifies, for each pair of integers @math and @math , the minimum size @math of the edge boundary of an @math -element subset of @math ; the extremal families (up to automorphisms of the discrete cube) are initial segments of the lexicographic ordering on @math . We show that for any @math -element subset @math and any integer @math , if the edge boundary of @math has size at most @math , then there exists an extremal family @math such that @math , where @math is an absolute constant. This is best-possible, up to the value of @math . Our result can be seen as a 'stability' version of the edge isoperimetric inequality in the discrete cube, and as a discrete analogue of the seminal stability result of Fusco, Maggi and Pratelli concerning the isoperimetric inequality in Euclidean space." ] }
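The vertex-boundary notion used in the isoperimetry abstract of this record can be checked by brute force on tiny instances. This Python sketch assumes the tensor-power adjacency rule stated in the abstract (two tuples are adjacent iff they differ in every coordinate); the sizes n and d are toy values:

```python
# Brute-force vertex boundary in the tensor power of the clique K_n:
# vertices are tuples in {0,...,n-1}^d, and u, v are adjacent iff
# u[i] != v[i] for every coordinate i.

from itertools import product

def vertex_boundary(S, n, d):
    """Vertices incident to some vertex of S (may include points of S)."""
    S = set(S)
    return {v for v in product(range(n), repeat=d)
            if any(all(v[i] != u[i] for i in range(d)) for u in S)}

def density(A, n, d):
    """Fraction of the n**d vertices contained in A."""
    return len(A) / n ** d

if __name__ == "__main__":
    n, d = 3, 2
    S = {(0, 0)}                      # a single vertex, density 1/9
    B = vertex_boundary(S, n, d)
    print(density(S, n, d), density(B, n, d))
```

Note that a single vertex is not in its own boundary here, since it differs from itself in no coordinate; the boundary of {(0, 0)} in this toy instance is the four tuples avoiding 0 in both coordinates.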