aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1701.02088 | 2572947299 | This paper investigates the achievable rates of an additive white Gaussian noise (AWGN) energy-harvesting (EH) channel with an infinite battery. The EH process is characterized by a sequence of blocks of harvested energy, which is known causally at the source. The harvested energy remains constant within a block while the harvested energy across different blocks is characterized by a sequence of independent and identically distributed (i.i.d.) random variables. The blocks have length @math , which can be interpreted as the coherence time of the energy arrival process. If @math is a constant or grows sublinearly in the blocklength @math , we fully characterize the first-order term in the asymptotic expansion of the maximum transmission rate subject to a fixed tolerable error probability @math . The first-order term is known as the @math -capacity. In addition, we obtain lower and upper bounds on the second-order term in the asymptotic expansion, which reveal that the second order term scales as @math for any @math less than @math . The lower bound is obtained through analyzing the save-and-transmit strategy. If @math grows linearly in @math , we obtain lower and upper bounds on the @math -capacity, which coincide whenever the cumulative distribution function (cdf) of the EH random variable is continuous and strictly increasing. In order to achieve the lower bound, we have proposed a novel adaptive save-and-transmit strategy, which chooses different save-and-transmit codes across different blocks according to the energy variation across the blocks. | For a fixed tolerable error probability @math , @cite_10 recently performed a finite blocklength analysis of save-and-transmit schemes proposed in @cite_8 and obtained a non-asymptotic achievable rate for the AWGN channel with an i.i.d. EH process. The first-, second- and third-order terms of the non-asymptotic achievable rate presented in [Th. 
1] of @cite_10 are equal to the capacity, @math and @math respectively, where @math and @math are some positive constants that do not depend on @math and @math . Subsequently, Shenoy and Sharma @cite_5 refined the analysis in @cite_10 and improved the second-order term to @math where @math is some positive constant that does not depend on @math and @math . This paper further improves the second-order term to @math for any @math where @math is some positive constant that does not depend on @math and @math (see Remark ). The aforementioned improvements are due to better analyses of the "energy outage" probability for the same save-and-transmit strategy, where an "energy outage" occurs when the source cannot output the desired codeword due to energy shortage. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_8"
],
"mid": [
"2038920680",
"1751436528",
"2088506000"
],
"abstract": [
"We study the performance limits of state-dependent discrete memoryless channels with a discrete state available at both the encoder and the decoder. We establish the epsilon-capacity as well as necessary and sufficient conditions for the strong converse property for such channels when the sequence of channel states is not necessarily stationary, memoryless or ergodic. We then seek a finer characterization of these capacities in terms of second-order coding rates. The general results are supplemented by several examples including i.i.d. and Markov states and mixed channels.",
"This paper investigates the information-theoretic limits of energy-harvesting (EH) channels in the finite blocklength regime. The EH process is characterized by a sequence of i.i.d. random variables with finite variances. We use the save-and-transmit strategy proposed by Ozel and Ulukus (2012) together with Shannon’s non-asymptotic achievability bound to obtain lower bounds on the achievable rates for both additive white Gaussian noise channels and discrete memoryless channels under EH constraints. The first-order terms of the lower bounds of the achievable rates are equal to @math and the second-order (backoff from capacity) terms are proportional to @math , where @math denotes the blocklength and @math denotes the capacity of the EH channel, which is the same as the capacity without the EH constraints. The constant of proportionality of the backoff term is found and qualitative interpretations are provided.",
"In energy harvesting communication systems, an exogenous recharge process supplies energy necessary for data transmission and the arriving energy can be buffered in a battery before consumption. We determine the information-theoretic capacity of the classical additive white Gaussian noise (AWGN) channel with an energy harvesting transmitter with an unlimited sized battery. As the energy arrives randomly and can be saved in the battery, codewords must obey cumulative stochastic energy constraints. We show that the capacity of the AWGN channel with such stochastic channel input constraints is equal to the capacity with an average power constraint equal to the average recharge rate. We provide two capacity achieving schemes: save-and-transmit and best-effort-transmit. In the save-and-transmit scheme, the transmitter collects energy in a saving phase of proper duration that guarantees that there will be no energy shortages during the transmission of code symbols. In the best-effort-transmit scheme, the transmission starts right away without an initial saving period, and the transmitter sends a code symbol if there is sufficient energy in the battery, and a zero symbol otherwise. Finally, we consider a system in which the average recharge rate is time varying in a larger time scale and derive the optimal offline power policy that maximizes the average throughput, by using majorization theory."
]
} |
1701.01821 | 2950661620 | We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict the long-term 3D motions. To reduce the complexity of the learning framework, we propose to describe the motion as a sequence of atomic 3D flows computed with RGB-D modality. We use a Recurrent Neural Network based Encoder-Decoder framework to predict these sequences of flows. We argue that in order for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatial-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets such as NTU RGB+D and MSR Daily Activity 3D. Our framework is generic to any input modality, i.e., RGB, Depth, and RGB-D videos. | In the RGB domain, unsupervised learning of visual representations has shown usefulness for various supervised tasks such as pedestrian detection and object detection @cite_23 @cite_48 . To exploit temporal structures, researchers have started focusing on learning visual representations using RGB videos. Early works such as @cite_33 focused on inclusion of constraints via video to autoencoder framework. The most common constraint is enforcing learned representations to be temporally smooth @cite_33 . More recently, a stream of reconstruction-based models has been proposed. @cite_1 proposed a generative model that uses a recurrent neural network to predict the next frame or interpolate between frames. This was extended by @cite_38 where they utilized a LSTM Encoder-Decoder framework to reconstruct current frame or predict future frames. Another line of work @cite_27 uses video data to mine patches which belong to the same object to learn representations useful for distinguishing objects. 
@cite_11 presented an approach to learn visual representations with an unsupervised sequential verification task, and showed performance gains on supervised tasks such as activity recognition and pose estimation. A common problem with these learned representations is that they mostly capture semantic features already obtainable from ImageNet, or only short-range activities, while neglecting long-term temporal features. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_48",
"@cite_1",
"@cite_27",
"@cite_23",
"@cite_11"
],
"mid": [
"2952453038",
"",
"",
"1568514080",
"219040644",
"2950187998",
"2949966521"
],
"abstract": [
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"",
"",
"We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"Pedestrian detection is a problem of considerable practical interest. Adding to the list of successful applications of deep learning methods to vision, we report state-of-the-art and competitive results on all major pedestrian datasets with a convolutional network model. The model uses a few new twists, such as multi-stage features, connections that skip layers to integrate global shape information with local distinctive motif information, and an unsupervised method based on convolutional sparse coding to pre-train the filters at each stage."
]
} |
1701.01821 | 2950661620 | We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict the long-term 3D motions. To reduce the complexity of the learning framework, we propose to describe the motion as a sequence of atomic 3D flows computed with RGB-D modality. We use a Recurrent Neural Network based Encoder-Decoder framework to predict these sequences of flows. We argue that in order for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatial-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets such as NTU RGB+D and MSR Daily Activity 3D. Our framework is generic to any input modality, i.e., RGB, Depth, and RGB-D videos. | The past few years have seen great progress on activity recognition on short clips @cite_19 @cite_24 @cite_53 @cite_2 @cite_10 . These works can be roughly divided into two categories. The first category focuses on handcrafted local features and Bag of Visual Words (BoVW) representations; the most successful example extracts improved trajectory features @cite_45 and employs a Fisher vector representation @cite_17 . The second category utilizes deep convolutional neural networks (ConvNets) to learn video representations from raw data (e.g., RGB images or optical flow fields) and train a recognition system in an end-to-end manner. The most competitive deep learning models are the two-stream ConvNets @cite_35 and their successors @cite_29 @cite_22 , which combine semantic features extracted by ConvNets with traditional optical flow that captures motion. However, unlike in image classification, the benefit of using deep neural networks over traditional handcrafted features is not very evident.
This is potentially because supervised training of deep networks requires a lot of data, whilst the current RGB activity recognition datasets are still too small. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_53",
"@cite_29",
"@cite_24",
"@cite_19",
"@cite_45",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"1944615693",
"2952186347",
"787785461",
"",
"2951893483",
"2105101328",
"",
"",
""
],
"abstract": [
"",
"Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMD-B51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Deep convolutional networks have achieved great success for object recognition in still images. However, for action recognition in videos, the improvement of deep convolutional networks is not so evident. We argue that there are two reasons that could probably explain this result. First the current network architectures (e.g. Two-stream ConvNets) are relatively shallow compared with those very deep models in image domain (e.g. VGGNet, GoogLeNet), and therefore their modeling capacity is constrained by their depth. Second, probably more importantly, the training dataset of action recognition is extremely small compared with the ImageNet dataset, and thus it will be easy to over-fit on the training dataset. To address these issues, this report presents very deep two-stream ConvNets for action recognition, by adapting recent very deep architectures into video domain. However, this extension is not easy as the size of action recognition is quite small. We design several good practices for the training of very deep two-stream ConvNets, namely (i) pre-training for both spatial and temporal nets, (ii) smaller learning rates, (iii) more data augmentation techniques, (iv) high drop out ratio. Meanwhile, we extend the Caffe toolbox into Multi-GPU implementation with high computational efficiency and low memory consumption. We verify the performance of very deep two-stream ConvNets on the dataset of UCF101 and it achieves the recognition accuracy of @math .",
"",
"Most state-of-the-art action feature extractors involve differential operators, which act as highpass filters and tend to attenuate low frequency action information. This attenuation introduces bias to the resulting features and generates ill-conditioned feature matrices. The Gaussian Pyramid has been used as a feature enhancing technique that encodes scale-invariant characteristics into the feature space in an attempt to deal with this attenuation. However, at the core of the Gaussian Pyramid is a convolutional smoothing operation, which makes it incapable of generating new features at coarse scales. In order to address this problem, we propose a novel feature enhancing technique called Multi-skIp Feature Stacking (MIFS), which stacks features extracted using a family of differential filters parameterized with multiple time skips and encodes shift-invariance into the frequency space. MIFS compensates for information lost from using differential operators by recapturing information at coarse scales. This recaptured information allows us to match actions at different speeds and ranges of motion. We prove that MIFS enhances the learnability of differential-based features exponentially. The resulting feature matrices from MIFS have much smaller conditional numbers and variances than those from conventional methods. Experimental results show significantly improved performance on challenging action recognition and event detection tasks. Specifically, our method exceeds the state-of-the-arts on Hollywood2, UCF101 and UCF50 datasets and is comparable to state-of-the-arts on HMDB51 and Olympics Sports datasets. MIFS can also be used as a speedup strategy for feature extraction with minimal or no accuracy cost.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"",
"",
""
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | Previous work in autonomous excavation @cite_3 has been limited to a single robot with separate loading/unloading vehicles. Digging is performed using hand-coded scripts that simplify repetitive excavation and truck-loading cycles.
These controllers are used to position and unload an excavator bucket relative to a dump truck using a suite of sensors onboard the vehicles. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2153318775"
],
"abstract": [
"Excavators are used for the rapid removal of soil and other materials in mines, quarries, and construction sites. The automation of these machines offers promise for increasing productivity and improving safety. To date, most research in this area has focused on selected parts of the problem. In this paper we present a system that completely automates the truck loading task. The excavator uses two scanning laser rangefinders to recognize and localize the truck, measure the soil face, and detect obstacles. The excavator's software decides where to dig in the soil, where to dump in the truck, and how to quickly move between these points while detecting and stopping for obstacles. The system was fully implemented and was demonstrated to load trucks as fast as human operators."
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | These scripts are developed with input from an expert human operator and model vehicle specific limitations such as load handling capacity and latency. These systems incorporate adaptive coarse and refined planners to sequence digging operations @cite_44 . 
Other work uses a coarse and refined planner with a neural network that learns soil conditions and excavator capabilities during operation @cite_34 . Such systems are comparable in efficiency to human operators. Other approaches are devoted to modeling the kinematics and/or dynamics of the excavation vehicles and simulating their performance @cite_12 . These techniques are designed for specific vehicle platforms and do not include scripts for every possible scenario in the task space, thus requiring close monitoring and intervention by a human operator. This makes the approach unsuitable for fully autonomous operation of multiple robots on the Moon. | {
"cite_N": [
"@cite_44",
"@cite_34",
"@cite_12"
],
"mid": [
"2143438404",
"1852251700",
"2111688374"
],
"abstract": [
"Presents an approach for real time planning and execution of the motions of complicated robotic systems. The approach is motivated by the observation that a robot's task can be described as a series of simple steps, or a script. The script is a general template which encodes knowledge for a class of tasks and is fitted to a specific instance of a task. The script receives information about its environment in the form of parameters, which it uses to bind variables in the template and allows it to deal with the current task conditions. Changes or variations in the robot's environment can be easily handled with this parameterized script approach. New tasks for the robot to perform can be added in the form of subscripts, which could handle exceptional cases. We apply this approach to the task of autonomous excavation, and demonstrate its validity on an actual hydraulic excavator. We obtain good results, with the autonomous system approaching the performance of an expert human operator.",
"We present a composite forward model of the mechanics of an excavator backhoe digging in soil. This model is used to predict the trajectories developed by a closed-loop force based control scheme given initial conditions, some of which can be controlled (excavator control parameters), some directly measured (shape of the terrain), and some estimated (soil properties). Since soil conditions can vary significantly, it is necessary that soil properties be estimated online. Our models are used to both estimate soil properties and predict contact forces between the excavator and the terrain. In a large set of experiments we have conducted, we find that these models are accurate to within approximately 20 and run about 10 times faster than real-time. In this paper we motivate the development of these models and discuss experimental data from our testbed.",
"This paper describes automation of the digging cycle of a mining rope shovel which considers autonomous dipper (bucket) filling and determining methods to detect when to disengage the dipper from the bank. Novel techniques to overcome dipper stall and the online estimation of dipper “fullness” are described with in-field experimental results of laser DTM generation, machine automation and digging using a 1 7th scale model rope shovel presented."
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | Control systems such as LUCIE are more sophisticated and incorporate long-term planning @cite_21 . Apart from identifying and automating excavation cycles, the system incorporates a whole sequence of intermediate goals that need to be achieved to complete a trench digging task. 
However, the system lacks autonomy because the task is decomposed and prioritized by a human operator. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1479682106"
],
"abstract": [
"The excavation of foundations, general earthworks and earth removal tasks are activities which involve the machine operator in a series of repetitive operations, suggesting opportunities for the automation through the introduction of robotic technologies with subsequent improvements in machine utilisation and throughput. The automation of the earth removal process is also likely to provide a number of other benefits such as a reduced dependence on operator skills and a lower operator work load, both of which might be expected to contribute to improvements in quality and, in particular, the removal of the need for a local operator when working in hazardous environments. The Lancaster University Computerised Intelligent Excavator or LUCIE has demonstrated the achievement of automated and robotic excavation through the implementation of an integrated, real-time, artificial intelligence based control system utilising a novel form of motion control strategy for movement of the excavator bucket through ground. Having its origins in the systematic observation of a range of machine operators of differing levels of expertise, the control strategy as evolved enables the autonomous excavation of a high quality rectangular trench in a wide variety of types and conditions of ground and the autonomous removal of obstacles such as boulders along the line of that trench. The paper considers the development of the LUCIE programme since its inception and sets out in terms of the machine kinematics the evolution and development of the real-time control strategy from an implementation on a one-fifth scale model of a back-hoe arm to a full working system on a JCB801 360° tracked excavator."
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | Other techniques closely mimic insects in their ability to build roadways and ramps using amorphous construction. A laboratory robot is used to heat, melt and deposit foam to produce a ramp and other complex structures @cite_31 . More recent work by @cite_42 has performed simulations of multiple robots to perform excavation on Mars.
The intent is to set up a permanent human base and utilize Martian resources for construction and in-situ resource utilization. The work has focused on the human-assisted high-level planning required to locate a base and key facilities and the process of resource extraction. Human assistance is utilized in planning the high-level tasks and giving execution orders to the multiple robots. It is presumed human astronauts are already located on Mars and can perform tele-operation on site (from a safe distance). Recent work by @cite_26 has shown bucket wheels to be the most effective excavation platform for low gravity on the Moon. As will be shown later, our results also show bucket wheels to be most efficient for excavation. | {
"cite_N": [
"@cite_31",
"@cite_42",
"@cite_26"
],
"mid": [
"2115536039",
"",
"2344081731"
],
"abstract": [
"We present a model of construction using iterative amorphous depositions and give a distributed algorithm to reliably build ramps in unstructured environments. The relatively simple local strategy for interacting with irregularly shaped, partially built structures gives rise robust adaptive global properties. We illustrate the algorithm in both the single robot and multi-robot case via simulation and describe how to solve key technical challenges to implementing this algorithm via a robotic prototype.",
"",
"Planetary excavator robots face unique and extreme engineering constraints relative to terrestrial counterparts. In space missions mass is always at a premium because it is the main driver behind launch costs. Lightweight operation, due to low mass and reduced gravity, hinders excavation and mobility by reducing the forces a robot can effect on its environment. This work shows that there is a quantifiable, non-dimensional threshold that distinguishes the regimes of lightweight and nominal excavation. This threshold is crossed at lower weights for continuous excavators e.g. bucket-wheels than discrete scrapers. This research introduces novel experimentation that for the first time subjects excavators to gravity offload a cable pulls up on the robot with five-sixths its weight, to simulate lunar gravity while they dig. A 300 kg excavator robot offloaded to 1 6 g successfully collects 0.5 kg s using a bucket-wheel, with no discernible effect on mobility. For a discrete scraper of the same weight, production rapidly declines as rising excavation resistance stalls the robot. These experiments suggest caution in interpreting low-gravity performance predictions based solely on testing in Earth gravity. Experiments were conducted in GRC-1, a washed industrial silica-sand devoid of agglutinates and of the sub-75-micron basaltic fines that make up 40 of lunar regolith. The important dangers related to dust are thus not directly addressed. The achieved densities for experimentation are 1640 kg m3 very loose loose and 1720 kg m3 medium dense. This work develops a novel robotic bucket-wheel excavator, featuring unique direct transfer from bucket-wheel to dump bed as a solution to material transfer difficulties identified in past literature."
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | The construction of a human habitat on the moon will require multiple excavation robots working towards a given goal. Collective robotics is well suited because it incorporates multiple autonomous robots that work cooperatively towards a global goal. 
Some collective robotic controllers mimic mechanisms used by social insects to perform group coordination. These include the use of self-organization, templates and stigmergy. Self-organization describes how macroscopic behaviors emerge solely from numerous interactions among lower-level components of the system that use only local information @cite_35 and is the basis for the bio-inspired control approach presented here. Templates are environmental features perceptible to individuals within the collective @cite_7 . In robotic applications, template-based approaches include the use of light fields to direct the creation of linear @cite_13 and circular walls @cite_27 and planar annulus structures @cite_29 . Spatiotemporally varying templates (e.g., adjusting or moving a light gradient over time) have been used to produce more complex structures @cite_23 . | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_29",
"@cite_27",
"@cite_23",
"@cite_13"
],
"mid": [
"2024189170",
"",
"2007303999",
"2111104611",
"2128465422",
""
],
"abstract": [
"Self-organization was originally introduced in the context of physics and chemistry to describe how microscopic processes give rise to macroscopic structures in out-of-equilibrium systems. Recent research, that extends this concept to ethology, suggests that it provides a concise description of a wide rage of collective phenomena in animals, especially in social insects. This description does not rely on individual complexity to account for complex spatiotemporal features which emerge at the colony level, but rather assumes that interactions among simple individuals can produce highly structured collective behaviors.",
"",
"This study shows that a task as complicated as multi-object ‘ant-like annular sorting’ can be accomplished with ‘minimalist’ solutions employing simple mechanisms and minimal hardware. It provides an alternative to ‘patch sorting’ for multi-object sorting. Three different mechanisms, based on hypotheses about the behaviour of Leptothorax ants are investigated and comparisons are made. Mechanism I employs a simple clustering algorithm, with objects of different sizes. The mechanism explores the idea that it is the size difference of the object that promotes segregation. Mechanism II is an extension to our earlier two-object segregation mechanism. We test the ability of this mechanism to segregate an increased number of object types. Mechanism III uses a combined leaky integrator, which allows a greater segregation of object types while retaining the compactness of the structure. Its performance is improved by optimizing the mechanism's parameters using a genetic algorithm. We compare the three mechanisms in terms of sorting performance. Comparisons between the results of these sorting mechanisms and the behaviour of ants should facilitate further insights into both biological and robotic research and make a contribution to the further development of swarm robotics.",
"We study the problem of construction by autonomous mobile robots focusing on the coordination strategy employed by the robots to solve a simple construction problem efficiently. In particular we address the problem of constructing a linear 2D structure in a planar bounded environment. A \"minimalist\" single-robot solution to the problem is given, as well as two multi-robot solutions, which are natural extensions to the single-robot approach, with varying degrees of inter-robot communication. Results show that with minimal inter-robot communication (1 bit of state), there is a significant improvement in the system performance. This improvement is invariant with respect to the size of the environment.",
"In this paper, a distributed mechanism used by a robotic swarm to facilitate the building of a loose wall structure is discussed and results presented. The ability of a robotic swarm to utilise a spatio-temporal varying template in construction tasks is verified. The work is seen as a step towards the development of a generalised methodology for collective construction building tasks and establishes a means for the building of more complex structures.",
""
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | The second limitation is that these approaches can suffer from an emergent feature called antagonism @cite_9 , when multiple agents trying to perform the same task interfere with one another, reducing the overall efficiency of the group or, worse, resulting in gridlock. This limits the scalability of the solution in the number of robots and the size of the task area. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1518886025"
],
"abstract": [
"This paper fits in the framework of Distributed Artificial Intelligence andmore specifically in the field of Multi-Agent Systems (MAS). We show why antagonism should be considered in MAS made up of autonomous agents and how these systems can even turn antagonism to advantage. We present a case study of a MAS aimed at exploring the possibilities of antagonism, oriented towards mobile robotics applications. Experimental results relative to a preliminary study of antagonism, namely implicit cooperation, illustrating the complexity and the power of this approach, are reported."
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | A limitation with look-up tables is that they have poor sensor scalability, as the size of the look-up table is exponential in the number of inputs. Look-up tables also have poor generalization. Neural network controllers generalize better since they effectively encode a compressed representation of the table.
Neural networks have been successfully applied to multirobot systems and have been used to build walls, corridors, and briar patches @cite_16 and for tasks that require communication and coordination @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_16"
],
"mid": [
"2010177086",
"1925480412"
],
"abstract": [
"In social insects, both self-organisation and communication play a crucial role for the accomplishment of many tasks at a collective level. Communication is performed with different modalities, which can be roughly classified into three classes: indirect (stigmergic) communication, direct interactions and direct communication. The use of stigmergic communication is predominant in social insects (e.g. the pheromone trails in ants), where, however, direct interactions (e.g. antennation in ants) and direct communication (e.g. the waggle dance in honey bees) can also be observed. Taking inspiration from insect societies, we present an experimental study of self-organising behaviours for a group of robots, which exploit communication to coordinate their activities. In particular, the robots are placed in an arena presenting holes and open borders, which they should avoid while moving coordinately. Artificial evolution is responsible for the synthesis in a simulated environment of the robot’s neural controllers, which are subsequently tested on physical robots. We study different communication strategies among the robots: no direct communication, handcrafted signalling and a completely evolved approach. We show that the latter is the most efficient, suggesting that artificial evolution can produce behaviours that are more adaptive than those obtained with conventional design methodologies. Moreover, we show that the evolved controllers produce a self-organising system that is robust enough to be tested on physical robots, notwithstanding the huge gap between simulation and reality.",
"Describes robust neurocontrollers for groups of agents that perform construction tasks. They enable agents to balance multiple goals, perform sequences of actions and survive while building walls, corridors, intersections, and briar patches."
]
} |
1701.01657 | 2576814271 | In this paper, a control approach called Artificial Neural Tissue (ANT) is applied to multirobot excavation for lunar base preparation tasks including clearing landing pads and burying of habitat modules. We show for the first time, a team of autonomous robots excavating a terrain to match a given 3D blueprint. Constructing mounds around landing pads will provide physical shielding from debris during launch landing. Burying a human habitat modules under 0.5 m of lunar regolith is expected to provide both radiation shielding and maintain temperatures of -25 @math C. This minimizes base life-support complexity and reduces launch mass. ANT is compelling for a lunar mission because it doesn't require a team of astronauts for excavation and it requires minimal supervision. The robot teams are shown to autonomously interpret blueprints, excavate and prepare sites for a lunar base. Because little pre-programmed knowledge is provided, the controllers discover creative techniques. ANT evolves techniques such as slot-dozing that would otherwise require excavation experts. This is critical in making an excavation mission feasible when it is prohibitively expensive to send astronauts. The controllers evolve elaborate negotiation behaviors to work in close quarters. These and other techniques such as concurrent evolution of the controller and team size are shown to tackle problem of antagonism, when too many robots interfere reducing the overall efficiency or worse, resulting in gridlock. While many challenges remain with this technology our work shows a compelling pathway for field testing this approach. | Neural network controllers have also been used to solve the @math tiling task @cite_40 . Going from the @math to the @math tiling formation task results in a search space of @math to @math , respectively. This very large increase in search space prevents a lookup table from finding a suitable solution.
However, because a neural network can generalize better, it finds a desired solution. | {
"cite_N": [
"@cite_40"
],
"mid": [
"2119076930"
],
"abstract": [
"A scalable architecture to facilitate emergent (self-organized) task decomposition using neural networks and evolutionary algorithms is presented. Various control system architectures are compared for a collective robotics (3 × 3 tiling pattern formation) task where emergent behaviours and effective task -decomposition techniques are necessary to solve the task. We show that bigger, more modular network architectures that exploit emergent task decomposition strategies can evolve faster and outperform comparably smaller non emergent neural networks for this task. Much like biological nervous systems, larger Emergent Task Decomposition Networks appear to evolve faster than comparable smaller networks. Unlike reinforcement learning techniques, only a global fitness function is specified, requiring limited supervision, and self-organized task decomposition is achieved through competition and specialization. The results are derived from computer simulations."
]
} |
1701.01527 | 2575335471 | Autonomous vehicles (AVs) will revolutionize ground transport and take a substantial role in the future transportation system. Most AVs are likely to be electric vehicles (EVs) and they can participate in the vehicle-to-grid (V2G) system to support various V2G services. Although it is generally infeasible for EVs to dictate their routes, we can design AV travel plans to fulfill certain system-wide objectives. In this paper, we focus on the AVs looking for parking and study how they can be led to appropriate parking facilities to support V2G services. We formulate the coordinated parking problem (CPP), which can be solved by a standard integer linear program solver but requires long computational time. To make it more practical, we develop a distributed algorithm to address CPP based on dual decomposition. We carry out a series of simulations to evaluate the proposed solution methods. Our results show that the distributed algorithm can produce nearly optimal solutions with substantially less computational time. A coarser time scale can improve computational time but degrade the solution quality, possibly resulting in an infeasible solution. Even with communication loss, the distributed algorithm can still perform well and converge with only little degradation in speed. | There is some work studying intelligent parking in general. @cite_6 investigated the availability of parking facilities for parking guidance and information systems. It developed a multivariate autoregressive model to account for the temporal and spatial relationship of parking availability. @cite_17 studied the uncoordinated parking space allocation for inexpensive limited on-street parking spots and expensive oversized parking lots. Some work focuses on AV parking. @cite_19 developed a control system for AV valet parking with a focus on steering control. @cite_39 designed an intelligent vehicle system to implement the AV valet parking service.
However, they mainly targeted AV parking control in a confined parking area. Others investigate the parking issue for a larger area. @cite_34 proposed an intelligent parking assistant architecture to manage parking spots and improve the quality of urban mobility. @cite_35 probabilistically analyzed the impact of charging and discharging of EVs in parking lots on the power grid. However, there is no thorough study of V2G based on AVs. In this work, we aim to bridge this research gap. | {
"cite_N": [
"@cite_35",
"@cite_6",
"@cite_39",
"@cite_19",
"@cite_34",
"@cite_17"
],
"mid": [
"2055626365",
"2071980703",
"1605172267",
"1992789637",
"2115332480",
"1615615219"
],
"abstract": [
"Plug-in electric vehicles in the future will possibly emerge widely in city areas. Fleets of such vehicles in large numbers could be regarded as considerable stochastic loads in view of the electrical grid. Moreover, they are not stabled in unique positions to define their impact on the grid. Municipal parking lots could be considered as important aggregators letting these vehicles interact with the utility grid in certain positions. A bidirectional power interface in a parking lot could link electric vehicles with the utility grid or any storage and dispersed generation. Such vehicles, depending on their need, could transact power with parking lots. Considering parking lots equipped with power interfaces, in more general terms, parking-to-vehicle and vehicle-to-parking are propose here instead of conventional grid-to-vehicle and vehicle-to-grid concepts. Based on statistical data and adopting general regulations on vehicles (dis)charging, a novel stochastic methodology is presented to estimate total daily impact of vehicles aggregated in parking lots on the grid. Different scenarios of plug-in vehicles' penetration are suggested in this paper and finally, the scenarios are simulated on standard grids that include several parking lots. The results show acceptable penetration level margins in terms of bus voltages and grid power loss.",
"This paper systematically explores the efficiency of uncoordinated parking space allocation in urban environments with two types of parking facilities. Drivers decide whether to go for inexpensive but limited on-street parking spots or for expensive yet overdimensioned parking lots, incurring an additional cruising cost when they decide for on-street parking spots but fail to actually acquire one. Their decisions are made under perfect knowledge of the total parking supply and costs and different levels of information about the parking demand, i.e., complete probabilistic information and uncertainty. We take a game-theoretic approach and analyze the parking-space allocation process in each case as resource selection game instances. We derive their equilibria, compute the related price-of-anarchy (PoA) values, and study the impact of pricing on them. It is shown that, under typical pricing policies on the two types of parking facilities, drivers tend to overcompete for the on-street parking space, giving rise to redundant cruising cost. However, this inefficiency can be alleviated through systematic manipulation of the information that is announced to the drivers. In particular, counterintuitive less-is-more effects emerge regarding the way that information availability modulates the resulting efficiency of the process, which underpin general competitive service provision settings.",
"The autonomous vehicle valet parking service (AVP) enables a vehicle to drive and park without any human interaction. The driverless vehicle has to follow the scheduled route and be parked in a parking lot automatically. Most fundamental technologies for autonomous vehicle valet parking service are controlling vehicle driving and parking. For autonomous driving, the path to near goal parking lot has to be generated and control commands to follow the path have to be generated continuously. For autonomous parking, the controlling vehicle steer value has to be found to park the vehicle at slot well. And also the mobile assistant system is needed for driver to select the parking lot and monitoring the state of the vehicle in remote. In this paper, we have designed and implemented of an intelligent vehicle system for autonomous vehicle valet parking service in low speed limited parking area.",
"The autonomous vehicle valet parking service (AVP) enables a vehicle to drive and park without any human interaction. The driverless vehicle has to follow the scheduled route and be parked in a parking lot automatically. Most fundamental technologies for autonomous vehicle valet parking service are controlling vehicle driving and parking. For autonomous driving, the path to near goal parking lot has to be generated and control commands to follow the path have to be generated continuously. For autonomous parking, the controlling vehicle steer value has to be found to park the vehicle at a parking lot well. We have used two turning radius circles to find vehicle steering control value about perpendicular and parallel parking method basically. These two vehicle turning radius circles are minimum radius circles in initial and are updated in real time as vehicle stop location in parking mode. In this paper, we have developed the control system for autonomous vehicle valet parking service in parking area with low speed limited.",
"Parking is becoming an expensive resource in almost any major city in the world, and its limited availability is a concurrent cause of urban traffic congestion, and air pollution. In old cities, the structure of the public parking space is rigidly organised and often in the form of on-street public parking spots. Unfortunately, these public parking spots cannot be reserved beforehand during the pre-trip phase, and that often lead to a detriment of the quality of urban mobility. Addressing the problem of managing public parking spots is therefore vital to obtain environmentally friendlier and healthier cities. Recent technological progresses in industrial automation, wireless network, sensor communication along with the widespread of high-range smart devices and new rules concerning financial transactions in mobile payment allow the definition of new intelligent frameworks that enable a convenient management of public parking in urban area, which could improve sustainable urban mobility. In such a scenario, the proposed intelligent parking assistant (IPA) architecture aims at overcoming current public parking management solutions. This study discusses the conceptual architecture of IPA and the first prototype-scale simulations of the system.",
"Parking guidance and information (PGI) systems are becoming important parts of intelligent transportation systems due to the fact that cars and infrastructure are becoming more and more connected. One major challenge in developing efficient PGI systems is the uncertain nature of parking availability in parking facilities (both on-street and off-street). A reliable PGI system should have the capability of predicting the availability of parking at the arrival time with reliable accuracy. In this paper, we study the nature of the parking availability data in a big city and propose a multivariate autoregressive model that takes into account both temporal and spatial correlations of parking availability. The model is used to predict parking availability with high accuracy. The prediction errors are used to recommend the parking location with the highest probability of having at least one parking spot available at the estimated arrival time. The results are demonstrated using real-time parking data in the areas of San Francisco and Los Angeles."
]
} |
1701.01619 | 2952927437 | We present an approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations. One common approach to combine clean and noisy data is to first pre-train a network using the large noisy dataset and then fine-tune with the clean dataset. We show this approach does not fully leverage the information contained in the clean set. Thus, we demonstrate how to use the clean annotations to reduce the noise in the large dataset before fine-tuning the network using both the clean set and the full set with reduced noise. The approach comprises a multi-task network that jointly learns to clean noisy annotations and to accurately classify images. We evaluate our approach on the recently released Open Images dataset, containing 9 million images, multiple annotations per image and over 6000 unique classes. For the small clean set of annotations we use a quarter of the validation set with 40k images. Our results demonstrate that the proposed approach clearly outperforms direct fine-tuning across all major categories of classes in the Open Image dataset. Further, our approach is particularly effective for a large number of classes with medium level of noise in annotations (20-80 false positive annotations). | Approaches to learn from noisy labeled data can generally be categorized into two groups: Approaches in the first group aim to directly learn from noisy labels and focus mainly on noise-robust algorithms, e.g., @cite_25 @cite_14 @cite_7 , and label cleansing methods to remove or correct mislabeled data, e.g., @cite_21 . Frequently, these methods face the challenge of distinguishing difficult from mislabeled training samples. Second, semi-supervised learning (SSL) approaches tackle these shortcomings by combining the noisy labels with a small set of clean labels @cite_29 . 
SSL approaches use label propagation such as constrained bootstrapping @cite_31 or graph-based approaches @cite_26 . Our work follows the semi-supervised paradigm, however, focusing on learning a mapping between noisy and clean labels and then exploiting the mapping for training deep neural networks. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_31",
"@cite_25"
],
"mid": [
"2100031962",
"2122457239",
"2102348129",
"2136504847",
"1666942233",
"1964763677",
"2146849125"
],
"abstract": [
"Convolutional networks trained on large supervised datasets produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weakly-labeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and comments, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity and learn correspondences between different languages.",
"With the advent of the Internet it is now possible to collect hundreds of millions of images. These images come with varying degrees of label information. \"Clean labels\" can be manually obtained on a small fraction, \"noisy labels\" may be extracted automatically from surrounding text, while for most images there are no labels at all. Semi-supervised learning is a principled framework for combining these different label sources. However, it scales polynomially with the number of images, making it impractical for use on gigantic collections with hundreds of millions of images and thousands of classes. In this paper we show how to utilize recent results in machine learning to obtain highly efficient approximations for semi-supervised learning that are linear in the number of images. Specifically, we use the convergence of the eigenvectors of the normalized graph Laplacian to eigenfunctions of weighted Laplace-Beltrami operators. Our algorithm enables us to apply semi-supervised learning to a database of 80 million images gathered from the Internet.",
"In this paper, we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set that is noise free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example. The probability that the class label of an example is corrupted is a function of the feature vector of the example. This would account for most kinds of noisy data one encounters in practice. We say that a learning method is noise tolerant if the classifiers learnt with noise-free data and with noisy data, both have the same classification accuracy on the noise-free data. In this paper, we analyze the noise-tolerance properties of risk minimization (under different loss functions). We show that risk minimization under 0-1 loss function has impressive noise-tolerance properties and that under squared error loss is tolerant only to uniform noise; risk minimization under other loss functions is not noise tolerant. We conclude this paper with some discussion on the implications of these theoretical results.",
"Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.",
"This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data.",
"We propose NEIL (Never Ending Image Learner), a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., \"Corolla is a kind of looks similar to Car\", \"Wheel is a part of Car\") and labels instances of the given visual categories. It is an attempt to develop the world's largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances.",
"It is usually assumed that the kind of noise existing in annotated data is random classification noise. Yet there is evidence that differences between annotators are not always random attention slips but could result from different biases towards the classification categories, at least for the harder-to-decide cases. Under an annotation generation model that takes this into account, there is a hazard that some of the training instances are actually hard cases with unreliable annotations. We show that these are relatively unproblematic for an algorithm operating under the 0--1 loss model, whereas for the commonly used voted perceptron algorithm, hard training cases could result in incorrect prediction on the uncontroversial cases at test time."
]
} |
1701.01619 | 2952927437 | We present an approach to effectively use millions of images with noisy annotations in conjunction with a small subset of cleanly-annotated images to learn powerful image representations. One common approach to combine clean and noisy data is to first pre-train a network using the large noisy dataset and then fine-tune with the clean dataset. We show this approach does not fully leverage the information contained in the clean set. Thus, we demonstrate how to use the clean annotations to reduce the noise in the large dataset before fine-tuning the network using both the clean set and the full set with reduced noise. The approach comprises a multi-task network that jointly learns to clean noisy annotations and to accurately classify images. We evaluate our approach on the recently released Open Images dataset, containing 9 million images, multiple annotations per image and over 6000 unique classes. For the small clean set of annotations we use a quarter of the validation set with 40k images. Our results demonstrate that the proposed approach clearly outperforms direct fine-tuning across all major categories of classes in the Open Image dataset. Further, our approach is particularly effective for a large number of classes with medium level of noise in annotations (20-80 false positive annotations). | Second, transfer learning has become common practice in modern computer vision. There, a network is pre-trained on a large dataset of labeled images, say ImageNet, and then used for a different but related task, by fine-tuning on a small dataset for specific tasks such as image classification and retrieval @cite_18 and image captioning @cite_33 . Unlike these works, our approach aims to train a network from scratch using noisy labels and then uses a small set of clean labels to fine-tune the network. | {
"cite_N": [
"@cite_18",
"@cite_33"
],
"mid": [
"2953391683",
"2951912364"
],
"abstract": [
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art."
]
} |
1701.00925 | 2571066139 | In this paper, we study extensions to the Gaussian processes (GPs) continuous occupancy mapping problem. There are two classes of occupancy mapping problems that we particularly investigate. The first problem is related to mapping under pose uncertainty and how to propagate pose estimation uncertainty into the map inference. We develop expected kernel and expected submap notions to deal with uncertain inputs. In the second problem, we account for the complication of the robot's perception noise using warped Gaussian processes (WGPs). This approach allows for non-Gaussian noise in the observation space and captures the possible nonlinearity in that space better than standard GPs. The developed techniques can be applied separately or concurrently to a standard GP occupancy mapping problem. According to our experimental results, although taking into account pose uncertainty leads, as expected, to more uncertain maps, by modeling the nonlinearities present in the observation space WGPs improve the map quality. | Current robotic navigation algorithms rely on a dense environment representation for safe navigation. Occupancy grid maps are the standard way of environment representation in robotics and are the natural choice for online implementations @cite_4 @cite_32 @cite_7 @cite_42 @cite_22 . However, they simplify the mapping problem to a set of independent cells and estimate the probability of occupancy for each cell by ignoring the correlation between them. The map posterior is then approximated as the product of its marginals. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_42",
"@cite_32"
],
"mid": [
"2154418813",
"1986503980",
"1991752086",
"2133844819",
"2158057652"
],
"abstract": [
"We describe the use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot. A sonar range reading provides information concerning empty and occupied volumes in a cone (subtending 30 degrees in our case) in front of the sensor. The reading is modelled as probability profiles projected onto a rasterized map, where somewhere occupied and everywhere empty areas are represented. Range measurements from multiple points of view (taken from multiple sensors on the robot, and from the same sensors after robot moves) are systematically integrated in the map. Overlapping empty volumes re-inforce each other, and serve to condense the range of occupied volumes. The map definition improves as more readings are added. The final map shows regions probably occupied, probably unoccupied, and unknown areas. The method deals effectively with clutter, and can be used for motion planning and for extended landmark recognition. This system has been tested on the Neptune mobile robot at CMU.",
"Occupancy grids have been a popular mapping technique in mobile robotics for nearly 30 years. Occupancy grids offer a discrete representation of the world and seek to determine the occupancy probability of each cell. Traditional occupancy grid mapping methods make two assumptions for computational efficiency and it has been shown that the full posterior is computationally intractable for real-world mapping applications without these assumptions. The two assumptions result in tuning parameters that control the information gained from each distance measurement. In this paper, several tuning parameters found in the literature are optimized against the full posterior in 1D. In addition, this paper presents a new parameterization of the update function that outperforms existing methods in terms of capturing residual uncertainty. Capturing the residual uncertainty better estimates the position of obstacles and prevents under- and over-confidence in both the occupied and unoccupied cells. The paper concludes by showing that the new update function better captures the residual uncertainty in each cell when compared to an offline mapping method for realistic 2D simulations.",
"This article describes a new algorithm for acquiring occupancy grid maps with mobile robots. Existing occupancy grid mapping algorithms decompose the high-dimensional mapping problem into a collection of one-dimensional problems, where the occupancy of each grid cell is estimated independently. This induces conflicts that may lead to inconsistent maps, even for noise-free sensors. This article shows how to solve the mapping problem in the original, high-dimensional space, thereby maintaining all dependencies between neighboring cells. As a result, maps generated by our approach are often more accurate than those generated using traditional techniques. Our approach relies on a statistical formulation of the mapping problem using forward models. It employs the expectation maximization algorithm for searching maps that maximize the likelihood of the sensor measurements.",
"Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"A sonar-based mapping and navigation system developed for an autonomous mobile robot operating in unknown and unstructured environments is described. The system uses sonar range data to build a multileveled description of the robot's surroundings. Sonar readings are interpreted using probability profiles to determine empty and occupied areas. Range measurements from multiple points of view are integrated into a sensor-level sonar map, using a robust method that combines the sensor information in such a way as to cope with uncertainties and errors in the data. The resulting two-dimensional maps are used for path planning and navigation. From these sonar maps, multiple representations are developed for various kinds of problem-solving activities. Several dimensions of representation are defined: the abstraction axis, the geographical axis, and the resolution axis. The sonar mapping procedures have been implemented as part of an autonomous mobile robot navigation system called Dolphin. The major modules of this system are described and related to the various mapping representations used. Results from actual runs are presented, and further research is mentioned. The system is also situated within the wider context of developing an advanced software architecture for autonomous mobile robots."
]
} |
1701.00925 | 2571066139 | In this paper, we study extensions to the Gaussian processes (GPs) continuous occupancy mapping problem. There are two classes of occupancy mapping problems that we particularly investigate. The first problem is related to mapping under pose uncertainty and how to propagate pose estimation uncertainty into the map inference. We develop expected kernel and expected submap notions to deal with uncertain inputs. In the second problem, we account for the complication of the robot's perception noise using warped Gaussian processes (WGPs). This approach allows for non-Gaussian noise in the observation space and captures the possible nonlinearity in that space better than standard GPs. The developed techniques can be applied separately or concurrently to a standard GP occupancy mapping problem. According to our experimental results, although taking into account pose uncertainty leads, as expected, to more uncertain maps, by modeling the nonlinearities present in the observation space WGPs improve the map quality. | The FastSLAM algorithm @cite_35 exploits Rao-Blackwellized particle filtering @cite_13 , and relies on the fact that, given the robot trajectory, landmarks are conditionally independent. In practice, FastSLAM can generate accurate maps; however, it suffers from degeneracy and depletion problems @cite_28 . Full SLAM methods estimate the posterior of the robot trajectory and the map, given all available data. In robotics, full SLAM using a probabilistic graphical model representation and the maximum a posteriori (MAP) estimate derived by solving a nonlinear least squares problem @cite_0 @cite_5 @cite_16 @cite_40 @cite_17 can be considered the state-of-the-art, and we thus focus our attention on using a pose-graph estimation of the robot trajectory as in @cite_26 @cite_1 . | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_28",
"@cite_1",
"@cite_0",
"@cite_40",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2103393159",
"2096052462",
"2146861713",
"2026425544",
"2064169939",
"2005389775",
"2123940811",
"2154459632",
"2149020252",
"2024908906"
],
"abstract": [
"Proceedings of IJCAI 2003. In [15], Montemerlo et al. proposed an algorithm called FastSLAM as an efficient and robust solution to the simultaneous localization and mapping problem. This paper describes a modified version of FastSLAM which overcomes important deficiencies of the original algorithm. We prove convergence of this new algorithm for linear SLAM problems and provide real-world experimental results that illustrate an order of magnitude improvement in accuracy over the original FastSLAM algorithm.",
"Pose SLAM is the variant of simultaneous localization and map building (SLAM) in which only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. To reduce the computational cost of the information filter form of Pose SLAM and, at the same time, to delay inconsistency as much as possible, we introduce an approach that takes into account only highly informative loop-closure links and nonredundant poses. This approach includes constant time procedures to compute the distance between poses, the expected information gain for each potential link, and the exact marginal covariances while moving in open loop, as well as a procedure to recover the state after a loop closure that, in practical situations, scales linearly in terms of both time and memory. Using these procedures, the robot operates most of the time in open loop, and the cost of the loop closure is amortized over long trajectories. This way, the computational bottleneck shifts to data association, which is the search over the set of previously visited poses to determine good candidates for sensor registration. To speed up data association, we introduce a method to search for neighboring poses whose complexity ranges from logarithmic in the usual case to linear in degenerate situations. The method is based on organizing the pose information in a balanced tree whose internal levels are defined using interval arithmetic. The proposed Pose-SLAM approach is validated through simulations, real mapping sessions, and experiments using standard SLAM data sets.",
"This paper presents an analysis of FastSLAM - a Rao-Blackwellised particle filter formulation of simultaneous localisation and mapping. It shows that the algorithm degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and would always produce optimistic estimates of uncertainty in the long-term. In essence, FastSLAM behaves like a non-optimal local search algorithm; in the short-term it may produce consistent uncertainty estimates but, in the long-term, it is unable to adequately explore the state-space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean and, given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice. FastSLAM also has several practical advantages, particularly with regard to data association, and would probably work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM",
"The maps that are built by standard feature-based simultaneous localization and mapping (SLAM) methods cannot be directly used to compute paths for navigation, unless enriched with obstacle or traversability information, with the consequent increase in complexity. Here, we propose a method that directly uses the Pose SLAM graph of constraints to determine the path between two robot configurations with lowest accumulated pose uncertainty, i.e., the most reliable path to the goal. The method shows improved navigation results when compared with standard path-planning strategies over both datasets and real-world experiments.",
"Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches.",
"Pose graphs have become a popular representation for solving the simultaneous localization and mapping (SLAM) problem. A pose graph is a set of robot poses connected by nonlinear constraints obtained from observations of features common to nearby poses. Optimizing large pose graphs has been a bottleneck for mobile robots, since the computation time of direct nonlinear optimization can grow cubically with the size of the graph. In this paper, we propose an efficient method for constructing and solving the linear subproblem, which is the bottleneck of these direct methods. We compare our method, called Sparse Pose Adjustment (SPA), with competing indirect methods, and show that it outperforms them in terms of convergence speed and accuracy. We demonstrate its effectiveness on a large set of indoor real-world maps, and a very large simulated dataset. Open-source implementations in C++, and the datasets, are publicly available.",
"In this paper, we address the problem of incrementally optimizing constraint networks for maximum likelihood map learning. Our approach allows a robot to efficiently compute configurations of the network with small errors while the robot moves through the environment. We apply a variant of stochastic gradient descent and use a tree-based parameterization of the nodes in the network. By integrating adaptive learning rates in the parameterization of the network, our algorithm can use previously computed solutions to determine the result of the next optimization run. Additionally, our approach updates only the parts of the network which are affected by the newly incorporated measurements and starts the optimization approach only if the new data reveals inconsistencies with the network constructed so far. These improvements yield an efficient solution for this class of online optimization problems. Our approach has been implemented and tested on simulated and on real data. We present comparisons to recently proposed online and offline methods that address the problem of optimizing constraint network. Experiments illustrate that our approach converges faster to a network configuration with small errors than the previous approaches.",
"We present iSAM2, a fully incremental, graph-based version of incremental smoothing and mapping (iSAM). iSAM2 is based on a novel graphical model-based interpretation of incremental sparse matrix factorization methods, afforded by the recently introduced Bayes tree data structure. The original iSAM algorithm incrementally maintains the square root information matrix by applying matrix factorization updates. We analyze the matrix updates as simple editing operations on the Bayes tree and the conditional densities represented by its cliques. Based on that insight, we present a new method to incrementally change the variable ordering which has a large effect on efficiency. The efficiency and accuracy of the new method is based on fluid relinearization, the concept of selectively relinearizing variables as needed. This allows us to obtain a fully incremental algorithm without any need for periodic batch steps. We analyze the properties of the resulting algorithm in detail, and show on various real and simulated datasets that the iSAM2 algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.",
"Particle filters (PFs) are powerful sampling-based inference learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as \"condensation\", \"sequential Monte Carlo\" and \"survival of the fittest\". In this paper, we show how we can exploit the structure of the DBN to increase the efficiency of particle filtering, using a technique known as Rao-Blackwellisation. Essentially, this samples some of the variables, and marginalizes out the rest exactly, using the Kalman filter, HMM filter, junction tree algorithm, or any other finite dimensional optimal filter. We show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate estimates than standard PFs. We demonstrate RBPFs on two problems, namely non-stationary online regression with radial basis function networks and robot localization and map building. We also discuss other potential application areas and provide references to some finite dimensional optimal filters.",
"This article presents GraphSLAM, a unifying algorithm for the offline SLAM problem. GraphSLAM is closely related to a recent sequence of research papers on applying optimization techniques to SLAM problems. It transforms the SLAM posterior into a graphical network, representing the log-likelihood of the data. It then reduces this graph using variable elimination techniques, arriving at a lower-dimensional problem that is then solved using conventional optimization techniques. As a result, GraphSLAM can generate maps with 10^8 or more features. The paper discusses a greedy algorithm for data association, and presents results for SLAM in urban environments with occasional GPS measurements."
]
} |
1701.00925 | 2571066139 | In this paper, we study extensions to the Gaussian processes (GPs) continuous occupancy mapping problem. There are two classes of occupancy mapping problems that we particularly investigate. The first problem is related to mapping under pose uncertainty and how to propagate pose estimation uncertainty into the map inference. We develop expected kernel and expected submap notions to deal with uncertain inputs. In the second problem, we account for the complication of the robot's perception noise using warped Gaussian processes (WGPs). This approach allows for non-Gaussian noise in the observation space and captures the possible nonlinearity in that space better than standard GPs. The developed techniques can be applied separately or concurrently to a standard GP occupancy mapping problem. According to our experimental results, although taking into account pose uncertainty leads, as expected, to more uncertain maps, by modeling the nonlinearities present in the observation space WGPs improve the map quality. | Kernel methods in the form of Gaussian processes (GPs) framework @cite_34 are non-parametric regression and classification techniques that have been used to model spatial phenomena @cite_41 . The development of GPOM is discussed in @cite_14 @cite_23 . The incremental GP map building using the Bayesian Committee Machine (BCM) technique @cite_29 is developed in @cite_11 @cite_2 @cite_18 @cite_27 and for online applications in @cite_24 . In @cite_12 , the Hilbert maps technique is proposed and is more scalable. However, it is an approximation for continuous occupancy mapping and produces maps with less accuracy than GPOM. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_41",
"@cite_29",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"655477510",
"2124825847",
"2011508070",
"1973310094",
"2410641873",
"2013229266",
"1977189000",
"856491788",
"1746819321",
"2291737362",
"2053455835"
],
"abstract": [
"In this paper, a novel solution for autonomous robotic exploration is proposed. We model the distribution of information in an unknown environment as an unsteady diffusion process, which can be an appropriate mathematical formulation and analogy for expanding, time-varying, and dynamic environments. This information distribution map is the solution of the diffusion process partial differential equation, and is regressed from sensor data as a Gaussian Process. Optimization of the process parameters leads to an optimal frontier map which describes regions of interest for further exploration. Since the presented approach considers a continuous model of the environment, it can be used to plan smooth exploration paths exploiting the structural dependencies of the environment whilst handling sparse sensor measurements. The performance of the approach is evaluated through simulation results in the well-known Freiburg and Cave maps.",
"In this paper we introduce a new statistical modeling technique for building occupancy maps. The problem of mapping is addressed as a classification task where the robot's environment is classified into regions of occupancy and unoccupancy. Our model provides both a continuous representation of the robot's surroundings and an associated predictive variance. This is obtained by employing a Gaussian process as a non-parametric Bayesian learning technique to exploit the fact that real-world environments inherently possess structure. This structure introduces a correlation between points on the map which is not accounted for by many common mapping techniques such as occupancy grids. Using a trained neural network covariance function to model the highly non-stationary datasets, it is possible to generate accurate representations of large environments at resolutions which suit the desired applications while also providing inferences into occluded regions, between beams, and beyond the range of the sensor, even with relatively few sensor readings. We demonstrate the benefits of our approach in a simulated data set with known ground-truth, and in an outdoor urban environment covering an area of 120,000 m2.",
"Building a model of large-scale terrain that can adequately handle uncertainty and incompleteness in a statistically sound way is a challenging problem. This work proposes the use of Gaussian processes as models of large-scale terrain. The proposed model naturally provides a multiresolution representation of space, incorporates and handles uncertainties aptly, and copes with incompleteness of sensory information. Gaussian process regression techniques are applied to estimate and interpolate (to fill gaps in occluded areas) elevation information across the field. The estimates obtained are the best linear unbiased estimates for the data under consideration. A single nonstationary (neural network) Gaussian process is shown to be powerful enough to model large and complex terrain, effectively handling issues relating to discontinuous data. A local approximation method based on a “moving window” methodology and implemented using k-dimensional (KD)-trees is also proposed. This enables the approach to handle extremely large data sets, thereby completely addressing its scalability issues. Experiments are performed on large-scale data sets taken from real mining applications. These data sets include sparse mine planning data, which are representative of a global positioning system–based survey, as well as dense laser scanner data taken at different mine sites. Further, extensive statistical performance evaluation and benchmarking of the technique has been performed through cross-validation experiments. They conclude that for dense and-or flat data, the proposed approach will perform very competitively with grid-based approaches using standard interpolation techniques and triangulated irregular networks using triangle-based interpolation techniques; for sparse and-or complex data, however, it would significantly outperform them. © 2009 Wiley Periodicals, Inc.",
"The Bayesian committee machine (BCM) is a novel approach to combining estimators that were trained on different data sets. Although the BCM can be applied to the combination of any kind of estimators, the main foci are gaussian process regression and related systems such as regularization networks and smoothing splines for which the degrees of freedom increase with the number of training data. Somewhat surprisingly, we find that the performance of the BCM improves if several test points are queried at the same time and is optimal if the number of test points is at least as large as the degrees of freedom of the estimator. The BCM also provides a new solution for on-line learning with potential applications to data mining. We apply the BCM to systems with fixed basis functions and discuss its relationship to gaussian process regression. Finally, we show how the ideas behind the BCM can be applied in a non-Bayesian setting to extend the input-dependent combination of estimators.",
"We present a novel algorithm to produce descriptive online 3D occupancy maps using Gaussian processes (GPs). GP regression and classification have met with recent success in their application to robot mapping, as GPs are capable of expressing rich correlation among map cells and sensor data. However, the cubic computational complexity has limited its application to large-scale mapping and online use. In this paper we address this issue first by proposing test-data octrees, octrees within blocks of the map that prune away nodes of the same state, condensing the number of test data used in a regression, in addition to allowing fast data retrieval. We also propose a nested Bayesian committee machine which, after new sensor data is partitioned among several GP regressions, fuses the result and updates the map with greatly reduced complexity. Finally, by adjusting the range of influence of the training data and tuning a variance threshold implemented in our method's binary classification step, we are able to control the richness of inference achieved by GPs - and its tradeoff with classification accuracy. The performance of the proposed approach is evaluated with both simulated and real data, demonstrating that the method may serve both as an improved-accuracy classifier, and as a predictive tool to support autonomous navigation.",
"An information-driven autonomous robotic explo- ration method on a continuous representation of unknown envi- ronments is proposed in this paper. The approach conveniently handles sparse sensor measurements to build a continuous model of the environment that exploits structural dependencies without the need to resort to a fixed resolution grid map. A gradient field of occupancy probability distribution is regressed from sensor data as a Gaussian process providing frontier boundaries for further exploration. The resulting continuous global frontier surface completely describes unexplored regions and, inherently, provides an automatic stop criterion for a desired sensitivity. The performance of the proposed approach is evaluated through simulation results in the well-known Freiburg and Cave maps.",
"We introduce a new statistical modelling technique for building occupancy maps. The problem of mapping is addressed as a classification task where the robot's environment is classified into regions of occupancy and free space. This is obtained by employing a modified Gaussian process as a non-parametric Bayesian learning technique to exploit the fact that real-world environments inherently possess structure. This structure introduces dependencies between points on the map which are not accounted for by many common mapping techniques such as occupancy grids. Our approach is an 'anytime' algorithm that is capable of generating accurate representations of large environments at arbitrary resolutions to suit many applications. It also provides inferences with associated variances into occluded regions and between sensor beams, even with relatively few observations. Crucially, the technique can handle noisy data, potentially from multiple sources, and fuse it into a robust common probabilistic representation of the robot's surroundings. We demonstrate the benefits of our approach on simulated datasets with known ground truth and in outdoor urban environments.",
"In this paper, a novel solution for autonomous robotic exploration is proposed. We model the distribution of information in an unknown environment as an unsteady diffusion process, which can be an appropriate mathematical formulation and analogy for expanding, time-varying, and dynamic envi- ronments. This information distribution map is the solution of the diffusion process partial differential equation, and is regressed from sensor data as a Gaussian Process. Optimization of the process parameters leads to an optimal frontier map which describes regions of interest for further exploration. Since the presented approach considers a continuous model of the environment, it can be used to plan smooth exploration paths exploiting the structural dependencies of the environment whilst handling sparse sensor measurements. The performance of the approach is evaluated through simulation results in the well- known Freiburg and Cave maps.",
"Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.",
"",
"This paper proposes a new method for occupancy map building using a mixture of Gaussian processes. We consider occupancy maps as a binary classification problem of positions being occupied or not, and apply Gaussian processes. Particularly, since the computational complexity of Gaussian processes grows as O(n3), where n is the number of data points, we divide the training data into small subsets and apply a mixture of Gaussian processes. The procedure of our map building method consists of three steps. First, we cluster acquired data by grouping laser hit points on the same line into the same cluster. Then, we build local occupancy maps by using Gaussian processes with clustered data. Finally, local occupancy maps are merged into one by using a mixture of Gaussian processes. Simulation results will be compared with previous researches and provided demonstrating the benefits of the approach."
]
} |
1701.01061 | 2583196029 | Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel SGX allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and digital rights management and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. | Traditionally, security kernels are used to achieve strong process and resource isolation. It was soon realized that security kernels themselves are the Achilles' heel of whole-system security and need complexity reduction. This paved the way for microkernels, of which seL4 is a prominent example. The developers of seL4 reduced the kernel code size to 8,700 lines of C code and conducted formal verification to prove its correctness @cite_51 . Even with a secure microkernel in place, designing a fully-featured, secure OS on top of it is still an unmet challenge @cite_63 . Hence, one has to live with a feature-rich but untrusted OS. | {
"cite_N": [
"@cite_51",
"@cite_63"
],
"mid": [
"2136310957",
"2257576420"
],
"abstract": [
"Complete formal verification is the only known way to guarantee that a system is free of programming errors. We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, and hardware, and we used a unique design approach that fuses formal and operating systems techniques. To our knowledge, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. Functional correctness means here that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as the kernel will never crash, and it will never perform an unsafe operation. It also proves much more: we can predict precisely how the kernel will behave in every possible situation. seL4, a third-generation microkernel of L4 provenance, comprises 8,700 lines of C code and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels.",
"Despite a number of radical changes in how computer systems are used, the design principles behind the very core of the systems stack---an operating system kernel---has remained unchanged for decades. We run monolithic kernels developed with a combination of an unsafe programming language, global sharing of data structures, opaque interfaces, and no explicit knowledge of kernel protocols. Today, the monolithic architecture of a kernel is the main factor undermining its security, and even worse, limiting its evolution towards a safer, more secure environment. Lack of isolation across kernel subsystems allows attackers to take control over the entire machine with a single kernel vulnerability. Furthermore, complex, semantically rich monolithic code with globally shared data structures and no explicit interfaces is not amenable to formal analysis and verification tools. Even after decades of work to make monolithic kernels more secure, over a hundred serious kernel vulnerabilities are still reported every year. Modern kernels need decomposition as a practical means of confining the effects of individual attacks. Historically, decomposed kernels were prohibitively slow. Today, the complexity of a modern kernel prevents a trivial decomposition effort. We argue, however, that despite all odds modern kernels can be decomposed. Careful choice of communication abstractions and execution model, a general approach to decomposition, a path for incremental adoption, and automation through proper language tools can address complexity of decomposition and performance overheads of decomposed kernels. Our work on lightweight capability domains (LCDs) develops principles, mechanisms, and tools that enable incremental, practical decomposition of a modern operating system kernel."
]
} |
1701.01061 | 2583196029 | Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel SGX allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and digital rights management and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. | Isolating an application from such an untrusted OS is essential for any trusted path. There exist various techniques for isolated execution, which range from pure hypervisor designs @cite_13 @cite_30 @cite_26 @cite_35 @cite_37 @cite_25 through hardware-software co-designs @cite_54 @cite_57 @cite_40 to pure hardware extensions @cite_31 @cite_12 @cite_60 @cite_29 @cite_8 @cite_16 @cite_23 . SGXIO uses Intel SGX @cite_29 , which, from a functional perspective, consolidates previous work in software isolation, attestation and transparent memory encryption. Others isolate sensitive code by keeping it entirely in the CPU cache @cite_47 or by migrating it to system management RAM on x86 @cite_3 . Often, dedicated security co-processors are used @cite_1 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_54",
"@cite_29",
"@cite_3",
"@cite_31",
"@cite_60",
"@cite_8",
"@cite_23",
"@cite_37",
"@cite_26",
"@cite_57",
"@cite_40",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_1",
"@cite_47",
"@cite_13"
],
"mid": [
"2168601499",
"",
"",
"",
"2053343312",
"2108255910",
"",
"2011491452",
"",
"2150615820",
"",
"1978703818",
"2000375627",
"2463516579",
"2130694829",
"2043366397",
"2065207200",
"2149256534",
"2150709728"
],
"abstract": [
"Hypervisors are increasingly utilized in modern computer systems, ranging from PCs to web servers and data centers. Aside from server applications, hypervisors are also becoming a popular target for implementing many security systems, since they provide a small and easy-to-secure trusted computing base. This paper presents a novel way of using hypervisors to protect application data privacy even when the underlying operating system is not trustable. Each page in virtual address space is rendered to user applications according to the security context the application is running in. The hypervisor encrypts and decrypts each memory page requested depending on the application's access permission to the page. The main result of this system is the complete removal of the operating system from the trust base for user applications' data privacy. To reduce the runtime overhead of the system, two optimization techniques are employed. We use page-frame replication to reduce the number ofcryptographic operations by keeping decrypted versions of a page frame. We also employ lazy synchronization to minimize overhead due to an update to one of the replicated page frame. Our system is implemented and evaluated by modifying the Xen hypervisor, showing that it increases the application execution time only by 3 for CPU and memory-intensive workloads.",
"",
"",
"",
"SICE is a novel framework to provide hardware-level isolation and protection for sensitive workloads running on x86 platforms in compute clouds. Unlike existing isolation techniques, SICE does not rely on any software component in the host environment (i.e., an OS or a hypervisor). Instead, the security of the isolated environments is guaranteed by a trusted computing base that only includes the hardware, the BIOS, and the System Management Mode (SMM). SICE provides fast context switching to and from an isolated environment, allowing isolated workloads to time-share the physical platform with untrusted workloads. Moreover, SICE supports a large range (up to 4GB) of isolated memory. Finally, the most unique feature of SICE is the use of multicore processors to allow the isolated environments to run concurrently and yet securely beside the untrusted host. We have implemented a SICE prototype using an AMD x86 hardware platform. Our experiments show that SICE performs fast context switching (67 microseconds) to and from the isolated environment and that it imposes a reasonable overhead (3 on all but one benchmark) on the operation of an isolated Linux virtual machine. Our prototype demonstrates that, subject to a careful security review of the BIOS software and the SMM hardware implementation, current hardware architecture already provides abstractions that can support building strong isolation mechanisms using a very small SMM software foundation of about 300 lines of code.",
"Although there have been attempts to develop code transformations that yield tamper-resistant software, no reliable software-only methods are know. This paper studies the hardware implementation of a form of execute-only memory (XOM) that allows instructions stored in memory to be executed but not otherwise manipulated. To support XOM code we use a machine that supports internal compartments---a process in one compartment cannot read data from another compartment. All data that leaves the machine is encrypted, since we assume external memory is not secure. The design of this machine poses some interesting trade-offs between security, efficiency, and flexibility. We explore some of the potential security issues as one pushes the machine to become more efficient and flexible. Although security carries a performance penalty, our analysis indicates that it is possible to create a normal multi-tasking machine where nearly all applications can be run in XOM mode. While a virtual XOM machine is possible, the underlying hardware needs to support a unique private key, private memory, and traps on cache misses. For efficient operation, hardware assist to provide fast symmetric ciphers is also required.",
"",
"We consider the problem of how to provide an execution environment where the application's secrets are safe even in the presence of malicious system software layers. We propose Iso-X --- a flexible, fine-grained hardware-supported framework that provides isolation for security-critical pieces of an application such that they can execute securely even in the presence of untrusted system software. Isolation in Iso-X is achieved by creating and dynamically managing compartments to host critical fragments of code and associated data. Iso-X provides fine-grained isolation at the memory-page level, flexible allocation of memory, and a low-complexity, hardware-only trusted computing base. Iso-X requires minimal additional hardware, a small number of new ISA instructions to manage compartments, and minimal changes to the operating system which need not be in the trusted computing base. The run-time performance overhead of Iso-X is negligible and even the overhead of creating and destroying compartments is modest. Iso-X offers higher memory flexibility than the recently proposed SGX design from Intel, allowing both fluid partitioning of the vailable memory space and dynamic growth of compartments. An FPGA implementation of Iso-X runtime mechanisms shows a negligible impact on the processor cycle time.",
"",
"InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification, a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes.",
"",
"We present Bastion, a new hardware-software architecture for protecting security-critical software modules in an untrusted software stack. Our architecture is composed of enhanced microprocessor hardware and enhanced hypervisor software. Each trusted software module is provided with a secure, fine-grained memory compartment and its own secure persistent storage area. Bastion is the first architecture to provide direct hardware protection of the hypervisor from both software and physical attacks, before employing the hypervisor to provide the same protection to security-critical OS and application modules. Our implementation demonstrates the feasibility of bypassing an untrusted commodity OS to provide application security and shows better security with higher performance when compared to the Trusted Platform Module (TPM), the current industry state-of-the-art security chip. We provide a proof-of-concept implementation on the OpenSPARC platform.",
"With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.",
"Sanctum offers the same promise as Intel’s Software Guard Extensions (SGX), namely strong provable isolation of software modules running concurrently and sharing resources, but protects against an important class of additional software attacks that infer private information from a program’s memory access patterns. Sanctum shuns unnecessary complexity, leading to a simpler security analysis. We follow a principled approach to eliminating entire attack surfaces through isolation, rather than plugging attack-specific privacy leaks. Most of Sanctum’s logic is implemented in trusted software, which does not perform cryptographic operations using keys, and is easier to analyze than SGX’s opaque microcode, which does. Our prototype targets a Rocket RISC-V core, an open implementation that allows any researcher to reason about its security properties. Sanctum’s extensions can be adapted to other processor cores, because we do not change any major CPU building block. Instead, we add hardware at the interfaces between generic building blocks, without impacting cycle time. Sanctum demonstrates that strong software isolation is achievable with a surprisingly small set of minimally invasive hardware changes, and a very reasonable overhead.",
"Applications that process sensitive data can be carefully designed and validated to be difficult to attack, but they are usually run on monolithic, commodity operating systems, which may be less secure. An OS compromise gives the attacker complete access to all of an application's data, regardless of how well the application is built. We propose a new system, Virtual Ghost, that protects applications from a compromised or even hostile OS. Virtual Ghost is the first system to do so by combining compiler instrumentation and run-time checks on operating system code, which it uses to create ghost memory that the operating system cannot read or write. Virtual Ghost interposes a thin hardware abstraction layer between the kernel and the hardware that provides a set of operations that the kernel must use to manipulate hardware, and provides a few trusted services for secure applications such as ghost memory management, encryption and signing services, and key management. Unlike previous solutions, Virtual Ghost does not use a higher privilege level than the kernel. Virtual Ghost performs well compared to previous approaches; it outperforms InkTag on five out of seven of the LMBench microbenchmarks with improvements between 1.3x and 14.3x. For network downloads, Virtual Ghost experiences a 45 reduction in bandwidth at most for small files and nearly no reduction in bandwidth for large files and web traffic. An application we modified to use ghost memory shows a maximum additional overhead of 5 due to the Virtual Ghost protections. We also demonstrate Virtual Ghost's efficacy by showing how it defeats sophisticated rootkit attacks.",
"We present OASIS, a CPU instruction set extension for externally verifiable initiation, execution, and termination of an isolated execution environment with a trusted computing base consisting solely of the CPU. OASIS leverages the hardware components available on commodity CPUs to achieve a low-cost, low-overhead design.",
"Secure coprocessors enable secure distributed applications by providing safe havens where an application program can execute (and accumulate state), free of observation and interference by an adversary with direct physical access to the device. However, for these coprocessors to be effective, participants in such applications must be able to verify that they are interacting with an authentic program on an authentic, untampered device. Furthermore, secure coprocessors that support general-purpose computation and will be manufactured and distributed as commercial products must provide these core sanctuary and authentication properties while also meeting many additional challenges, including: • the applications, operating system, and underlying security management may all come from different, mutually suspicious authorities; • configuration and maintenance must occur in a hostile environment, while minimizing disruption of operations; • the device must be able to recover from the vulnerabilities that inevitably emerge in complex software; • physical security dictates that the device itself can never be opened and examined; and • ever-evolving cryptographic requirements dictate that hardware accelerators be supported by reloadable on-card software. This paper summarizes the hardware, software, and cryptographic architecture we developed to address these problems. Furthermore, with our colleagues, we have implemented this solution into a commercially available product.",
"Much effort has been spent to reduce the software Trusted Computing Base (TCB) of modern systems. However, there remains a large and complex hardware TCB, including memory, peripherals, and system buses. There are many stronger, but still realistic, adversary models where we need to consider that this hardware may be malicious or compromised. Thus, there is a practical need to determine whether we can achieve secure program execution in the presence of not only malicious software, but also malicious hardware.",
"Commodity operating systems entrusted with securing sensitive data are remarkably large and complex, and consequently, frequently prone to compromise. To address this limitation, we introduce a virtual-machine-based system called Overshadow that protects the privacy and integrity of application data, even in the event of a total OS compromise. Overshadow presents an application with a normal view of its resources, but the OS with an encrypted view. This allows the operating system to carry out the complex task of managing an application's resources, without allowing it to read or modify them. Thus, Overshadow offers a last line of defense for application data. Overshadow builds on multi-shadowing, a novel mechanism that presents different views of \"physical\" memory, depending on the context performing the access. This primitive offers an additional dimension of protection beyond the hierarchical protection domains implemented by traditional operating systems and processor architectures. We present the design and implementation of Overshadow and show how its new protection semantics can be integrated with existing systems. Our design has been fully implemented and used to protect a wide range of unmodified legacy applications running on an unmodified Linux operating system. We evaluate the performance of our implementation, demonstrating that this approach is practical."
]
} |
1701.01061 | 2583196029 | Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel SGX allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and digital rights management and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. | Any communication with the untrusted OS is problematic and needs careful validation @cite_22 @cite_62 . Baumann et al. therefore reduce the untrusted syscall interface to a bare minimum and shift a whole Windows 8 library into an SGX enclave, which tremendously increases the TCB @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_62",
"@cite_22"
],
"mid": [
"1852007091",
"2112735498",
"2109770980"
],
"abstract": [
"Today's cloud computing infrastructure requires substantial trust. Cloud users rely on both the provider's staff and its globally-distributed software hardware platform not to expose any of their private data. We introduce the notion of shielded execution, which protects the confidentiality and integrity of a program and its data from the platform on which it runs (i.e., the cloud operator's OS, VM and firmware). Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, but also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification.",
"In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them. We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest. Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.",
"Complexity in commodity operating systems makes compromises inevitable. Consequently, a great deal of work has examined how to protect security-critical portions of applications from the OS through mechanisms such as microkernels, virtual machine monitors, and new processor architectures. Unfortunately, most work has focused on CPU and memory isolation and neglected OS semantics. Thus, while much is known about how to prevent OS and application processes from modifying each other, far less is understood about how different OS components can undermine application security if they turn malicious. We consider this problem in the context of our work on Overshadow, a virtual-machine-based system for retrofitting protection in commodity operating systems. We explore how malicious behavior in each major OS sub-system can undermine application security, and present potential mitigations. While our discussion is presented in terms of Overshadow and Linux, many of the problems and solutions are applicable to other systems where trusted applications rely on untrusted, potentially malicious OS components."
]
} |
1701.01061 | 2583196029 | Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel SGX allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and digital rights management and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. | Zhou et al. build a generic trusted path on x86 systems in a pure hypervisor-based design @cite_5 . They show the first comprehensive approach on x86 systems, protecting a trusted path all the way from the application level down to the device level. They consider PCI device misconfiguration, DMA attacks as well as interrupt spoofing attacks. However, pure hypervisor-based designs come at a price. They strictly separate the untrusted stack from the trusted one. Hence, the hypervisor is in charge of managing all secure applications and all associated resources itself.
This includes secure process and memory management with scheduling, verified launch and attestation. Also, communication between both security domains might be non-trivial due to synchronization issues or potentially mismatching Application Binary Interfaces (ABI). In contrast, SGXIO uses the comparably easy programming model of SGX enclaves, in which the untrusted OS is in charge of managing secure enclaves. Moreover, SGX provides verified launch and attestation out of the box. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2018045700"
],
"abstract": [
"A trusted path is a protected channel that assures the secrecy and authenticity of data transfers between a user's input/output (I/O) device and a program trusted by that user. We argue that, despite its incontestable necessity, current commodity systems do not support trusted path with any significant assurance. This paper presents a hypervisor-based design that enables a trusted path to bypass an untrusted operating-system, applications, and I/O devices, with a minimal Trusted Computing Base (TCB). We also suggest concrete I/O architectural changes that will simplify future trusted-path system design. Our system enables users to verify the states and configurations of one or more trusted-paths using a simple, secretless, hand-held device. We implement a simple user-oriented trusted path as a case study."
]
} |
1701.01061 | 2583196029 | Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel SGX allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and digital rights management and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. | In @cite_2 and @cite_55 , Zhou et al. discuss a trusted path to a single USB device. Yu et al. show how to apply the trusted path concept to GPU separation @cite_7 . Filyanov et al. discuss a pure uni-directional trusted path using the TPM and Intel TXT @cite_4 , which has two notable drawbacks: First, it is limited to trusted input paths only. Second, while waiting for secure input from the user, the OS is suspended. In contrast, SGXIO supports full parallelism between the untrusted OS and secure applications. | {
"cite_N": [
"@cite_55",
"@cite_4",
"@cite_7",
"@cite_2"
],
"mid": [
"2062025743",
"2156491735",
"2026210364",
"168427334"
],
"abstract": [
"To be trustworthy, security-sensitive applications must be formally verified and hence small and simple, i.e., wimpy. Thus, they cannot include a variety of basic services available only in large and untrustworthy commodity systems, i.e., in giants. Hence, wimps must securely compose with giants to survive on commodity systems, i.e., rely on giants' services but only after efficiently verifying their results. This paper presents a security architecture based on a wimpy kernel that provides on-demand isolated I/O channels for wimp applications, without bloating the underlying trusted computing base. The size and complexity of the wimpy kernel are minimized by safely outsourcing I/O subsystem functions to an untrusted commodity operating system and exporting driver and I/O subsystem code to wimp applications. Using the USB subsystem as a case study, this paper illustrates the dramatic reduction of wimpy-kernel size and complexity, e.g., over 99% of the USB code base is removed. Performance measurements indicate that the wimpy-kernel architecture exhibits the desired execution efficiency.",
"Commodity computer systems today do not include a full trusted path capability. Consequently, malware can control the user's input and output in order to reveal sensitive information to malicious parties or to generate manipulated transaction requests to service providers. Recent hardware offers compelling features for remote attestation and isolated code execution, however, these mechanisms are not widely used in deployed systems to date. We show how to leverage these mechanisms to establish a “one-way” trusted path allowing service providers to gain assurance that users' transactions were indeed submitted by a human operating the computer, instead of by malware such as transaction generators. We design, implement, and evaluate our solution, and argue that it is practical and offers immediate value in e-commerce, as a replacement for captchas, and in other Internet scenarios.",
"A trusted display service assures the confidentiality and authenticity of content output by a security-sensitive application and thus prevents a compromised commodity operating system or application from surreptitiously reading or modifying the displayed output. Past approaches have failed to provide trusted display on commodity platforms that use modern graphics processing units (GPUs). For example, full GPU virtualization encourages the sharing of GPU address space with multiple virtual machines without providing adequate hardware protection mechanisms; e.g., address-space separation and instruction execution control. This paper proposes a new trusted display service that has a minimal trusted code base and maintains full compatibility with commodity computing platforms. The service relies on a GPU separation kernel that (1) defines different types of GPU objects, (2) mediates access to security-sensitive objects, and (3) emulates objects whenever required by commodity-platform compatibility. The separation kernel employs a new address-space separation mechanism that avoids the challenging problem of GPU instruction verification without adequate hardware support. The implementation of the trusted-display service has a code base that is two orders of magnitude smaller than other similar services, such as those based on full GPU virtualization. Performance measurements show that the trusted-display overhead added over and above that of the underlying trusted system is fairly modest.",
"Today large software systems (i.e., giants) thrive in commodity markets, but are untrustworthy due to their numerous and inevitable software bugs that can be exploited by the adversary. Thus, the best hope of security is that some small, simple, and trustworthy software components (i.e., wimps) can be protected from attacks launched by adversary-controlled giants. However, wimps in isolation typically give up a variety of basic services (e.g., file system, networking, device I/O), trading usefulness and viability with security. Among these basic services, isolated I/O channels remained an unmet challenge over the past three decades. Isolated I/O is a critical security primitive for a myriad of applications (e.g., secure user interface, remote device control). With this primitive, isolated wimps can transfer I/O data to commodity peripheral devices and the data secrecy and authenticity are protected from the co-existing giants. This thesis addresses this challenge by proposing a new security architecture to provide services to isolated wimps. Instead of restructuring the giants or bloating the Trusted Computing Base that underpins wimp-giant isolation (dubbed underlying TCB), this thesis presents a unique on-demand I/O isolation model and a trusted add-on component called wimpy kernel to instantiate this model. In our model, the wimpy kernel dynamically takes control of the devices managed by a commodity OS, connects them to the isolated wimps, and relinquishes controls to the OS when done. This model creates ample opportunities for the wimpy kernel to outsource I/O subsystem functions to the untrusted OS and verify their results. The wimpy kernel further exports device drivers and I/O subsystem code to wimps and mediates the operations of the exported code. These two methodologies help to significantly reduce the size and complexity of the wimpy kernel for high security assurance. 
Using the popular and complex USB subsystem as a case study, this thesis illustrates the dramatic reduction of the wimpy kernel; i.e., over 99% of the Linux USB code base is removed. In addition, the wimpy kernel also composes with the underlying TCB, by retaining its code size, complexity and security properties."
]
} |
1701.01061 | 2583196029 | Application security traditionally strongly relies upon security of the underlying operating system. However, operating systems often fall victim to software attacks, compromising security of applications as well. To overcome this dependency, Intel SGX allows to protect application code against a subverted or malicious OS by running it in a hardware-protected enclave. However, SGX lacks support for generic trusted I/O paths to protect user input and output between enclaves and I/O devices. This work presents SGXIO, a generic trusted path architecture for SGX, allowing user applications to run securely on top of an untrusted OS, while at the same time supporting trusted paths to generic I/O devices. To achieve this, SGXIO combines the benefits of SGX's easy programming model with traditional hypervisor-based trusted path architectures. Moreover, SGXIO can tweak insecure debug enclaves to behave like secure production enclaves. SGXIO surpasses traditional use cases in cloud computing and digital rights management and makes SGX technology usable for protecting user-centric, local applications against kernel-level keyloggers and likewise. It is compatible to unmodified operating systems and works on a modern commodity notebook out of the box. Hence, SGXIO is particularly promising for the broad x86 community to which SGX is readily available. | ARM TrustZone combines secure execution with trusted path support. A TrustZone compatible CPU provides a secure world mode, which is orthogonal to classical privilege levels. The secure world is isolated against the normal world and operates a whole trusted stack, including security kernel, device drivers and applications. In addition, TrustZone is complemented by a set of hardware modules, which allow strong isolation of physical memory as well as peripherals. Also, device interrupts can be directly routed into the secure world.
TrustZone can be combined with a System MMU, similar to an IOMMU, which can prevent DMA attacks. Thus, TrustZone not only allows isolated execution @cite_42 but also generic trusted paths @cite_38 , which is a significant advantage over SGX. In contrast to SGX, TrustZone does not distinguish between different secure application processes in hardware. It requires a security kernel for secure process isolation, management, attestation and similar. | {
"cite_N": [
"@cite_38",
"@cite_42"
],
"mid": [
"2136566423",
"1493190345"
],
"abstract": [
"Mobile devices are frequently used as terminals to interact with many security-critical services such as mobile payment and online banking. However, the large client software stack and the continuous proliferation of malware expose such interaction under various threats, including passive attacks like phishing and active ones like direct code manipulation. This paper proposes TrustUI, a new trusted path design for mobile devices that enables secure interaction between end users and services based on ARM's TrustZone technology. TrustUI is built with a combination of key techniques including cooperative randomization of the trusted path and secure delegation of network interaction. With such techniques, TrustUI not only requires no trust of the commodity software stack, but also takes a step further by excluding drivers for user-interacting devices like touch screen from its trusted computing base (TCB). Hence, TrustUI has a much smaller TCB, requires no access to device driver code, and may easily adapt to many devices. A prototype of TrustUI has been implemented on a Samsung Exynos 4412 board and evaluation shows that TrustUI provides strong protection of users' interactions.",
"Mobile devices have been widely used to process sensitive data and perform important transactions. It is a challenge to protect secure code from a malicious mobile OS. ARM TrustZone technology can protect secure code in a secure domain from an untrusted normal domain. However, since the attack surface of the secure domain will increase along with the size of secure code, it becomes arduous to negotiate with OEMs to get new secure code installed. We propose a novel TrustZone-based isolation framework named TrustICE to create isolated computing environments (ICEs) in the normal domain. TrustICE securely isolates the secure code in an ICE from an untrusted Rich OS in the normal domain. The trusted computing base (TCB) of TrustICE remains small and unchanged regardless of the amount of secure code being protected. Our prototype shows that the switching time between an ICE and the Rich OS is less than 12 ms."
]
} |
1701.00837 | 2949546646 | This paper investigates the use of WiFi and mobile device-to-device networks, with vehicular ad hoc networks being a typical example, as a complementary means to offload and reduce the traffic load of cellular networks. A novel cooperative content dissemination strategy is proposed for a heterogeneous network consisting of different types of devices with different levels of mobility, ranging from static WiFi access points to mobile devices such as vehicles. The proposed strategy offloads a significant proportion of data traffic from cellular networks to WiFi or device-to-device networks. Detailed analysis is provided for the content dissemination process in heterogeneous networks adopting the strategy. On that basis, the optimal parameter settings for the content dissemination strategy are discussed. Simulation and numerical studies show that the proposed strategy effectively reduces data traffic for cellular networks while guaranteeing successful content delivery. | Using a typical cooperative content dissemination strategy @cite_13 , the service providers first deliver the content to only a small group of users, then these users can further disseminate the content to other subscribed users when their mobile devices are in the proximity and can communicate using WiFi tethering or Bluetooth technology. It is obvious that such opportunistic content dissemination cannot guarantee the delivery of content. This paper proposes a mechanism that can provide guarantee on the delivery of content. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1986247300"
],
"abstract": [
"Over the last few years, data traffic over cellular networks has seen an exponential rise, primarily due to the explosion of smartphones, tablets, and laptops. This increase in data traffic on cellular networks has caused an immediate need for offloading traffic for optimum performance of both voice and data services. As a result, different innovative solutions have emerged to manage data traffic. Some of the key technologies include Wi-Fi, femtocells, and IP flow mobility. The growth of data traffic is also creating challenges for the backhaul of cellular networks; therefore, solutions such as core network offloading and media optimization are also gaining popularity. This article aims to provide a survey of mobile data offloading technologies including insights from the business perspective as well."
]
} |
1701.00837 | 2949546646 | This paper investigates the use of WiFi and mobile device-to-device networks, with vehicular ad hoc networks being a typical example, as a complementary means to offload and reduce the traffic load of cellular networks. A novel cooperative content dissemination strategy is proposed for a heterogeneous network consisting of different types of devices with different levels of mobility, ranging from static WiFi access points to mobile devices such as vehicles. The proposed strategy offloads a significant proportion of data traffic from cellular networks to WiFi or device-to-device networks. Detailed analysis is provided for the content dissemination process in heterogeneous networks adopting the strategy. On that basis, the optimal parameter settings for the content dissemination strategy are discussed. Simulation and numerical studies show that the proposed strategy effectively reduces data traffic for cellular networks while guaranteeing successful content delivery. | As vehicular networks form an important class of mobile networks, there is significant research on content dissemination in vehicular networks @cite_1 . Wireless access through vehicle-to-roadside communications can be used in public transport vehicles for streaming applications, e.g., videos and interactive advertisements. As pointed out in @cite_20 , it is challenging to develop an efficient wireless access scheme that minimises the cost of wireless connectivity for downloading data. There are also some empirical studies on content dissemination methods for vehicular networks @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_20"
],
"mid": [
"2124063321",
"2077022807",
"2143139102"
],
"abstract": [
"With the fast development in ad hoc wireless communications and vehicular technology, it is foreseeable that, in the near future, traffic information will be collected and disseminated in real-time by mobile sensors instead of fixed sensors used in the current infrastructure-based traffic information systems. A distributed network of vehicles such as a vehicular ad hoc network (VANET) can easily turn into an infrastructure-less self-organizing traffic information system, where any vehicle can participate in collecting and reporting useful traffic information such as section travel time, flow rate, and density. Disseminating traffic information relies on broadcasting protocols. Recently, there have been a significant number of broadcasting protocols for VANETs reported in the literature. In this paper, we classify and provide an in-depth review of these protocols.",
"A major problem in wireless sensor network localization is erroneous local geometric realizations in some parts of the network due to the sensitivity to certain distance measurement errors, which may in turn affect the reliability of the localization of the whole or a major portion of the sensor network. This phenomenon is well-described using the notion of \"flip ambiguity\" in rigid graph theory. In a recent study by the coauthors, an initial formal geometric analysis of flip ambiguity problems has been provided. The ultimate aim of that study was to quantify the likelihood of flip ambiguities in arbitrary sensor neighborhood geometries. In this paper we propose a more general robustness criterion to detect flip ambiguities in arbitrary sensor neighborhood geometries in planar sensor networks. This criterion enhances the recent study by the coauthors by removing the assumptions of accurately knowing some inter- sensor distances. The established robustness criterion is found to be useful in two aspects: (a) Analyzing the effects of flip ambiguity and (b) Enhancing the reliability of the location estimates of the prevailing localization algorithms by incorporating this robustness criterion to eliminate neighborhoods with flip ambiguity from being included in the localization process.",
"Wireless access through vehicle-to-roadside (V2R) communications can be used in public transportation vehicles for streaming applications, e.g., video and interactive advertisements. Efficient wireless access schemes are required to minimize the cost of wireless connectivity for downloading data and to minimize the probability of buffer outage (or underrun) during playout of the streaming data. In this paper, a hierarchical optimization framework is presented to obtain both short- and long-term optimal decisions on wireless access for streaming applications in a public transportation system. This framework considers the application requirements, the cost of wireless connectivity for transit service provider, and the revenue of the wireless network service provider. The onboard unit in a vehicle makes a short-term decision on whether to download streaming data while it is in the coverage area of a roadside unit (RSU). The objective of this decision is to minimize the cost of wireless connectivity while satisfying the application quality-of-service (QoS) requirement in terms of the maximum buffer underrun probability. The transit service provider makes a long-term decision on whether to reserve wireless access or to access wireless service in an on-demand basis in a service area of an RSU. The network service provider makes a long-term decision on pricing on-demand wireless access for V2R communications so that the highest revenue is achieved."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | The present paper is built upon two previous shorter papers by the same authors @cite_33 @cite_54 . It focuses on the study of the general type of models described by probability measures ) and ). On the one hand, these represent Boltzmann measures of vectorial-spin systems on fully connected symmetric or bipartite graphs. 
Examples of previously studied physical models that are special cases of the setting considered here would be the Sherrington-Kirkpatrick model @cite_22 , the Hopfield model @cite_40 @cite_8 , and inference (not learning) in the restricted Boltzmann machine @cite_29 @cite_8 @cite_6 . Our work hence provides a unified replica symmetric solution and TAP equations for a generic class of prior distributions and Hamiltonians. | {
"cite_N": [
"@cite_33",
"@cite_22",
"@cite_8",
"@cite_54",
"@cite_29",
"@cite_6",
"@cite_40"
],
"mid": [
"1700527341",
"2022820481",
"2490183153",
"",
"",
"2556341799",
"2128084896"
],
"abstract": [
"We study optimal estimation for sparse principal component analysis when the number of non-zero elements is small but on the same order as the dimension of the data. We employ approximate message passing (AMP) algorithm and its state evolution to analyze what is the information theoretically minimal mean-squared error and the one achieved by AMP in the limit of large sizes. For a special case of rank one and large enough density of non-zeros Deshpande and Montanari [1] proved that AMP is asymptotically optimal. We show that both for low density and for large rank the problem undergoes a series of phase transitions suggesting existence of a region of parameters where estimation is information theoretically possible, but AMP (and presumably every other polynomial algorithm) fails. The analysis of the large rank limit is particularly instructive.",
"We consider an Ising model in which the spins are coupled by infinite-ranged random interactions independently distributed with a Gaussian probability density. Both \"spinglass\" and ferromagnetic phases occur. The competition between the phases and the type of order present in each are studied.",
"Motivated by recent progress in using restricted Boltzmann machines as preprocessing algorithms for deep neural network, we revisit the mean-field equations (belief-propagation and TAP equations) in the best understood such machine, namely the Hopfield model of neural networks, and we explicit how they can be used as iterative message-passing algorithms, providing a fast method to compute the local polarizations of neurons. In the \"retrieval phase\" where neurons polarize in the direction of one memorized pattern, we point out a major difference between the belief propagation and TAP equations: the set of belief propagation equations depends on the pattern which is retrieved, while one can use a unique set of TAP equations. This makes the latter method much better suited for applications in the learning process of restricted Boltzmann machines. In the case where the patterns memorized in the Hopfield model are not independent, but are correlated through a combinatorial structure, we show that the TAP equations have to be modified. This modification can be seen either as an alteration of the reaction term in TAP equations, or, more interestingly, as the consequence of message passing on a graphical model with several hidden layers, where the number of hidden layers depends on the depth of the correlations in the memorized patterns. This layered structure is actually necessary when one deals with more general restricted Boltzmann machines.",
"",
"",
"Extracting automatically the complex set of features composing real high-dimensional data is crucial for achieving high performance in machine--learning tasks. Restricted Boltzmann Machines (RBM) are empirically known to be efficient for this purpose, and to be able to generate distributed and graded representations of the data. We characterize the structural conditions (sparsity of the weights, low effective temperature, nonlinearities in the activation functions of hidden units, and adaptation of fields maintaining the activity in the visible layer) allowing RBM to operate in such a compositional phase. Evidence is provided by the replica analysis of an adequate statistical ensemble of random RBMs and by RBM trained on the handwritten digits dataset MNIST.",
"Abstract Computational properties of use of biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a paralel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low- RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to result we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | We note at this point that methodologically closely related series of work on matrix factorization is concerned with the case of high rank, i.e. when @math @cite_34 @cite_69 @cite_68 . 
While that case also has a set of important applications (among them the learning of overcomplete dictionaries), it is different from the low-rank case considered here. The theory developed for the high-rank case requires the prior distribution to be separable component-wise. The high-rank case also does not present any known output channel universality; the details of the channel enter the resulting state evolution explicitly. Whereas the low-rank case can be viewed as a generalization of a spin model with pairwise interactions, in the graphical model for the high-rank case the interactions involve @math variables. | {
"cite_N": [
"@cite_68",
"@cite_69",
"@cite_34"
],
"mid": [
"1959879694",
"1510028821",
"2088332638"
],
"abstract": [
"We analyze the matrix factorization problem. Given a noisy measurement of a product of two matrices, the problem is to estimate back the original matrices. It arises in many applications, such as dictionary learning, blind matrix calibration, sparse principal component analysis, blind source separation, low rank matrix completion, robust principal component analysis, or factor analysis. It is also important in machine learning: unsupervised representation learning can often be studied through matrix factorization. We use the tools of statistical mechanics—the cavity and replica methods—to analyze the achievability and computational tractability of the inference problems in the setting of Bayes-optimal inference, which amounts to assuming that the two matrices have random-independent elements generated from some known distribution, and this information is available to the inference algorithm. In this setting, we compute the minimal mean-squared-error achievable, in principle, in any computational time, and the error that can be achieved by an efficient approximate message passing algorithm. The computation is based on the asymptotic state-evolution analysis of the algorithm. The performance that our analysis predicts, both in terms of the achieved mean-squared-error and in terms of sample complexity, is extremely promising and motivating for a further development of the algorithm.",
"In this paper, we extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. Here, in Part I of a two-part paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In Part II of the paper, we will discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and we will present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem.",
"We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well, for the calibration problem, for tractable system sizes."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a paralel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low- RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to result we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | No attempt is made at mathematical rigor in the present article. Contrary to the approach traditionally taken in statistics, most of the analysis in this paper is non-rigorous, and rather relies on the accepted assumptions of the replica and cavity methods.
It is worth mentioning, however, that for the case of Bayes-optimal inference, a large part of the results of this paper were proven rigorously in a recent series of works @cite_7 @cite_32 @cite_56 @cite_62 @cite_20 @cite_72 @cite_45 @cite_17 . These proofs include the mutual information (related to the replica free energy) in the Bayes-optimal setting and the corresponding minimum mean-squared error (MMSE), as well as the rigorous establishment that the state evolution indeed describes the asymptotic evolution of the Low-RAMP algorithm. The study outside the Bayes-optimal conditions (without the Nishimori conditions) is more involved. | {
"cite_N": [
"@cite_62",
"@cite_7",
"@cite_32",
"@cite_56",
"@cite_72",
"@cite_45",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2040969041",
"2963072996",
"2964282092",
"2964213728",
"2802812228",
"2508909698",
"2586917466"
],
"abstract": [
"",
"We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors including sparsity and positivity that arise commonly in learning problems. We propose a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations. The method is similar to approximate message passing techniques based on Gaussian approximations of loopy belief propagation that have been used recently in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the estimates from the proposed iterative procedure is described by a simple scalar equivalent model, where the distribution of the estimates is identical to certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed method thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings.",
"We consider a class of approximated message passing (AMP) algorithms and characterize their highdimensional behavior in terms of a suitable state evolution recursion. Our proof applies to Gaussian matrices with independent but not necessarily identically distributed entries. It covers—in particular—the analysis of generalized AMP, introduced by Rangan, and of AMP reconstruction in compressed sensing with spatially coupled sensing matrices. The proof technique builds on that of Bayati & Montanari [2], while simplifying and generalizing several steps.",
"Sparse Principal Component Analysis (PCA) is a dimensionality reduction technique wherein one seeks a lowrank representation of a data matrix with additional sparsity constraints on the obtained representation. We consider two probabilistic formulations of sparse PCA: a spiked Wigner and spiked Wishart (or spiked covariance) model. We analyze an Approximate Message Passing (AMP) algorithm to estimate the underlying signal and show, in the high dimensional limit, that the AMP estimates are information-theoretically optimal. As an immediate corollary, our results demonstrate that the posterior expectation of the underlying signal, which is often intractable to compute, can be obtained using a polynomial-time scheme. Our results also effectively provide a single-letter characterization of the sparse PCA problem.",
"Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available.",
"",
"We develop an information-theoretic view of the stochastic block model, a popular statistical model for the large-scale structure of complex networks. A graph G from such a model is generated by first assigning vertex labels at random from a finite alphabet, and then connecting vertices with edge probabilities depending on the labels of the endpoints. In the case of the symmetric two-group model, we establish an explicit ‘single-letter’ characterization of the per-vertex mutual information between the vertex labels and the graph, when the graph average degree diverges. The explicit expression of the mutual information is intimately related to estimation-theoretic quantities, and -in particular- reveals a phase transition at the critical point for community detection. Below the critical point the per-vertex mutual information is asymptotically the same as if edges were independent of the vertex labels. Correspondingly, no algorithm can estimate the partition better than random guessing. Conversely, above the threshold, the per-vertex mutual information is strictly smaller than the independent-edges upper bound. In this regime there exists a procedure that estimates the vertex labels better than random guessing.",
""
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a paralel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low- RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to result we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | It has become a tradition in related literature @cite_53 to conjecture that the performance of the Low-RAMP algorithm cannot be improved by other polynomial algorithms. 
We do analyze here in detail the cases where Low-RAMP does not achieve the MMSE, and we remark that, since effects of replica symmetry breaking need to be taken into account when evaluating the performance of the best polynomial algorithms, the conjecture of Low-RAMP optimality among polynomial algorithms deserves further detailed investigation. | {
"cite_N": [
"@cite_53"
],
"mid": [
"2151781750"
],
"abstract": [
"Many questions of fundamental interest in today's science can be formulated as inference problems: some partial, or noisy, observations are performed over a set of variables and the goal is to recover, or infer, the values of the variables based on the indirect information contained in the measurements. For such problems, the central scientific questions are: Under what conditions is the information contained in the measurements sufficient for a satisfactory inference to be possible? What are the most efficient algorithms for this task? A growing body of work has shown that often we can understand and locate these fundamental barriers by thinking of them as phase transitions in the sense of statistical physics. Moreover, it turned out that we can use the gained physical insight to develop new promising algorithms. The connection between inference and statistical physics is currently witnessing an impressive renaissance and we review here the current state-of-the-art, with a pedagogical focus on the Ising ..."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a paralel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low- RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to result we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | This section gives a brief summary of our main results and their relation to existing work. 
Approximate Message Passing for Low-Rank matrix estimation (Low-RAMP): In section we derive and detail the approximate message passing algorithm to estimate marginal probabilities of the probability measures ) and ) for a general prior distribution, rank, and Hamiltonian (output channel). We describe various special cases of these equations that arise due to the Nishimori conditions or due to self-averaging. In the physics literature these would be the TAP equations @cite_44 generalized to vectorial spins with general local magnetic fields and a generic type of pairwise interactions. The Low-RAMP equations encompass as special cases the original TAP equations for the Sherrington-Kirkpatrick model, the TAP equations for the Hopfield model @cite_51 @cite_8 , and those for the restricted Boltzmann machine @cite_29 @cite_41 @cite_8 @cite_6 . Within the context of low-rank matrix estimation, the AMP equations were discussed in @cite_7 @cite_73 @cite_56 @cite_33 @cite_54 @cite_64 @cite_76 . Recently, the Low-RAMP algorithm was even generalized to spin variables that are not real vectors but live on compact groups @cite_75 . | {
"cite_N": [
"@cite_64",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_41",
"@cite_75",
"@cite_29",
"@cite_54",
"@cite_6",
"@cite_56",
"@cite_44",
"@cite_76",
"@cite_73",
"@cite_51"
],
"mid": [
"",
"1700527341",
"2040969041",
"2490183153",
"2433138581",
"2536490006",
"",
"",
"2556341799",
"2964282092",
"1990755770",
"2531902758",
"2097538064",
"2061362732"
],
"abstract": [
"",
"We study optimal estimation for sparse principal component analysis when the number of non-zero elements is small but on the same order as the dimension of the data. We employ approximate message passing (AMP) algorithm and its state evolution to analyze what is the information theoretically minimal mean-squared error and the one achieved by AMP in the limit of large sizes. For a special case of rank one and large enough density of non-zeros Deshpande and Montanari [1] proved that AMP is asymptotically optimal. We show that both for low density and for large rank the problem undergoes a series of phase transitions suggesting existence of a region of parameters where estimation is information theoretically possible, but AMP (and presumably every other polynomial algorithm) fails. The analysis of the large rank limit is particularly instructive.",
"We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors including sparsity and positivity that arise commonly in learning problems. We propose a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations. The method is similar to approximate message passing techniques based on Gaussian approximations of loopy belief propagation that have been used recently in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the estimates from the proposed iterative procedure is described by a simple scalar equivalent model, where the distribution of the estimates is identical to certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed method thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings.",
"Motivated by recent progress in using restricted Boltzmann machines as preprocessing algorithms for deep neural network, we revisit the mean-field equations (belief-propagation and TAP equations) in the best understood such machine, namely the Hopfield model of neural networks, and we explicit how they can be used as iterative message-passing algorithms, providing a fast method to compute the local polarizations of neurons. In the \"retrieval phase\" where neurons polarize in the direction of one memorized pattern, we point out a major difference between the belief propagation and TAP equations: the set of belief propagation equations depends on the pattern which is retrieved, while one can use a unique set of TAP equations. This makes the latter method much better suited for applications in the learning process of restricted Boltzmann machines. In the case where the patterns memorized in the Hopfield model are not independent, but are correlated through a combinatorial structure, we show that the TAP equations have to be modified. This modification can be seen either as an alteration of the reaction term in TAP equations, or, more interestingly, as the consequence of message passing on a graphical model with several hidden layers, where the number of hidden layers depends on the depth of the correlations in the memorized patterns. This layered structure is actually necessary when one deals with more general restricted Boltzmann machines.",
"In this work, we consider compressed sensing reconstruction from M measurements of K-sparse structured signals which do not possess a writable correlation model. Assuming that a generative statistical model, such as a Boltzmann machine, can be trained in an unsupervised manner on example signals, we demonstrate how this signal model can be used within a Bayesian framework of signal reconstruction. By deriving a message-passing inference for general distribution restricted Boltzmann machines, we are able to integrate these inferred signal models into approximate message passing for compressed sensing reconstruction. Finally, we show for the MNIST dataset that this approach can be very effective, even for M < K.",
"Various alignment problems arising in cryo-electron microscopy, community detection, time synchronization, computer vision, and other fields fall into a common framework of synchronization problems over compact groups such as Z L, U(1), or SO(3). The goal of such problems is to estimate an unknown vector of group elements given noisy relative observations. We present an efficient iterative algorithm to solve a large class of these problems, allowing for any compact group, with measurements on multiple 'frequency channels' (Fourier modes, or more generally, irreducible representations of the group). Our algorithm is a highly efficient iterative method following the blueprint of approximate message passing (AMP), which has recently arisen as a central technique for inference problems such as structured low-rank estimation and compressed sensing. We augment the standard ideas of AMP with ideas from representation theory so that the algorithm can work with distributions over compact groups. Using standard but non-rigorous methods from statistical physics we analyze the behavior of our algorithm on a Gaussian noise model, identifying phases where the problem is easy, (computationally) hard, and (statistically) impossible. In particular, such evidence predicts that our algorithm is information-theoretically optimal in many cases, and that the remaining cases show evidence of statistical-to-computational gaps.",
"",
"",
"Extracting automatically the complex set of features composing real high-dimensional data is crucial for achieving high performance in machine--learning tasks. Restricted Boltzmann Machines (RBM) are empirically known to be efficient for this purpose, and to be able to generate distributed and graded representations of the data. We characterize the structural conditions (sparsity of the weights, low effective temperature, nonlinearities in the activation functions of hidden units, and adaptation of fields maintaining the activity in the visible layer) allowing RBM to operate in such a compositional phase. Evidence is provided by the replica analysis of an adequate statistical ensemble of random RBMs and by RBM trained on the handwritten digits dataset MNIST.",
"Sparse Principal Component Analysis (PCA) is a dimensionality reduction technique wherein one seeks a lowrank representation of a data matrix with additional sparsity constraints on the obtained representation. We consider two probabilistic formulations of sparse PCA: a spiked Wigner and spiked Wishart (or spiked covariance) model. We analyze an Approximate Message Passing (AMP) algorithm to estimate the underlying signal and show, in the high dimensional limit, that the AMP estimates are information-theoretically optimal. As an immediate corollary, our results demonstrate that the posterior expectation of the underlying signal, which is often intractable to compute, can be obtained using a polynomial-time scheme. Our results also effectively provide a single-letter characterization of the sparse PCA problem.",
"Abstract The Sherrington-Kirkpatrick model of a spin glass is solved by a mean field technique which is probably exact in the limit of infinite range interactions. At and above T c the solution is identical to that obtained by Sherrington and Kirkpatrick (1975) using the n → 0 replica method, but below T c the new result exhibits several differences and remains physical down to T = 0.",
"We consider the problem of Gaussian mixture clustering in the high-dimensional limit where the data consists of m points in n dimensions, n,m → ∞ and α = m/n stays finite. Using exact but non-rigorous methods from statistical physics, we determine the critical value of α and the distance between the clusters at which it becomes information-theoretically possible to reconstruct the membership into clusters better than chance. We also determine the accuracy achievable by the Bayes-optimal estimation algorithm. In particular, we find that when the number of clusters is sufficiently large, r > 4+2√α, there is a gap between the threshold for information-theoretically optimal performance and the threshold at which known algorithms succeed.",
"We study the problem of reconstructing low-rank matrices from their noisy observations. We formulate the problem in the Bayesian framework, which allows us to exploit structural properties of matrices in addition to low-rankedness, such as sparsity. We propose an efficient approximate message passing algorithm, derived from the belief propagation algorithm, to perform the Bayesian inference for matrix reconstruction. We have also successfully applied the proposed algorithm to a clustering problem, by reformulating it as a low-rank matrix reconstruction problem with an additional structural property. Numerical experiments show that the proposed algorithm outperforms Lloyd's K-means algorithm.",
"Abstract For the quantum mechanical long-range random Heisenberg model we derive mean field equations in analogy to the TAP equations. The complete solution requires a self-consistent calculation of the dynamical local susceptibility. We discuss a variational approximation above and below the critical point."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | In physics, message passing equations are always closely linked with the Bethe free energy, whose stationary points are the message passing fixed point equations. In the presence of multiple fixed points, it is the value of the Bethe free energy that decides which of the fixed points is the correct one.
In section and appendix we derive the Bethe free energy on a single instance of the low-rank matrix estimation problem. The form of free energy that we derive has the convenient property of being variational, in the sense that in order to find the fixed point we look for a maximum of the free energy rather than a saddle point. The corresponding free energy for the compressed sensing problem was derived in @cite_74 @cite_59 and can be used to guide the iterations of the Low-RAMP algorithm @cite_66 . | {
"cite_N": [
"@cite_59",
"@cite_66",
"@cite_74"
],
"mid": [
"2952636043",
"1539800233",
"1985715224"
],
"abstract": [
"The estimation of a random vector with independent components passed through a linear transform followed by a componentwise (possibly nonlinear) output map arises in a range of applications. Approximate message passing (AMP) methods, based on Gaussian approximations of loopy belief propagation, have recently attracted considerable attention for such problems. For large random transforms, these methods exhibit fast convergence and admit precise analytic characterizations with testable conditions for optimality, even for certain non-convex problem instances. However, the behavior of AMP under general transforms is not fully understood. In this paper, we consider the generalized AMP (GAMP) algorithm and relate the method to more common optimization techniques. This analysis enables a precise characterization of the GAMP algorithm fixed-points that applies to arbitrary transforms. In particular, we show that the fixed points of the so-called max-sum GAMP algorithm for MAP estimation are critical points of a constrained maximization of the posterior density. The fixed-points of the sum-product GAMP algorithm for estimation of the posterior marginals can be interpreted as critical points of a certain free energy.",
"The generalized approximate message passing (GAMP) algorithm is an efficient method of MAP or approximate-MMSE estimation of x observed from a noisy version of the transform coefficients z = Ax. In fact, for large zero-mean i.i.d sub-Gaussian A, GAMP is characterized by a state evolution whose fixed points, when unique, are optimal. For generic A, however, GAMP may diverge. In this paper, we propose adaptive-damping and mean-removal strategies that aim to prevent divergence. Numerical results demonstrate significantly enhanced robustness to non-zero-mean, rank-deficient, column-correlated, and ill-conditioned A.",
"We consider the variational free energy approach for compressed sensing. We first show that the naïve mean field approach performs remarkably well when coupled with a noise learning procedure. We also notice that it leads to the same equations as those used for iterative thresholding. We then discuss the Bethe free energy and how it corresponds to the fixed points of the approximate message passing algorithm. In both cases, we test numerically the direct optimization of the free energies as a converging sparse-estimation algorithm."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | In section we show that the state evolution with a Gaussian prior can be used to analyse the asymptotic performance of spectral algorithms such as PCA (symmetric case) or SVD (bipartite case) and derive the corresponding spectral mean-squared errors and phase transitions as studied in the random matrix literature @cite_77 .
For a recent closely related discussion see @cite_9 . | {
"cite_N": [
"@cite_77",
"@cite_9"
],
"mid": [
"2066459155",
"2522434885"
],
"abstract": [
"Abstract We compute the limiting distributions of the largest eigenvalue of a complex Gaussian sample covariance matrix when both the number of samples and the number of variables in each sample become large. When all but finitely many, say r, eigenvalues of the covariance matrix are the same, the dependence of the limiting distribution of the largest eigenvalue of the sample covariance matrix on those distinguished r eigenvalues of the covariance matrix is completely characterized in terms of an infinite sequence of new distribution functions that generalize the Tracy-Widom distributions of the random matrix theory. Especially a phase transition phenomenon is observed. Our results also apply to a last passage percolation model and a queuing model.",
"A central problem of random matrix theory is to understand the eigenvalues of spiked random matrix models, in which a prominent eigenvector is planted into a random matrix. These distributions form natural statistical models for principal component analysis (PCA) problems throughout the sciences. Baik, Ben Arous and Peche showed that the spiked Wishart ensemble exhibits a sharp phase transition asymptotically: when the signal strength is above a critical threshold, it is possible to detect the presence of a spike based on the top eigenvalue, and below the threshold the top eigenvalue provides no information. Such results form the basis of our understanding of when PCA can detect a low-rank signal in the presence of noise. However, not all the information about the spike is necessarily contained in the spectrum. We study the fundamental limitations of statistical methods, including non-spectral ones. Our results include: I) For the Gaussian Wigner ensemble, we show that PCA achieves the optimal detection threshold for a variety of benign priors for the spike. We extend previous work on the spherically symmetric and i.i.d. Rademacher priors through an elementary, unified analysis. II) For any non-Gaussian Wigner ensemble, we show that PCA is always suboptimal for detection. However, a variant of PCA achieves the optimal threshold (for benign priors) by pre-transforming the matrix entries according to a carefully designed function. This approach has been stated before, and we give a rigorous and general analysis. III) For both the Gaussian Wishart ensemble and various synchronization problems over groups, we show that inefficient procedures can work below the threshold where PCA succeeds, whereas no known efficient algorithm achieves this. This conjectural gap between what is statistically possible and what can be done efficiently remains open."
]
} |
1701.00858 | 2571114381 | This article is an extended version of previous work of the authors [40, 41] on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models - presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail phase diagrams and phase transitions for the Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods. | In section and appendix we discuss analytical results that we can obtain for the community detection and joint-sparse PCA models in the limit of large rank @math . These large-rank results match the rigorous bounds derived for these problems in @cite_43 . | {
"cite_N": [
"@cite_43"
],
"mid": [
"2485294266"
],
"abstract": [
"We study the problem of detecting a structured, low-rank signal matrix corrupted with additive Gaussian noise. This includes clustering in a Gaussian mixture model, sparse PCA, and submatrix localization. Each of these problems is conjectured to exhibit a sharp information-theoretic threshold, below which the signal is too weak for any algorithm to detect. We derive upper and lower bounds on these thresholds by applying the first and second moment methods to the likelihood ratio between these \"planted models\" and null models where the signal matrix is zero. Our bounds differ by at most a factor of root two when the rank is large (in the clustering and submatrix localization problems, when the number of clusters or blocks is large) or the signal matrix is very sparse. Moreover, our upper bounds show that for each of these problems there is a significant regime where reliable detection is information- theoretically possible but where known algorithms such as PCA fail completely, since the spectrum of the observed matrix is uninformative. This regime is analogous to the conjectured 'hard but detectable' regime for community detection in sparse graphs."
]
} |
1701.00903 | 2583967950 | Complex activity recognition is challenging due to the inherent uncertainty and diversity of performing a complex activity. Normally, each instance of a complex activity has its own configuration of atomic actions and their temporal dependencies. We propose in this paper an atomic action-based Bayesian model that constructs Allen's interval relation networks to characterize complex activities with structural varieties in a probabilistic generative way: By introducing latent variables from the Chinese restaurant process, our approach is able to capture all possible styles of a particular complex activity as a unique set of distributions over atomic actions and relations. We also show that local temporal dependencies can be retained and are globally consistent in the resulting interval network. Moreover, network structure can be learned from empirical data. A new dataset of complex hand activities has been constructed and made publicly available, which is much larger in size than any existing datasets. Empirical evaluations on benchmark datasets as well as our in-house dataset demonstrate the competitiveness of our approach. | Currently, a lot of research focuses on semantic-based complex activity modeling. Many semantic-based models such as context-free grammar (CFG) @cite_26 and Markov logic network (MLN) ( @cite_22 @cite_0 ) are used to represent complex activities, which can handle rich temporal relations. Yet formulae and their weights in these models (e.g. CFG grammars and MLN structures) need to be manually encoded, which could be rather difficult to scale up and is almost impossible for many practical scenarios where temporal relations among activities are intricate.
Although a number of semantic-based approaches have been proposed for learning temporal relations, such as stochastic context-free grammars @cite_18 and Inductive Logic Programming (ILP) @cite_7 , they can only learn formulas that are either true or false, but cannot learn their weights, which hinders them from handling uncertainty. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_0"
],
"mid": [
"2107657559",
"2086001523",
"2151535150",
"1733761130",
"2478582605"
],
"abstract": [
"Automatic detection of dynamic events in video sequences has a variety of applications including visual surveillance and monitoring, video highlight extraction, intelligent transportation systems, video summarization, and many more. Learning an accurate description of the various events in real-world scenes is challenging owing to the limited user-labeled data as well as the large variations in the pattern of the events. Pattern differences arise either due to the nature of the events themselves such as the spatio-temporal events or due to missing or ambiguous data interpretation using computer vision methods. In this work, we introduce a novel method for representing and classifying events in video sequences using reversible context-free grammars. The grammars are learned using a semi-supervised learning method. More concretely, by using the classification entropy as a heuristic cost function, the grammars are iteratively learned using a search method. Experimental results demonstrating the efficacy of the learning algorithm and the event detection method applied to traffic video sequences are presented.",
"This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities based on simpler activities or movements. Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. In the low-level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high-level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm is introduced to cope with imperfect lower-layers. As a result, the system recognizes human activities including 'fighting' and 'assault', which are high-level activities that previous systems had difficulties with. The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.",
"A majority of the approaches to activity recognition in sensor environments are either based on manually constructed rules for recognizing activities or lack the ability to incorporate complex temporal dependencies. Furthermore, in many cases, the rather unrealistic assumption is made that the subject carries out only one activity at a time. In this paper, we describe the use of Markov logic as a declarative framework for recognizing interleaved and concurrent activities incorporating both input from pervasive light-weight sensor technology and common-sense background knowledge. In particular, we assess its ability to learn statistical-temporal models from training data and to combine these models with background knowledge to improve the overall recognition accuracy. To this end, we propose two Markov logic formulations for inferring the foreground activity as well as each activities' start and end times. We evaluate the approach on an established dataset. where it outperforms state-of-the-art algorithms for activity recognition.",
"Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) hold promise, but have not been successfully applied to very large datasets which result from video data. In this paper, we present a novel framework remind (Relational Event Model INDuction) for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning from interpretations setting and using a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over generalization. Furthermore, we also present a type-refining operator and prove that it is optimal. The learned models can be used for recognizing events from previously unseen videos. We also present an extension to the framework by integrating an abduction step that improves the learning performance when there is noise in the input data. The experimental results on several hours of video data from two challenging real world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suitable to real world scenarios.",
"Daily living activity recognition can be exploited to benefit mobile and ubiquitous computing applications. Techniques so far are mature to recognize simple actions. Due to the characteristics of diversity and uncertainty in daily living applications, most existing complex activity recognition approaches have notable limitations. First, graphical model-based approaches still lack sufficient expressive power to model rich temporal relations among activities. Second, it would be rather difficult for graphical model-based approaches to build a unified model for achieving multiple types of tasks. Third, current semantic-based approaches often fail to capture uncertainties. Fourth, formulae in these semantic-based approaches are often manually encoded. Meanwhile, it is impractical to handcraft each formula accurately in daily living scenarios where temporal relations among activities are intricate. To address these issues, we present a probabilistic semantic-based framework that combines Markov logic network with 15 temporal and hierarchical relations to explicitly perform diverse inference tasks of daily living in a unified manner. Advanced pattern mining techniques are introduced to automatically learn the propositional logic rules of intricate relations as well as their weights. Experimental results show that by logical reasoning with the mined temporal dependencies under uncertainty, the proposed model leads to an improved performance, particularly when recognizing complex activities involving the incomplete or incorrect observations of atomic actions. 
Highlights: A probabilistic semantic-based framework is presented to explicitly perform diverse inference tasks. The framework combines Markov logic networks with 15 relations between activities. Advanced pattern mining techniques are introduced to learn intricate temporal rules. Performances can be improved by logical reasoning with the mined rules under uncertainty. Our approach is robust to the incomplete or incorrect observations of intervals."
]
} |
1701.00546 | 2585072743 | Abstract Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed parallel graph processing systems have been proposed, such as Pregel, PowerGraph, GraphLab, and Trinity. However, these systems deal only with static graphs and do not consider the issue of processing evolving and dynamic graphs. In this paper, we are considering the issues of scale and dynamism in the case of graph processing systems. We present bladyg , a graph processing framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of bladyg on top of the akka framework. We experimentally evaluate the performance of the proposed framework by applying it to problems such as distributed k-core decomposition and partitioning of large dynamic graphs. The experimental results show that the performance and scalability of bladyg are satisfying for large-scale dynamic graphs. | Pregel @cite_15 is a computational model for large-scale graph processing problems. In Pregel, message exchanges occur among vertices of the input graph. As shown in Figure , each vertex is associated with a state that controls its activity. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2170616854"
],
"abstract": [
"Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program."
]
} |
1701.00546 | 2585072743 | Abstract Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed parallel graph processing systems have been proposed, such as Pregel, PowerGraph, GraphLab, and Trinity. However, these systems deal only with static graphs and do not consider the issue of processing evolving and dynamic graphs. In this paper, we are considering the issues of scale and dynamism in the case of graph processing systems. We present bladyg , a graph processing framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of bladyg on top of the akka framework. We experimentally evaluate the performance of the proposed framework by applying it to problems such as distributed k-core decomposition and partitioning of large dynamic graphs. The experimental results show that the performance and scalability of bladyg are satisfying for large-scale dynamic graphs. | GraphLab @cite_16 is a graph processing framework that shares the same motivation as Pregel. While Pregel targets Google's large distributed system, GraphLab addresses shared-memory parallel systems, which means that there is more focus on parallel memory access than on the issue of efficient message passing and synchronization. In the programming model of GraphLab, users define an update function that can change all data associated with the scope of that node (its edges or its neighbors). Figure shows the scope of a vertex: an update function called on that vertex will be able to read and write all data in its scope. We note that scopes can overlap, so simultaneously executing two update functions can result in a collision. In this context, GraphLab offers several consistency models in order to allow its users to trade off performance and consistency as appropriate for their computation.
As described in Figure , GraphLab offers a full consistency model, a vertex consistency model or an edge consistency model. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2096544401"
],
"abstract": [
"While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations."
]
} |
1701.00546 | 2585072743 | Abstract Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed parallel graph processing systems have been proposed, such as Pregel, PowerGraph, GraphLab, and Trinity. However, these systems deal only with static graphs and do not consider the issue of processing evolving and dynamic graphs. In this paper, we are considering the issues of scale and dynamism in the case of graph processing systems. We present bladyg , a graph processing framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of bladyg on top of the akka framework. We experimentally evaluate the performance of the proposed framework by applying it to problems such as distributed k-core decomposition and partitioning of large dynamic graphs. The experimental results show that the performance and scalability of bladyg are satisfying for large-scale dynamic graphs. | PowerGraph @cite_3 is an abstraction that exploits the structure of vertex-programs and explicitly factors computation over edges instead of vertices. It uses a greedy approach, processing and assigning each edge before moving to the next. It keeps in memory the current size of each partition and, for each vertex, the set of partitions that contain at least one edge of that vertex. If both endpoints of the current edge are already inside one common partition, the edge will be added to that partition. If they have no partition in common, the node with the most edges still to assign will choose one of its partitions. If only one node is already in a partition, the edge will be assigned to that partition. Otherwise, if both nodes are free, the edge will be assigned to the smallest partition. | {
"cite_N": [
"@cite_3"
],
"mid": [
"78077100"
],
"abstract": [
"Large-scale graph-structured computation is central to tasks ranging from targeted advertising to natural language processing and has led to the development of several graph-parallel abstractions including Pregel and GraphLab. However, the natural graphs commonly found in the real-world have highly skewed power-law degree distributions, which challenge the assumptions made by these abstractions, limiting performance and scalability. In this paper, we characterize the challenges of computation on natural graphs in the context of existing graph-parallel abstractions. We then introduce the PowerGraph abstraction which exploits the internal structure of graph programs to address these challenges. Leveraging the PowerGraph abstraction we introduce a new approach to distributed graph placement and representation that exploits the structure of power-law graphs. We provide a detailed analysis and experimental evaluation comparing PowerGraph to two popular graph-parallel systems. Finally, we describe three different implementation strategies for PowerGraph and discuss their relative merits with empirical evaluations on large-scale real-world problems demonstrating order of magnitude gains."
]
} |
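The greedy edge-placement heuristic summarized in the PowerGraph row above can be sketched in plain Python. This is an illustrative sketch only, not PowerGraph's actual implementation; the function name `greedy_assign` and the tie-breaking by current partition size are assumptions:

```python
def greedy_assign(edges, num_parts):
    """Greedy edge placement in the spirit of the heuristic described above.

    part_of[v] is the set of partitions that already hold an edge of v;
    sizes[p] is the current number of edges assigned to partition p.
    """
    sizes = [0] * num_parts
    part_of = {}
    remaining = {}  # vertex -> number of its edges not yet assigned
    for u, v in edges:
        remaining[u] = remaining.get(u, 0) + 1
        remaining[v] = remaining.get(v, 0) + 1
    assignment = {}
    for u, v in edges:
        pu = part_of.setdefault(u, set())
        pv = part_of.setdefault(v, set())
        common = pu & pv
        if common:
            # both endpoints already share a partition: use it
            p = min(common, key=lambda i: sizes[i])
        elif pu and pv:
            # no common partition: the vertex with more unassigned edges chooses
            chooser = pu if remaining[u] >= remaining[v] else pv
            p = min(chooser, key=lambda i: sizes[i])
        elif pu or pv:
            # only one endpoint is already placed: follow it
            p = min(pu or pv, key=lambda i: sizes[i])
        else:
            # both endpoints are free: pick the smallest partition
            p = min(range(num_parts), key=lambda i: sizes[i])
        assignment[(u, v)] = p
        sizes[p] += 1
        pu.add(p)
        pv.add(p)
        remaining[u] -= 1
        remaining[v] -= 1
    return assignment
```

For example, `greedy_assign([(1, 2), (2, 3), (1, 3)], 2)` keeps all three triangle edges in one partition, since every later edge finds an endpoint already placed there.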
1701.00546 | 2585072743 | Abstract Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed parallel graph processing systems have been proposed, such as Pregel, PowerGraph, GraphLab, and Trinity. However, these systems deal only with static graphs and do not consider the issue of processing evolving and dynamic graphs. In this paper, we are considering the issues of scale and dynamism in the case of graph processing systems. We present bladyg , a graph processing framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of bladyg on top of akka framework. We experimentally evaluate the performance of the proposed framework by applying it to problems such as distributed k-core decomposition and partitioning of large dynamic graphs. The experimental results show that the performance and scalability of bladyg are satisfying for large-scale dynamic graphs. | GraphX @cite_25 is a library provided by Spark @cite_7 , a framework for distributed and parallel programming. Spark introduces Resilient Distributed Datasets (RDDs), which can be split into partitions and kept in memory by the machines of the cluster running the system. These RDDs can then be passed to one of the predefined meta-functions such as map, reduce, filter or join, which process them and return a new RDD. In GraphX, graphs are defined as a pair of specialized RDDs. The first one contains data related to the vertices and the second one contains data related to the edges of the graph. New operations are then defined on these RDDs, allowing users to map vertices' values via user-defined functions, join them with the edge table or external RDDs, or run iterative computations. | {
"cite_N": [
"@cite_25",
"@cite_7"
],
"mid": [
"1982003698",
"2131975293"
],
"abstract": [
"From social networks to targeted advertising, big graphs capture the structure in data and are central to recent advances in machine learning and data mining. Unfortunately, directly applying existing data-parallel tools to graph computation tasks can be cumbersome and inefficient. The need for intuitive, scalable tools for graph computation has lead to the development of new graph-parallel systems (e.g., Pregel, PowerGraph) which are designed to efficiently execute graph algorithms. Unfortunately, these new graph-parallel systems do not address the challenges of graph construction and transformation which are often just as problematic as the subsequent computation. Furthermore, existing graph-parallel systems provide limited fault-tolerance and support for interactive data mining. We introduce GraphX, which combines the advantages of both data-parallel and graph-parallel systems by efficiently expressing graph computation within the Spark data-parallel framework. We leverage new ideas in distributed graph representation to efficiently distribute graphs as tabular data-structures. Similarly, we leverage advances in data-flow systems to exploit in-memory computation and fault-tolerance. We provide powerful new operations to simplify graph construction and transformation. Using these primitives we implement the PowerGraph and Pregel abstractions in less than 20 lines of code. Finally, by exploiting the Scala foundation of Spark, we enable users to interactively load, transform, and compute on massive graphs.",
"We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are motivated by two types of applications that current computing frameworks handle inefficiently: iterative algorithms and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture. We have implemented RDDs in a system called Spark, which we evaluate through a variety of user applications and benchmarks."
]
} |
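The vertex-table/edge-table representation and the map/join operations described in the GraphX row above can be mimicked in a few lines of plain Python. This is a hedged sketch over ordinary lists rather than the real Spark/GraphX API; the helper names `map_vertices` and `join_edges_with_vertices` are made up for illustration:

```python
# A graph as a pair of tables, in the style described above.
vertices = [(1, "a"), (2, "b"), (3, "c")]   # (vertex id, attribute)
edges = [(1, 2, 1.0), (2, 3, 2.0)]          # (src id, dst id, attribute)

def map_vertices(vertices, f):
    """Apply a user-defined function to every vertex attribute."""
    return [(vid, f(vid, attr)) for vid, attr in vertices]

def join_edges_with_vertices(edges, vertices):
    """Join the edge table with the vertex table so that each edge
    carries the attributes of both endpoints (a triplet-style view)."""
    vattr = dict(vertices)
    return [(src, vattr[src], dst, vattr[dst], w) for src, dst, w in edges]

upper = map_vertices(vertices, lambda vid, a: a.upper())
triplets = join_edges_with_vertices(edges, upper)
```

Here `triplets` is `[(1, 'A', 2, 'B', 1.0), (2, 'B', 3, 'C', 2.0)]`; in Spark the same pattern runs partitioned across the cluster instead of over local lists.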
1701.00546 | 2585072743 | Abstract Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed parallel graph processing systems have been proposed, such as Pregel, PowerGraph, GraphLab, and Trinity. However, these systems deal only with static graphs and do not consider the issue of processing evolving and dynamic graphs. In this paper, we are considering the issues of scale and dynamism in the case of graph processing systems. We present bladyg , a graph processing framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of bladyg on top of akka framework. We experimentally evaluate the performance of the proposed framework by applying it to problems such as distributed k-core decomposition and partitioning of large dynamic graphs. The experimental results show that the performance and scalability of bladyg are satisfying for large-scale dynamic graphs. | * Processing of large and dynamic graphs Chronos @cite_20 is an execution and storage engine designed for running in-memory iterative graph computation on evolving graphs. Locality is an important aspect of Chronos, where the in-memory layout of temporal graphs and the scheduling of the iterative computation on temporal and evolving graphs are carefully designed. The design of Chronos further explores the interesting interplay among locality, parallelism, and incremental computation in supporting common mining tasks on temporal graphs. We notice that traditional graph processing frameworks arrange computation around each vertex/edge in a graph, while temporal graph engines, in addition, calculate the result across multiple snapshots.
Chronos chooses to batch operations associated with each vertex (or each edge) across multiple snapshots, instead of batching operations for vertices/edges within a certain snapshot @cite_20 . | {
"cite_N": [
"@cite_20"
],
"mid": [
"2097805736"
],
"abstract": [
"Temporal graphs capture changes in graphs over time and are becoming a subject that attracts increasing interest from the research communities, for example, to understand temporal characteristics of social interactions on a time-evolving social graph. Chronos is a storage and execution engine designed and optimized specifically for running in-memory iterative graph computation on temporal graphs. Locality is at the center of the Chronos design, where the in-memory layout of temporal graphs and the scheduling of the iterative computation on temporal graphs are carefully designed, so that common \"bulk\" operations on temporal graphs are scheduled to maximize the benefit of in-memory data locality. The design of Chronos further explores the interesting interplay among locality, parallelism, and incremental computation in supporting common mining tasks on temporal graphs. The result is a high-performance temporal-graph system that offers up to an order of magnitude speedup for temporal iterative graph mining compared to a straightforward application of existing graph engines on a series of snapshots."
]
} |
1701.00546 | 2585072743 | Abstract Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed parallel graph processing systems have been proposed, such as Pregel, PowerGraph, GraphLab, and Trinity. However, these systems deal only with static graphs and do not consider the issue of processing evolving and dynamic graphs. In this paper, we are considering the issues of scale and dynamism in the case of graph processing systems. We present bladyg , a graph processing framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of bladyg on top of akka framework. We experimentally evaluate the performance of the proposed framework by applying it to problems such as distributed k-core decomposition and partitioning of large dynamic graphs. The experimental results show that the performance and scalability of bladyg are satisfying for large-scale dynamic graphs. | The problem of distributed processing of large dynamic graphs has attracted considerable attention. In this context, several traditional graph operations such as @math -core decomposition and maximal clique computation have been extended to dynamic graphs @cite_13 @cite_18 @cite_17 @cite_24 . While this field of large dynamic graph analysis represents an emerging class of applications, it is not sufficiently addressed by the current graph processing frameworks, and only specific graph operations have been studied in the context of dynamic graphs. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_13",
"@cite_17"
],
"mid": [
"2963054764",
"2071213388",
"2046813095",
"2071864549"
],
"abstract": [
"The k-core decomposition in a graph is a fundamental problem for social network analysis. The problem of k-core decomposition is to calculate the core number for every node in a graph. Previous studies mainly focus on k-core decomposition in a static graph. There exists a linear time algorithm for k-core decomposition in a static graph. However, in many real-world applications such as online social networks and the Internet, the graph typically evolves over time. Under such applications, a key issue is to maintain the core numbers of nodes given the graph changes over time. In this paper, we propose a new efficient algorithm to maintain the core number for every node in a dynamic graph. Our main result is that only certain nodes need to update their core numbers given the graph is changed by inserting deleting an edge. We devise an efficient algorithm to identify and recompute the core numbers of such nodes. The complexity of our algorithm is independent of the graph size. In addition, to further accelerate the algorithm, we develop two pruning strategies by exploiting the lower and upper bounds of the core number. Finally, we conduct extensive experiments over both real-world and synthetic datasets, and the results demonstrate the efficiency of the proposed algorithm.",
"In graph theory, k-core is a key metric used to identify subgraphs of high cohesion, also known as the ‘dense’ regions of a graph. As the real world graphs such as social network graphs grow in size, the contents get richer and the topologies change dynamically, we are challenged not only to materialize k-core subgraphs for one time but also to maintain them in order to keep up with continuous updates. Adding to the challenge is that real world data sets are outgrowing the capacity of a single server and its main memory. These challenges inspired us to propose a new set of distributed algorithms for k-core view construction and maintenance on a horizontally scaling storage and computing platform. Our algorithms execute against the partitioned graph data in parallel and take advantage of k-core properties to aggressively prune unnecessary computation. Experimental evaluation results demonstrated orders of magnitude speedup and advantages of maintaining k-core incrementally and in batch windows over complete reconstruction. Our algorithms thus enable practitioners to create and maintain many k-core views on different topics in rich social network content simultaneously.",
"Maximal cliques are important substructures in graph analysis. Many algorithms for computing maximal cliques have been proposed in the literature, however, most of them are sequential algorithms that cannot scale due to the high complexity of the problem, while existing parallel algorithms for computing maximal cliques are mostly immature and especially suffer from skewed workload. In this paper, we first propose a distributed algorithm built on a share-nothing architecture for computing the set of maximal cliques. We effectively address the problem of skewed workload distribution due to high-degree vertices, which also leads to drastically reduced worst-case time complexity for computing maximal cliques in common real-world graphs. Then, we also devise algorithms to support efficient update maintenance of the set of maximal cliques when the underlying graph is updated. We verify the efficiency of our algorithms for computing and updating the set of maximal cliques with a range of real-world graphs from different application domains.",
"A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-Hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for streaming graph data. In this paper, we propose the first incremental k-core decomposition algorithms for streaming graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have to be updated, and efficiently process this subgraph to update the k-core decomposition. Our results show a significant reduction in run-time compared to non-incremental alternatives. We show the efficiency of our algorithms on different types of real and synthetic graphs, at different scales. For a graph of 16 million vertices, we observe speedups reaching a million times, relative to the non-incremental algorithms."
]
} |
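Several of the abstracts above revolve around k-core decomposition, i.e. computing each vertex's core number. A minimal static (non-incremental) version is the classic peeling algorithm: repeatedly remove a vertex of minimum remaining degree. The sketch below is a simple Python illustration of that baseline, not any of the cited incremental or distributed algorithms:

```python
def core_numbers(adj):
    """Core number of every vertex of an undirected graph, given as an
    adjacency dict {vertex: [neighbours]}, via iterative peeling."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core = {}
    k = 0
    while remaining:
        v = min(remaining, key=deg.__getitem__)  # vertex of min remaining degree
        k = max(k, deg[v])                       # core numbers are non-decreasing
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1                      # peel v away from its neighbours
    return core
```

On a triangle with a pendant vertex, the triangle vertices get core number 2 and the pendant gets 1; the incremental algorithms cited above maintain exactly this map under edge insertions and deletions instead of recomputing it from scratch.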
1701.00295 | 2583585015 | We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M outperforming previous approaches both on 2D and 3D errors. | Recently, Zhao et al. @cite_46 achieved state-of-the-art results by training a simple neural network to recover 3D pose from known 2D joint positions. Although the results on perfect 2D input data are impressive, the inaccuracies in 2D joint estimation are not modeled and the performance of this approach combined with joint detectors is unknown. | {
"cite_N": [
"@cite_46"
],
"mid": [
"2949822910"
],
"abstract": [
"Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction of human faces is given at a reconstruction error < .004, cars at .0022, human bodies at .022, and highly-deformable flags at an error of .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple hours and testing runs at more than 1, 000 frames s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points)."
]
} |
1701.00295 | 2583585015 | We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M outperforming previous approaches both on 2D and 3D errors. | Many earlier works on human pose estimation from a single image relied on discriminatively trained models to learn a direct mapping from image features such as silhouettes, HOG or SIFT, to 3D human poses without passing through 2D landmark estimation @cite_0 @cite_44 @cite_50 @cite_12 @cite_19 . Recent direct approaches make use of deep learning @cite_14 @cite_15 @cite_18 @cite_23 . Regression-based approaches train an end-to-end network to predict 3D joint locations directly from the image @cite_38 @cite_53 @cite_22 @cite_3 . Li et al. @cite_10 model joint dependencies in the CNN via a max-margin formalism, while others @cite_3 impose kinematic constraints by embedding a differentiable kinematic model into the deep learning architecture. Tekin et al. @cite_28 propose a deep regression architecture for structured prediction that combines traditional CNNs for supervised learning with an auto-encoder that implicitly encodes 3D dependencies between body parts. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_53",
"@cite_3",
"@cite_0",
"@cite_44",
"@cite_19",
"@cite_50",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2113325037",
"",
"2293220651",
"",
"2404595106",
"",
"2951206572",
"2123503110",
"2152386463",
"2024935685",
"1541825479",
"",
"",
"2949812103",
"2158866619"
],
"abstract": [
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.",
"",
"In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.",
"",
"Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from image to 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and account for joint dependencies. We demonstrate that our approach outperforms state-of-the-art ones both in terms of structure preservation and prediction accuracy.",
"",
"Learning articulated object pose is inherently difficult because the pose is high dimensional but has many structural constraints. Most existing work do not model such constraints and does not guarantee the geometric validity of their pose estimation, therefore requiring a post-processing to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into the deep neutral network learning for general articulated object pose estimation. The kinematic function is defined on the appropriately parameterized object motion variables. It is differentiable and can be used in the gradient descent based optimization in network training. The prior knowledge on the object geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experiment results on a toy example and the 3D human pose estimation problem. For the latter we achieve state-of-the-art result on Human3.6M dataset.",
"We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6 spl deg are obtained for a variety of walking motions.",
"We aim to infer 3D body pose directly from human silhouettes. Given a visual input (silhouette), the objective is to recover the intrinsic body configuration, recover the viewpoint, reconstruct the input and detect any spatial or temporal outliers. In order to recover intrinsic body configuration (pose) from the visual input (silhouette), we explicitly learn view-based representations of activity manifolds as well as learn mapping functions between such central representations and both the visual input space and the 3D body pose space. The body pose can be recovered in a closed form in two steps by projecting the visual input to the learned representations of the activity manifold, i.e., finding the point on the learned manifold representation corresponding to the visual input, followed by interpolating 3D pose.",
"Latent variable models (LVM), like the shared-GPLVM and the spectral latent variable model, help mitigate over-fitting when learning discriminative methods from small or moderately sized training sets. Nevertheless, existing methods suffer from several problems: (1) complexity; (2) the lack of explicit mappings to and from the latent space; (3) an inability to cope with multi-modality; and (4) the lack of a well-defined density over the latent space. We propose a LVM called the shared kernel information embedding (sKIE). It defines a coherent density over a latent space and multiple input output spaces (e.g., image features and poses), and it is easy to condition on a latent state, or on combinations of the input output states. Learning is quadratic, and it works well on small datasets. With datasets too large to learn a coherent global model, one can use sKIE to learn local online models. sKIE permits missing data during inference, and partially labelled data during learning. We use sKIE for human pose inference.",
"We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.",
"",
"",
"This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.",
"The problem we consider in this paper is to take a single two-dimensional image containing a human figure, locate the joint positions, and use these to estimate the body configuration and pose in three-dimensional space. The basic approach is to store a number of exemplar 2D views of the human body in a variety of different configurations and viewpoints with respect to the camera. On each of these stored views, the locations of the body joints (left elbow, right knee, etc.) are manually marked and labeled for future use. The input image is then matched to each stored view, using the technique of shape context matching in conjunction with a kinematic chain-based deformation model. Assuming that there is a stored view sufficiently similar in configuration and pose, the correspondence process would succeed. The locations of the body joints are then transferred from the exemplar view to the test shape. Given the 2D joint locations, the 3D body configuration and pose are then estimated using an existing algorithm. We can apply this technique to video by treating each frame independently - tracking just becomes repeated recognition. We present results on a variety of data sets"
]
} |
1701.00295 | 2583585015 | We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M outperforming previous approaches both on 2D and 3D errors. | Simo-Serra et al. @cite_52 were among the first to propose an approach that naturally copes with the noisy detections inherent to off-the-shelf body part detectors by modeling their uncertainty and propagating it through 3D shape space while satisfying geometric and kinematic 3D constraints. The work @cite_7 also estimates the location of 2D joints before predicting 3D pose, using appearance and the probable 3D pose of discovered parts in a non-parametric model. Another recent example is Bogo et al. @cite_40 , who fit a detailed statistical 3D body model @cite_59 to 2D joint proposals. | {
"cite_N": [
"@cite_59",
"@cite_40",
"@cite_52",
"@cite_7"
],
"mid": [
"1967554269",
"2483862638",
"2088196373",
"2520324844"
],
"abstract": [
"We present a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. We quantitatively evaluate variants of SMPL using linear or dual-quaternion blend skinning and show that both are more accurate than a Blend-SCAPE model trained on the same data. We also extend SMPL to realistically model dynamic soft-tissue deformations. Because it is based on blend skinning, SMPL is compatible with existing rendering engines and we make it available for research purposes.",
"We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.",
"Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually undistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.",
"We introduce a 3D human pose estimation method from single image, based on a hierarchical Bayesian non-parametric model. The proposed model relies on a representation of the idiosyncratic motion of human body parts, which is captured by a subdivision of the human skeleton joints into groups. A dictionary of motion snapshots for each group is generated. The hierarchy ensures to integrate the visual features within the pose dictionary. Given a query image, the learned dictionary is used to estimate the likelihood of the group pose based on its visual features. The full-body pose is reconstructed taking into account the consistency of the connected group poses. The results show that the proposed approach is able to accurately reconstruct the 3D pose of previously unseen subjects."
]
} |
1701.00295 | 2583585015 | We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M outperforming previous approaches both on 2D and 3D errors. | Zhou et al. @cite_1 tackle the problem of 3D pose estimation for a monocular image sequence, integrating 2D, 3D and temporal information to account for uncertainties in the model and the measurements. Similar to our proposed approach, Zhou et al.'s method @cite_1 does not need synchronized 2D-3D training data; it only needs 2D pose annotations to train the CNN joint regressor and a separate 3D mocap dataset to learn the 3D sparse basis. Unlike our approach, it relies on temporal smoothness for its best performance, and performs poorly on a single image. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2285449971"
],
"abstract": [
"This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset."
]
} |
1701.00295 | 2583585015 | We propose a unified formulation for the problem of 3D human pose estimation from a single raw RGB image that reasons jointly about 2D joint estimation and 3D pose reconstruction to improve both tasks. We take an integrated approach that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN architecture and uses the knowledge of plausible 3D landmark locations to refine the search for better 2D locations. The entire process is trained end-to-end, is extremely efficient and obtains state-of-the-art results on Human3.6M outperforming previous approaches both on 2D and 3D errors. | Finally, Wu et al.'s 3D Interpreter Network @cite_57, a recent approach to estimating the skeletal structure of common objects (chairs, sofas, ...), bears similarities to our method. Although our approaches share common ground in the decoupling of 3D and 2D training data and the use of projection from 3D to improve 2D predictions, the network architectures are very different and, unlike us, they do not carry out a quantitative evaluation on 3D human pose estimation. | {
"cite_N": [
"@cite_57"
],
"mid": [
"2345308174"
],
"abstract": [
"Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information."
]
} |
1701.00117 | 1973619262 | The design of embedded systems is a complex activity that involves many decisions. With the high performance demands of present-day usage scenarios and software, embedded systems often involve energy-hungry state-of-the-art computing units. While the power consumption of computing units receives much attention, the physical properties of software are often ignored. Recently, there has been growing interest in quantifying and modeling the physical footprint of software (e.g. consumed power, generated heat, execution time, etc.), and a component-based approach facilitates methods for describing such properties. Based on these, software architects can make energy-efficient software design decisions. This paper presents power consumption and execution time profiling of a component software that can be allocated on heterogeneous computing units (CPU, GPU, FPGA) of a tracked robot. | There are numerous studies that compare the performance of CPUs, GPUs and FPGAs. Since the results are ambiguous about which platform is best, authors tend to focus on specific use cases, e.g. image processing, basic linear algebra subroutines, etc. One study reports that for image processing (2D filters), a GPU is by far the best platform, followed by a CPU, but only for filters up to a certain size, beyond which an FPGA surpasses both a CPU and a GPU @cite_8. However, other experimental results show that for real-time image processing an FPGA is without doubt the most suitable platform @cite_9. A comparison between a CPU and a GPU showed that the performance gap between these platforms is not different by orders of magnitude, as is often assumed @cite_0. For software architects this means that the current software models must support multiple criteria.
There are several papers that consider software profiling from the perspective of energy consumption @cite_11, and many more from the perspective of execution time @cite_4 @cite_3; however, additional research in this area is required for component-based frameworks. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_11"
],
"mid": [
"2161775046",
"2034601083",
"2127192221",
"1994285239",
"2337766646",
"2005697749"
],
"abstract": [
"Processing speed and energy efficiency are two of the most critical issues for computer systems. This paper presents a systematic approach for profiling the power and performance characteristics of applications targeting heterogeneous multi-core computing platforms. Our approach enables rapid and automated design space exploration involving optimisation of workload distribution for systems with accelerators such as FPGAs and GPUs. We demonstrate that, with minor modification to the design, it is possible to estimate the performance and power-efficiency trade-off to identify an optimized workload distribution. Our approach shows that, for N-body computation, the fastest design, which involves 2 CPU cores, 10 FPGA cores and 40960 GPU threads, is 2 times faster than a design with only FPGAs while achieving better overall energy efficiency.",
"Many applications in image processing have high inherent parallelism. FPGAs have shown very high performance in spite of their low operational frequency by fully extracting the parallelism. In recent microprocessors, it also becomes possible to utilize the parallelism using multi-cores which support improved SIMD instructions, though programmers have to use them explicitly to achieve high performance. Recent GPUs support a large number of cores, and have a potential for high performance in many applications. However, the cores are grouped, and data transfer between the groups is very limited. Programming tools for FPGA, SIMD instructions on CPU and a large number of cores on GPU have been developed, but it is still difficult to achieve high performance on these platforms. In this paper, we compare the performance of FPGA, GPU and CPU using three applications in image processing; two-dimensional filters, stereo-vision and k-means clustering, and make it clear which platform is faster under which conditions.",
"Low-level computer vision algorithms have extreme computational requirements. In this work, we compare two real-time architectures developed using FPGA and GPU devices for the computation of phase-based optical flow, stereo, and local image features (energy, orientation, and phase). The presented approach requires a massive degree of parallelism to achieve real-time performance and allows us to compare FPGA and GPU design strategies and trade-offs in a much more complex scenario than previous contributions. Based on this analysis, we provide suggestions to real-time system designers for selecting the most suitable technology, and for optimizing system development on this platform, for a number of diverse applications.",
"Technology, science, and engineering continue to redefine physical world capabilities. Take mobility of humans, for example. In the 20th century, transportation systems moved us to unimaginable distances, speeds on earth and made us set foot on the Moon. Star Trek popularized teleportation, a fictitious technology that instantly allows us to \"go where no person has gone before.\" Before end of the century, Internet and wireless networking helped to create the parallel \"cyber world,\" virtually \"teleporting\" us great distances to interact with remote objects, people, and places. In the new millennium, our restless society's need for such ground-breaking capabilities in time and space has never been greater. Cyber-physical system (CPS) is a promising new class of systems that deeply embed cyber capabilities in the physical world, either on humans, infrastructure or platforms, to transform interactions with the physical world. Advances in the cyber world such as communications, networking, sensing, computing, storage, and control, as well as in the physical world such as materials, hardware, and renewable \"green\" fuels, are all rapidly converging to realize this class of highly collaborative computational systems that are reliant on sensors and actuators to monitor and effect change.Tomorrow's CPS is expected to enrich cyber-physical interactions by intimately coupling assets and dynamics of the physical and engineered systems with the computing and communications of cyber systems, at grand scales and depths from nanosystems to geographically dispersed systems-of-systems. It must be able to adapt rapidly to anomalies in the environment and embrace the evolution of technologies while still providing critical assertions of performance and other constraints.",
"",
"In this paper we define and evaluate a framework for estimating the energy consumption of Java-based software systems. Our primary objective in devising the framework is to enable an engineer to make informed decisions when adapting a system's architecture, such that the energy consumption on hardware devices with a finite battery life is reduced, and the lifetime of the system's key software services increases. Our framework explicitly takes a component-based perspective, which renders it well suited for a large class of today's distributed, embedded, and pervasive applications. The framework allows the engineer to estimate the software system's energy consumption at system construction-time and refine it at runtime. In a large number of distributed application scenarios, the framework showed very good precision on the whole, giving results that were within 5% (and often less) of the actually measured power losses incurred by executing the software. Our work to date has also highlighted a number of possible enhancements"
]
} |
1701.00185 | 2576754561 | Short text clustering is a challenging problem due to its sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes by using one existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperforms several popular clustering methods when tested on three public short text datasets. | There have been several studies that attempted to overcome the sparseness of short text representation. One way is to expand and enrich the context of data. For example, @cite_32 proposed a method of improving the accuracy of short text clustering by enriching their representation with additional features from Wikipedia, and @cite_9 incorporated semantic knowledge from an ontology into text clustering. However, these works require solid NLP knowledge and still use high-dimensional representations, which may result in a waste of both memory and computation time. Another direction is to map the original features into a reduced space, such as Latent Semantic Analysis (LSA) @cite_49, Laplacian Eigenmaps (LE) @cite_4, and Locality Preserving Indexing (LPI) @cite_11. Some researchers have even explored sophisticated models to cluster short texts.
For example, Yin and Wang @cite_21 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Moreover, some studies combine both of the above directions. For example, @cite_12 proposed a novel framework which enriches the text features by employing machine translation and reduces the original features simultaneously through matrix factorization techniques. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_49",
"@cite_12",
"@cite_11"
],
"mid": [
"2165874743",
"1993691975",
"2061922307",
"2088314245",
"2147152072",
"2135955143",
"2154872931"
],
"abstract": [
"Despite many empirical successes of spectral clustering methods—algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First, there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.",
"Incorporating semantic knowledge from an ontology into document clustering is an important but challenging problem. While numerous methods have been developed, the value of using such an ontology is still not clear. We show in this paper that an ontology can be used to greatly reduce the number of features needed to do document clustering. Our hypothesis is that polysemous and synonymous nouns are both relatively prevalent and fundamentally important for document cluster formation. We show that nouns can be efficiently identified in documents and that this alone provides improved clustering. We next show the importance of the polysemous and synonymous nouns in clustering and develop a unique approach that allows us to measure the information gain in disambiguating these nouns in an unsupervised learning setting. In so doing, we can identify a core subset of semantic features that represent a text corpus. Empirical results show that by using core semantic features for clustering, one can reduce the number of features by 90% or more and still produce clusters that capture the main themes in a text corpus.",
"Short text clustering has become an increasingly important task with the popularity of social media like Twitter, Google+, and Facebook. It is a challenging problem due to its sparse, high-dimensional, and large-volume characteristics. In this paper, we proposed a collapsed Gibbs Sampling algorithm for the Dirichlet Multinomial Mixture model for short text clustering (abbr. to GSDMM). We found that GSDMM can infer the number of clusters automatically with a good balance between the completeness and homogeneity of the clustering results, and is fast to converge. GSDMM can also cope with the sparse and high-dimensional problem of short texts, and can obtain the representative words of each cluster. Our extensive experimental study shows that GSDMM can achieve significantly better performance than three other clustering models.",
"Subscribers to the popular news or blog feeds (RSS/Atom) often face the problem of information overload as these feed sources usually deliver a large number of items periodically. One solution to this problem could be clustering similar items in the feed reader to make the information more manageable for a user. Clustering items at the feed reader end is a challenging task as usually only a small part of the actual article is received through the feed. In this paper, we propose a method of improving the accuracy of clustering short texts by enriching their representation with additional features from Wikipedia. Empirical results indicate that this enriched representation of text items can substantially improve the clustering accuracy when compared to the conventional bag of words representation.",
"A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.",
"Social media websites allow users to exchange short texts such as tweets via microblogs and user status in friendship networks. Their limited length, pervasive abbreviations, and coined acronyms and words exacerbate the problems of synonymy and polysemy, and bring about new challenges to data mining applications such as text clustering and classification. To address these issues, we dissect some potential causes and devise an efficient approach that enriches data representation by employing machine translation to increase the number of features from different languages. Then we propose a novel framework which performs multi-language knowledge integration and feature reduction simultaneously through matrix factorization techniques. The proposed approach is evaluated extensively in terms of effectiveness on two social media datasets from Facebook and Twitter. With its significant performance improvement, we further investigate potential factors that contribute to the improved performance.",
"Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets."
]
} |
1701.00185 | 2576754561 | Short text clustering is a challenging problem due to its sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes by using one existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperforms several popular clustering methods when tested on three public short text datasets. | Recently, there has been a revival of interest in DNNs, and many researchers have concentrated on using Deep Learning to learn features. Hinton and Salakhutdinov @cite_1 use DAE to learn text representation. During the fine-tuning procedure, they use backpropagation to find codes that are good at reconstructing the word-count vector. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2100495367"
],
"abstract": [
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
1701.00185 | 2576754561 | Short text clustering is a challenging problem due to its sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes by using one existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperforms several popular clustering methods when tested on three public short text datasets. | More recently, researchers have proposed using an external corpus to learn a distributed representation for each word, called a word embedding @cite_17, to improve DNN performance on NLP tasks. The Skip-gram and continuous bag-of-words models of Word2vec @cite_34 propose a simple single-layer architecture based on the inner product between two word vectors, and @cite_15 introduce a new model for word representation, called GloVe, which captures the global corpus statistics. | {
"cite_N": [
"@cite_15",
"@cite_34",
"@cite_17"
],
"mid": [
"2250539671",
"2153579005",
"2158139315"
],
"abstract": [
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs"
]
} |
1701.00199 | 2570686563 | Recommendation has become one of the most important components of online services for improving sales records; however, visualization work for online recommendation is still very limited. This paper presents an interactive recommendation approach with the following two components. First, rating records are the most widely used data for online recommendation, but they are often processed in high-dimensional spaces that cannot be easily understood or interacted with. We propose a Latent Semantic Model (LSM) that captures the statistical features of semantic concepts on 2D domains and abstracts user preferences for personal recommendation. Second, we propose an interactive recommendation approach through a storytelling mechanism for promoting the communication between the user and the recommendation system. Our approach emphasizes interactivity, explicit user input, and semantic information conveyance; thus it can be used by general users without any knowledge of recommendation or visualization algorithms. We validate our model with data statistics and demonstrate our approach with case studies from the MovieLens100K dataset. Our approaches of latent semantic analysis and interactive recommendation can also be extended to other network-based visualization applications, including various online recommendation systems. | Different from the topic of visualization recommendation, which suggests suitable visualization formats given data or tasks, we focus on visualization approaches for recommender systems, where graph visualizations are often adopted. For example, @cite_32 used hyperbolic and multi-modal views to visualize a recommendation list. @cite_24 used SVD-like matrix factorization and PCA for global mapping of movie ratings from high dimensions to a two-dimensional space. @cite_16 proposed a task-based and information-based network representation for users to interact with and visualize a recommendation list.
@cite_43 used bipartite graphs and minimum spanning trees to explore and visualize recommendation results of a movie-actor dataset. | {
"cite_N": [
"@cite_24",
"@cite_43",
"@cite_16",
"@cite_32"
],
"mid": [
"2530695122",
"2125542619",
"2136304024",
"2115957127"
],
"abstract": [
"Collaborative Filtering (CF) is the most successful approach to Recommender Systems (RS). In this paper, we suggest methods for global and personalized visualization of CF data. Users and items are first embedded into a high-dimensional latent feature space according to a predictor function particularly designated to conform with visualization requirements. The data is then projected into 2-dimensional space by Principal Component Analysis (PCA) and Curvilinear Component Analysis (CCA). Each projection technique targets a di fferent application, and has its own advantages. PCA places all items on a Global Item Map (GIM) such that the correlation between their latent features is revealed optimally. CCA draws personalized Item Maps (PIMs) representing a small subset of items to a specifi c user. Unlike in GIM, a user is present in PIM and items are placed closer or further to her based on their predicted ratings. The intra-item semantic correlations are inherited from the high-dimensional space as much as possible. The algorithms are tested on three versions of the MovieLens dataset and the Netflix dataset to show they combine good accuracy with satisfactory visual properties. We rely on a few examples to argue our methods can reveal links which are hard to be extracted, even if explicit item features are available.",
"Exploration of graph structures is an important topic in data mining and data visualization. This work presents a novel technique for visualizing neighbourhood and cluster relationships in graphs; we also show how this methodology can be used within the setting of a recommendation system. Our technique works by projecting the original object distances onto two dimensions while carefully retaining the 'backbone' of important distances. Cluster information is also overlayed on the same projected space. A significant advantage of our approach is that it can accommodate both metric and non-metric distance functions. Our methodology is applied to a visual recommender system for movies to allow easy exploration of the actor-movie bipartite graph. The work offers intuitive movie recommendations based on a selected pivot movie and allows the interactive discovery of related movies based on both textual and semantic features.",
"Understanding large, complex networks is important for many critical tasks, including decision making, process optimization, and threat detection. Existing network analysis tools often lack intuitive interfaces to support the exploration of large scale data. We present a visual recommendation system to help guide users during navigation of network data. Collaborative filtering, similarity metrics, and relative importance are used to generate recommendations of potentially significant nodes for users to explore. In addition, graph layout and node visibility are adjusted in real-time to accommodate recommendation display and to reduce visual clutter. Case studies are presented to show how our design can improve network exploration.",
"In this paper, we have developed an interactive system to enable personalized news video recommendation. First, multi-modal information channels (audio, video and closed captions) are seamlessly integrated and synchronized to achieve more reliable news topic detection, and the contextual relationships between the news topics are extracted automatically. Second, topic network and hyperbolic visualization are seamlessly integrated to achieve interactive navigation and exploration of large-scale collections of news videos at the topic level, so that users can have a good global overview of large-scale collections of news videos at the first glance. In such interactive topic network navigation and exploration process, the users' personal background knowledge can be taken into consideration for obtaining the news topics of interest interactively, building up their mental models of news needs precisely and formulating their searches easily by selecting the visible news topics on the screen directly. Our system can further recommend the relevant web news, the new search directions, and the most relevant news videos according to their importance and representativeness scores."
]
} |
1701.00199 | 2570686563 | Recommendation has become one of the most important components of online services for improving sale records, however visualization work for online recommendation is still very limited. This paper presents an interactive recommendation approach with the following two components. First, rating records are the most widely used data for online recommendation, but they are often processed in high-dimensional spaces that can not be easily understood or interacted with. We propose a Latent Semantic Model (LSM) that captures the statistical features of semantic concepts on 2D domains and abstracts user preferences for personal recommendation. Second, we propose an interactive recommendation approach through a storytelling mechanism for promoting the communication between the user and the recommendation system. Our approach emphasizes interactivity, explicit user input, and semantic information convey; thus it can be used by general users without any knowledge of recommendation or visualization algorithms. We validate our model with data statistics and demonstrate our approach with case studies from the MovieLens100K dataset. Our approaches of latent semantic analysis and interactive recommendation can also be extended to other network-based visualization applications, including various online recommendation systems. | Interactive recommendation approaches have also been developed. @cite_2 visualized recommended users in social networks with a node-link diagram and grouped relevant nodes on the recommendation list in parallel layers. presented an interactive recommendation approach by having users choose between two sets of sample items iteratively and extracting latent factors @cite_30 . Recently, @cite_22 presented MyMovieMixer for interactively expressing user preferences over the hybrid recommendation process. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_2"
],
"mid": [
"2004557467",
"",
"1991352880"
],
"abstract": [
"We present an approach to interactive recommending that combines the advantages of algorithmic techniques with the benefits of user-controlled, interactive exploration in a novel manner. The method extracts latent factors from a matrix of user rating data as commonly used in Collaborative Filtering, and generates dialogs in which the user iteratively chooses between two sets of sample items. Samples are chosen by the system for low and high values of each latent factor considered. The method positions the user in the latent factor space with few interaction steps, and finally selects items near the user position as recommendations. In a user study, we compare the system with three alternative approaches including manual search and automatic recommending. The results show significant advantages of our approach over the three competing alternatives in 15 out of 24 possible parameter comparisons, in particular with respect to item fit, interaction effort and user control. The findings corroborate our assumption that the proposed method achieves a good trade-off between automated and interactive functions in recommender systems.",
"",
"We present SmallWorlds, a visual interactive graph-based interface that allows users to specify, refine and build item-preference profiles in a variety of domains. The interface facilitates expressions of taste through simple graph interactions and these preferences are used to compute personalized, fully transparent item recommendations for a target user. Predictions are based on a collaborative analysis of preference data from a user's direct peer group on a social network. We find that in addition to receiving transparent and accurate item recommendations, users also learn a wealth of information about the preferences of their peers through interaction with our visualization. Such information is not easily discoverable in traditional text based interfaces. A detailed analysis of our design choices for visual layout, interaction and prediction techniques is presented. Our evaluations discuss results from a user study in which SmallWorlds was deployed as an interactive recommender system on Facebook."
]
} |
1701.00199 | 2570686563 | Recommendation has become one of the most important components of online services for improving sale records, however visualization work for online recommendation is still very limited. This paper presents an interactive recommendation approach with the following two components. First, rating records are the most widely used data for online recommendation, but they are often processed in high-dimensional spaces that can not be easily understood or interacted with. We propose a Latent Semantic Model (LSM) that captures the statistical features of semantic concepts on 2D domains and abstracts user preferences for personal recommendation. Second, we propose an interactive recommendation approach through a storytelling mechanism for promoting the communication between the user and the recommendation system. Our approach emphasizes interactivity, explicit user input, and semantic information convey; thus it can be used by general users without any knowledge of recommendation or visualization algorithms. We validate our model with data statistics and demonstrate our approach with case studies from the MovieLens100K dataset. Our approaches of latent semantic analysis and interactive recommendation can also be extended to other network-based visualization applications, including various online recommendation systems. | We focus on techniques of narrative structure, which is a key concept in storytelling and narrative visualization. It refers to "a series of events, facts, etc., given in order and with the establishing of connections between them" from the Oxford English Dictionary, and it is often simplified to structures like beginning, middle, and end in visualization systems @cite_33 . Studies from journalism @cite_33 and political messaging and decision-making @cite_34 have been performed to understand useful narrative structures for visualization. | {
"cite_N": [
"@cite_34",
"@cite_33"
],
"mid": [
"2108982845",
"2157900633"
],
"abstract": [
"Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that \"tell a story\" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.",
"Data visualization is regularly promoted for its ability to reveal stories within data, yet these “data stories” differ in important ways from traditional forms of storytelling. Storytellers, especially online journalists, have increasingly been integrating visualizations into their narratives, in some cases allowing the visualization to function in place of a written story. In this paper, we systematically review the design space of this emerging class of visualizations. Drawing on case studies from news media to visualization research, we identify distinct genres of narrative visualization. We characterize these design differences, together with interactivity and messaging, in terms of the balance between the narrative flow intended by the author (imposed by graphical elements and the interface) and story discovery on the part of the reader (often through interactive exploration). Our framework suggests design strategies for narrative visualization, including promising under-explored approaches to journalistic storytelling and educational media."
]
} |
1701.00199 | 2570686563 | Recommendation has become one of the most important components of online services for improving sale records, however visualization work for online recommendation is still very limited. This paper presents an interactive recommendation approach with the following two components. First, rating records are the most widely used data for online recommendation, but they are often processed in high-dimensional spaces that can not be easily understood or interacted with. We propose a Latent Semantic Model (LSM) that captures the statistical features of semantic concepts on 2D domains and abstracts user preferences for personal recommendation. Second, we propose an interactive recommendation approach through a storytelling mechanism for promoting the communication between the user and the recommendation system. Our approach emphasizes interactivity, explicit user input, and semantic information convey; thus it can be used by general users without any knowledge of recommendation or visualization algorithms. We validate our model with data statistics and demonstrate our approach with case studies from the MovieLens100K dataset. Our approaches of latent semantic analysis and interactive recommendation can also be extended to other network-based visualization applications, including various online recommendation systems. | Several interactive or automatic storytelling approaches have been developed for applications ranging from the general visualization process @cite_3 @cite_7 to specific domains. For example, Wohlfart and Hauser @cite_23 used storytelling as a guided and interactive visualization presentation approach for medical visualization. @cite_28 detected geo-temporal patterns and integrated story narration to increase analytic sense-making cohesion in GeoTime. @cite_39 generated automatic animations with narrative structures extracted from event graphs for time-varying scientific visualization.
@cite_17 presented a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. @cite_10 presented a storytelling process with steps involved in finding insights, turning insights into a narrative, and communicating to an audience. Satyanarayan and Heer @cite_18 developed a model of storytelling abstractions and instantiated the model in Ellipsis with a graphical interface for story authoring. presented a narrative visualization system that presents literature review as interactive slides with three levels of narrative structures @cite_31 . @cite_12 generated textual annotations with a temporal layout and comic strip-style data snapshots for visualizing multidimensional and time-varying data. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_28",
"@cite_3",
"@cite_39",
"@cite_23",
"@cite_31",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"1530372876",
"",
"2087438815",
"2059435883",
"2004682330",
"1538591984",
"2516554963",
"2048905912",
"2509421351",
"1980209219"
],
"abstract": [
"Data visualization is now a popular medium for journalistic storytelling. However, current visualization tools either lack support for storytelling or require significant technical expertise. Informed by interviews with journalists, we introduce a model of storytelling abstractions that includes state-based scene structure, dynamic annotations and decoupled coordination of multiple visualization components. We instantiate our model in Ellipsis: a system that combines a domain-specific language DSL for storytelling with a graphical interface for story authoring. User interactions are automatically translated into statements in the Ellipsis DSL. By enabling storytelling without programming, the Ellipsis interface lowers the threshold for authoring narrative visualizations. We evaluate Ellipsis through example applications and user studies with award-winning journalists. Study participants find Ellipsis to be a valuable prototyping tool that can empower journalists in the creation of interactive narratives.",
"",
"A story is a powerful abstraction used by intelligence analysts to conceptualize threats and understand patterns as part of the analytical process. This paper demonstrates a system that detects geo-temporal patterns and integrates story narration to increase analytic sense-making cohesion in GeoTime. The GeoTime geo-temporal event visualization tool was augmented with a story system that uses narratives, hypertext linked visualizations, visual annotations, and pattern detection to create an environment for analytic exploration and communication, thereby assisting the analyst in identifying, extracting, arranging and presenting stories within the data The story system lets analysts operate at the story level with higher-level abstractions of data, such as behaviors and events, while staying connected to the evidence. The story system was developed and evaluated in collaboration with analysts.",
"The paper address creating stories from data fabulas using computer graphics as a narrative medium. This approach aims to build various stories conveying the same fabula from a given dataset..",
"This paper presents a digital storytelling approach that generates automatic animations for time-varying data visualization. Our approach simulates the composition and transition of storytelling techniques and synthesizes animations to describe various event features. Specifically, we analyze information related to a given event and abstract it as an event graph, which represents data features as nodes and event relationships as links. This graph embeds a tree-like hierarchical structure which encodes data features at different scales. Next, narrative structures are built by exploring starting nodes and suitable search strategies in this graph. Different stages of narrative structures are considered in our automatic rendering parameter decision process to generate animations as digital stories. We integrate this animation generation approach into an interactive exploration process of time-varying data, so that more comprehensive information can be provided in a timely fashion. We demonstrate with a storm surge application that our approach allows semantic visualization of time-varying data and easy animation generation for users without special knowledge about the underlying visualization techniques.",
"In this paper we present a novel approach to volume visualization for presentation purposes that improves both the comprehensibility and credibility of the intended visualization message. Therefore, we combine selected aspects from storytelling as well as from interactive volume visualization to create a guided but at the same time interactive visualization presentation approach. To ease the observer's access to a presented visualization result we not only communicate the result itself, but also deliver its creational process in the form of an annotated visualization animation, which we call a visualization story. Additionally, we enable variable means of interactivity during story playback. The story observers may just watch the presentation passively, but they are also allowed to reinvestigate the visualization independently from story guidance, offering the ability to verify, confirm, or even disapprove the presented visualization message. For demonstration purposes, we developed a prototype application that provides tools to author, edit, and watch visualization stories. We demonstrate the potential of our approach on the basis of medical visualization examples.",
"Reading academic paper is a daily task for researchers and graduate students. However, reading effectively can be challenging, particularly for novices in scientific research. For example, when readers are reading the related work section that cites a fair number of references in limited page space, they often need to flip back and forth between the text and the references and may also frequently search elsewhere for more information about the references. This increases the difficulty of understanding a paper. In this paper, we propose a narrative visualization system that helps the reading of academic papers. As a first step, we adopt narrative visualization to present literature review as interactive slides. Specifically, we propose a narrative structure with three levels of granularities that the reader can drill down or roll up freely. The logic flow of a slideshow can be organized based on the paper's presentation or citations. We demonstrate the effectiveness of our system through several case studies and user studies. The results show that the system allows users to quickly track and glance related work, making paper reading more effective and enjoyable.",
"Presenting and communicating insights to an audience-telling a story-is one of the main goals of data exploration. Even though visualization as a storytelling medium has recently begun to gain attention, storytelling is still underexplored in information visualization and little research has been done to help people tell their stories with data. To create a new, more engaging form of storytelling with data, we leverage and extend the narrative storytelling attributes of whiteboard animation with pen and touch interactions. We present SketchStory, a data-enabled digital whiteboard that facilitates the creation of personalized and expressive data charts quickly and easily. SketchStory recognizes a small set of sketch gestures for chart invocation, and automatically completes charts by synthesizing the visuals from the presenter-provided example icon and binding them to the underlying data. Furthermore, SketchStory allows the presenter to move and resize the completed data charts with touch, and filter the underlying data to facilitate interactive exploration. We conducted a controlled experiment for both audiences and presenters to compare SketchStory with a traditional presentation system, Microsoft PowerPoint. Results show that the audience is more engaged by presentations done with SketchStory than PowerPoint. Eighteen out of 24 audience participants preferred SketchStory to PowerPoint. Four out of five presenter participants also favored SketchStory despite the extra effort required for presentation.",
"Visualization is a powerful technique for analysis and communication of complex, multidimensional, and time-varying data. However, it can be difficult to manually synthesize a coherent narrative in a chart or graph due to the quantity of visualized attributes, a variety of salient features, and the awareness required to interpret points of interest (POls). We present Temporal Summary Images (TSIs) as an approach for both exploring this data and creating stories from it. As a visualization, a TSI is composed of three common components: (1) a temporal layout, (2) comic strip-style data snapshots, and (3) textual annotations. To augment user analysis and exploration, we have developed a number of interactive techniques that recommend relevant data features and design choices, including an automatic annotations workflow. As the analysis and visual design processes converge, the resultant image becomes appropriate for data storytelling. For validation, we use a prototype implementation for TSIs to conduct two case studies with large-scale, scientific simulation datasets.",
"Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence."
]
} |
1612.09346 | 2569680626 | In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger. | Two families of approaches explicitly account for rotation invariance or equivariance: 1) those that transform the representation (image or feature maps) and 2) those that rotate the filters. RotEqNet belongs to the latter. Jaderberg et al. @cite_23 propose the Spatial Transformer layer, which learns how to crop and transform a region of the image (or a feature map) before passing it to the next layer. This transforms relevant regions into a canonical form, improving the learning process by reducing geometrical appearance variations in subsequent layers. TI-pooling @cite_20 inputs several rotated versions of the same image to the same CNN and then performs pooling across the different feature vectors at the first fully connected layer.
Such a scheme allows another subsequent fully connected layer to choose among rotated inputs to perform classification. Cheng et al. @cite_3 employ several rotated versions of the input images in every minibatch. Their representations after the first fully connected layer are then encouraged to be similar, forcing the CNN to learn rotation invariance. Henriques et al. @cite_16 warp the images such that the translation equivariance inherent to convolutions is transformed into rotation and scale equivariance. | {
"cite_N": [
"@cite_20",
"@cite_3",
"@cite_23",
"@cite_16"
],
"mid": [
"2340427832",
"2461725797",
"603908379",
"2518995263"
],
"abstract": [
"In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.",
"Thanks to the powerful feature representations obtained through deep convolutional neural network (CNN), the performance of object detection has recently been substantially boosted. Despite the remarkable success, the problems of object rotation, within-class variability, and between-class similarity remain several major challenges. To address these problems, this paper proposes a novel and effective method to learn a rotation-invariant and Fisher discriminative CNN (RIFD-CNN) model. This is achieved by introducing and learning a rotation-invariant layer and a Fisher discriminative layer, respectively, on the basis of the existing high-capacity CNN architectures. Specifically, the rotation-invariant layer is trained by imposing an explicit regularization constraint on the objective function that enforces invariance on the CNN features before and after rotating. The Fisher discriminative layer is trained by imposing the Fisher discrimination criterion on the CNN features so that they have small within-class scatter but large between-class separation. In the experiments, we comprehensively evaluate the proposed method for object detection task on a public available aerial image dataset and the PASCAL VOC 2007 dataset. State-of-the-art results are achieved compared with the existing baseline methods.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"Convolutional Neural Networks (CNNs) are extremely efficient, since they exploit the inherent translation-invariance of natural images. However, translation is just one of a myriad of useful spatial transformations. Can the same efficiency be attained when considering other spatial invariances? Such generalized convolutions have been considered in the past, but at a high computational cost. We present a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy. It consists of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes. With a carefully crafted warp, the resulting architecture can be made equivariant to a wide range of two-parameter spatial transformations. We show encouraging results in realistic scenarios, including the estimation of vehicle poses in the Google Earth dataset (rotation and scale), and face poses in Annotated Facial Landmarks in the Wild (3D rotations under perspective)."
]
} |
1612.09346 | 2569680626 | In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger. | It is worth mentioning that standard data augmentation strategies belong to this first family. They rely on random rotations and flips of the training samples @cite_4 : given abundant training samples and enough model capacity, a CNN might learn that different orientations should score the same by learning equivalent filters at different orientations @cite_9 . Unlike this, RotEqNet is well suited for problems with limited training samples that can profit from reduced model sizes, since the behavior with respect to rotations is hardcoded and it does not need to be learned. | {
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"1912570122",
"2156163116"
],
"abstract": [
"Despite the importance of image representations such as histograms of oriented gradients and deep Convolutional Neural Networks (CNN), our theoretical understanding of them remains limited. Aiming at filling this gap, we investigate three key mathematical properties of representations: equivariance, invariance, and equivalence. Equivariance studies how transformations of the input image are encoded by the representation, invariance being a special case where a transformation has no effect. Equivalence studies whether two representations, for example two different parametrisations of a CNN, capture the same visual information or not. A number of methods to establish these properties empirically are proposed, including introducing transformation and stitching layers in CNNs. These methods are then applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved. While the focus of the paper is theoretical, direct applications to structured-output regression are demonstrated too.",
"Neural networks are a powerful technology for classification of visual inputs arising from documents. However, there is a confusing plethora of different neural network methods that are used in the literature and in industry. This paper describes a set of concrete best practices that document analysis researchers can use to get good results with neural networks. The most important practice is getting a training set as large as possible: we expand the training set by adding a new form of distorted data. The next most important practice is that convolutional neural networks are better suited for visual document tasks than fully connected networks. We propose that a simple \"do-it-yourself\" implementation of convolution with a flexible architecture is suitable for many visual document problems. This simple convolutional neural network does not require complex methods, such as momentum, weight decay, structure-dependent learning rates, averaging layers, tangent prop, or even finely-tuning the architecture. The end result is a very simple yet general architecture which can yield state-of-the-art performance for document analysis. We illustrate our claims on the MNIST set of English digit images."
]
} |
1612.09322 | 2569120202 | Logo detection in unconstrained images is challenging, particularly when only very sparse labelled training images are accessible due to high labelling costs. In this work, we describe a model training image synthesising method capable of improving significantly logo detection performance when only a handful of (e.g., 10) labelled training images captured in realistic context are available, avoiding extensive manual labelling costs. Specifically, we design a novel algorithm for generating Synthetic Context Logo (SCL) training images to increase model robustness against unknown background clutters, resulting in superior logo detection performance. For benchmarking model performance, we introduce a new logo detection dataset TopLogo-10 collected from top 10 most popular clothing wearable brandname logos captured in rich visual context. Extensive comparisons show the advantages of our proposed SCL model over the state-of-the-art alternatives for logo detection using two real-world logo benchmark datasets: FlickrLogo-32 and our new TopLogo-10. | Logo Detection. Most existing approaches to logo detection rely on hand-crafted features, e.g., HOG, SIFT, colour histogram, edge @cite_27 @cite_5 @cite_32 @cite_17 @cite_9 @cite_34 @cite_24 . They are limited in obtaining more expressive representation and model robustness for recognising a large number of different logos. One reason is due to the unavailability of sufficiently large datasets required for exploring deep learning a more discriminative representation. For example, among all publicly available logo datasets, the most common FlickrLogos-32 dataset @cite_5 contains @math logo classes each with only @math images and in total @math logo objects, whilst BelgaLogos @cite_16 has @math logo images from @math logo classes with bounding box (bbox) location labelled (Table ). | {
"cite_N": [
"@cite_9",
"@cite_32",
"@cite_16",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_34",
"@cite_17"
],
"mid": [
"1977805283",
"2044557744",
"2031039904",
"",
"",
"2003813371",
"2011786013",
"2040335539"
],
"abstract": [
"In this paper we address the challenging problem of logo recognition. As logos appear in various scales, colors and with illuminations coming from source lights of various intensities, positions or colors, their recognition is a tedious task. The typical approach stands within Bag-of-Words (BoW) framework with SIFT (Scale Invariant Feature Transform) based features. To increase the accuracy, we rely on the recently introduced feature of the complete rank transform, which we extend for a multi-scale description. The system's performance is verified on the FlickrLogos-32 database.",
"Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches.",
"This paper presents a new content-based retrieval framework applied to logo retrieval in large natural image collections. The first contribution is a new challenging dataset, called BelgaLogos, which was created in collaboration with professionals of a press agency, in order to evaluate logo retrieval technologies in real-world scenarios. The second and main contribution is a new visual query expansion method using an a contrario thresholding strategy in order to improve the accuracy of expanded query images. Whereas previous methods based on the same paradigm used a purely hand tuned fixed threshold, we provide a fully adaptive method enhancing both genericity and effectiveness. This new technique is evaluated on both OxfordBuilding dataset and our new BelgaLogos dataset.",
"",
"",
"We propose a scalable logo recognition approach that extends the common bag-of-words model and incorporates local geometry in the indexing process. Given a query image and a large logo database, the goal is to recognize the logo contained in the query, if any. We locally group features in triples using multi-scale Delaunay triangulation and represent triangles by signatures capturing both visual appearance and local geometry. Each class is represented by the union of such signatures over all instances in the class. We see large scale recognition as a sub-linear search problem where signatures of the query image are looked up in an inverted index structure of the class models. We evaluate our approach on a large-scale logo recognition dataset with more than four thousand classes.",
"Logos are specially designed marks that identify goods, services, and organizations using distinguished characters, graphs, signals, and colors. Identifying logos can facilitate scene understanding, intelligent navigation, and object recognition. Although numerous logo recognition methods have been proposed for printed logos, a few methods have been specifically designed for logos in photos. Furthermore, most recognition methods use codebook-based approaches for the logos in photos. A codebook-based method is concerned with the generation of visual words for all the logo models. When new logos are added, the codebook reconstruction is required if effectiveness is a crucial factor. Moreover, logo detection in natural scenes is difficult because of perspective tilt and non-rigid deformation. Therefore, this study develops an extendable, but discriminating, model-based logo detection method. The proposed logo detection method is based on a support vector machine (SVM) using edge-based histograms of oriented gradient (HOGE) as features through multi-scale sliding window scanning. Thereafter, anti-distortion affine scale invariant feature transform (ASIFT) is used for logo verification with constraints on the ASIFT matching pairs and neighbors. The experimental results using the public Flickr-Logo database confirm that the proposed method has a higher retrieval and precision accuracy compared to existing model-based methods.",
"We present a scalable logo recognition technique based on feature bundling. Individual local features are aggregated with features from their spatial neighborhood into bundles. These bundles carry more information about the image content than single visual words. The recognition of logos in novel images is then performed by querying a database of reference images. We further propose a novel WGC-constrained RANSAC and a technique that boosts recall for object retrieval by synthesizing images from original query or reference images. We demonstrate the benefits of these techniques for both small object retrieval and logo recognition. Our logo recognition system clearly outperforms the current state-of-the-art with a recall of 83 at a precision of 99 ."
]
} |
1612.09322 | 2569120202 | Logo detection in unconstrained images is challenging, particularly when only very sparse labelled training images are accessible due to high labelling costs. In this work, we describe a model training image synthesising method capable of improving significantly logo detection performance when only a handful of (e.g., 10) labelled training images captured in realistic context are available, avoiding extensive manual labelling costs. Specifically, we design a novel algorithm for generating Synthetic Context Logo (SCL) training images to increase model robustness against unknown background clutters, resulting in superior logo detection performance. For benchmarking model performance, we introduce a new logo detection dataset TopLogo-10 collected from top 10 most popular clothing wearable brandname logos captured in rich visual context. Extensive comparisons show the advantages of our proposed SCL model over the state-of-the-art alternatives for logo detection using two real-world logo benchmark datasets: FlickrLogo-32 and our new TopLogo-10. | Nevertheless, a few deep learning based logo detection models have been reported recently. @cite_10 applied the Fast R-CNN model @cite_29 for logo detection, which inevitably suffers from the training data scarcity challenge. To facilitate deep learning logo detection, @cite_12 built a large scale dataset called LOGO-Net by exhaustively collecting images from online retailer websites and then manually labelling them. This requires a huge amount of construction effort and moreover, LOGO-Net is inaccessible publicly. In contrast to all existing attempts above, we explore the potentials for learning a deep logo detection model by synthesising a large scale sized training data to address the sparse data annotation problem without additional human labelling cost. 
Compared to @cite_12 , our method is much more cost-effective and scalable for logo variations in diverse visual context, e.g., accurate logo annotation against diverse visual scene context can be rapidly generated without any manual labelling, and potentially also for generalising to a large number of new logo classes with minimal labelling. | {
"cite_N": [
"@cite_29",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"2219856611",
"2103577800"
],
"abstract": [
"",
"Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto's service for marketers to track their brands in user-generated images, and LogoGrab's mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-art accuracy on a popular logo recognition dataset.",
"Logo detection from images has many applications, particularly for brand recognition and intellectual property protection. Most existing studies for logo recognition and detection are based on small-scale datasets which are not comprehensive enough when exploring emerging deep learning techniques. In this paper, we introduce \"LOGO-Net\", a large-scale logo image database for logo detection and brand recognition from real-world product images. To facilitate research, LOGO-Net has two datasets: (i)\"logos-18\" consists of 18 logo classes, 10 brands, and 16,043 logo objects, and (ii) \"logos-160\" consists of 160 logo classes, 100 brands, and 130,608 logo objects. We describe the ideas and challenges for constructing such a large-scale database. Another key contribution of this work is to apply emerging deep learning techniques for logo detection and brand recognition tasks, and conduct extensive experiments by exploring several state-of-the-art deep region-based convolutional networks techniques for object detection tasks. The LOGO-net will be released at this http URL"
]
} |
1612.09368 | 2951486203 | This paper initiates the studies of parallel algorithms for core maintenance in dynamic graphs. The core number is a fundamental index reflecting the cohesiveness of a graph, which is widely used in large-scale graph analytics. The core maintenance problem requires updating the core numbers of vertices after a set of edges and vertices are inserted into or deleted from the graph. We investigate the parallelism in the core update process when multiple edges and vertices are inserted or deleted. Specifically, we discover a structure called superior edge set, the insertion or deletion of edges in which can be processed in parallel. Based on the structure of superior edge set, efficient parallel algorithms are then devised for incremental and decremental core maintenance respectively. To the best of our knowledge, the proposed algorithms are the first parallel ones for the fundamental core maintenance problem. The algorithms show a significant speedup in the processing time compared with previous results that sequentially handle edge and vertex insertions/deletions. Finally, extensive experiments are conducted on different types of real-world and synthetic datasets, and the results illustrate the efficiency, stability and scalability of the proposed algorithms. | In static graphs, the core decomposition problem has been extensively studied. The state-of-the-art algorithm was given in @cite_6 , the runtime of which is linear in the number of edges. In @cite_25 , an external-memory algorithm was proposed when the graph is too large to hold in memory. Core decomposition in the distributed setting was studied in @cite_12 . The above three algorithms were compared in @cite_13 under the GraphChi and WebGraph models. Parallel core decomposition was studied in @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_6",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"1676999057",
"2291620117",
"2142599626",
"1971717360"
],
"abstract": [
"",
"The structure of large networks can be revealed by partitioning them into smaller parts, which are easier to handle. One such decomposition is based on k-cores, proposed in 1983 by Seidman. In the paper an efficient, O(m) (m is the number of lines) algorithm for determining the core decomposition of a given simple network is presented. An application on the authors collaboration network in computational geometry is presented.",
"Studying the topology of a network is critical to inferring underlying dynamics such as tolerance to failure, group behavior and spreading patterns. k-core decomposition is a well-established metric which partitions a graph into layers from external to more central vertices. In this paper we aim to explore whether k-core decomposition of large networks can be computed using a consumer-grade PC. We feature implementations of the \"vertex-centric\" distributed protocol introduced by Montresor, De Pellegrini and Miorandi on GraphChi and Webgraph. Also, we present an accurate implementation of the Batagelj and Zaversnik algorithm for k-core decomposition in Webgraph. With our implementations, we show that we can efficiently handle networks of billions of edges using a single consumer-level machine within reasonable time and can produce excellent approximations in only a fraction of the execution time. To the best of our knowledge, our biggest graphs are considerably larger than the graphs considered in the literature. Next, we present an optimized implementation of an external-memory algorithm (EMcore) by Cheng, Ke, Chu, and Ozsu. We show that this algorithm also performs well for large datasets, however, it cannot predict whether a given memory budget is sufficient for a new dataset. We present a thorough analysis of all algorithms concluding that it is viable to compute k-core decomposition for large networks in a consumer-grade PC.",
"The k-core of a graph is the largest subgraph in which every vertex is connected to at least k other vertices within the subgraph. Core decomposition finds the k-core of the graph for every possible k. Past studies have shown important applications of core decomposition such as in the study of the properties of large networks (e.g., sustainability, connectivity, centrality, etc.), for solving NP-hard problems efficiently in real networks (e.g., maximum clique finding, densest subgraph approximation, etc.), and for large-scale network fingerprinting and visualization. The k-core is a well accepted concept partly because there exists a simple and efficient algorithm for core decomposition, by recursively removing the lowest degree vertices and their incident edges. However, this algorithm requires random access to the graph and hence assumes the entire graph can be kept in main memory. Nevertheless, real-world networks such as online social networks have become exceedingly large in recent years and still keep growing at a steady rate. In this paper, we propose the first external-memory algorithm for core decomposition in massive graphs. When the memory is large enough to hold the graph, our algorithm achieves comparable performance as the in-memory algorithm. When the graph is too large to be kept in the memory, our algorithm requires only O(k max ) scans of the graph, where k max is the largest core number of the graph. We demonstrate the efficiency of our algorithm on real networks with up to 52.9 million vertices and 1.65 billion edges.",
"Several novel metrics have been proposed in recent literature in order to study the relative importance of nodes in complex networks. Among those, k-coreness has found a number of applications in areas as diverse as sociology, proteinomics, graph visualization, and distributed system analysis and design. This paper proposes new distributed algorithms for the computation of the k-coreness of a network, a process also known as k-core decomposition. This technique 1) allows the decomposition, over a set of connected machines, of very large graphs, when size does not allow storing and processing them on a single host, and 2) enables the runtime computation of k-cores in “live” distributed systems. Lower bounds on the algorithms complexity are given, and an exhaustive experimental analysis on real-world data sets is provided."
]
} |
1612.09548 | 2567890977 | Appearance variations result in many difficulties in face image analysis. To deal with this challenge, we present a Unified Tensor-based Active Appearance Model (UT-AAM) for jointly modelling the geometry and texture information of 2D faces. For each type of face information, namely shape and texture, we construct a unified tensor model capturing all relevant appearance variations. This contrasts with the variation-specific models of the classical tensor AAM. To achieve the unification across pose variations, a strategy for dealing with self-occluded faces is proposed to obtain consistent shape and texture representations of pose-varied faces. In addition, our UT-AAM is capable of constructing the model from an incomplete training dataset, using tensor completion methods. Last, we use an effective cascaded-regression-based method for UT-AAM fitting. With these advancements, the utility of UT-AAM in practice is considerably enhanced. As an example, we demonstrate the improvements in training facial landmark detectors through the use of UT-AAM to synthesise a large number of virtual samples. Experimental results obtained on a number of well-known face datasets demonstrate the merits of the proposed approach. | To obtain the geometry and texture information of 2D faces, a variety of methods have been developed during the past decades, Active Shape Model (ASM) @cite_28 , Active Appearance Model (AAM) @cite_17 @cite_62 , Constrained Local Model (CLM) @cite_5 @cite_29 and Cascaded Regression (CR-) based facial landmark detection methods @cite_57 @cite_58 @cite_68 @cite_0 @cite_8 . Among these algorithms, AAM is capable of jointly modelling the shape and texture information of faces. ASM, CLM and CR-based approaches are mainly used for obtaining the shape information conveyed by facial landmarks. However, fitting AAM to 2D faces is non-trivial, especially for faces exhibiting a wide range of appearance variations. 
The developments of AAM addressing this issue can be divided into two categories. In the first category, the aim is to improve the structure of the underlying AAM models for better representation of shape or texture information, which is also expected to benefit the subsequent model fitting phase. The second category focuses on developing fitting algorithms that generalise well for unseen faces with higher accuracy and less computational cost. | {
"cite_N": [
"@cite_62",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_57",
"@cite_68",
"@cite_0",
"@cite_5",
"@cite_58",
"@cite_17"
],
"mid": [
"2152826865",
"",
"2038952578",
"2163998463",
"2136000821",
"1990937109",
"2554268477",
"1977821862",
"2157285372",
""
],
"abstract": [
"We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.",
"",
"Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.",
"We present an efficient and robust method of locating a set of feature points in an object of interest. From a training set we construct a joint model of the appearance of each feature together with their relative positions. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Models (AAM) [T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, in: Proceedings of the 5th European Conference on Computer Vision 1998, vol. 2, Freiburg, Germany, 1998.]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to a wide range of data sets, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on photographs of human faces, magnetic resonance (MR) images of the brain and a set of dental panoramic tomograms. We also show improved tracking performance on a challenging set of in car video sequences.",
"We present a fast and accurate algorithm for computing the 2D pose of objects in images called cascaded pose regression (CPR). CPR progressively refines a loosely specified initial guess, where each refinement is carried out by a different regressor. Each regressor performs simple image measurements that are dependent on the output of the previous regressors; the entire system is automatically learned from human annotated training examples. CPR is not restricted to rigid transformations: ‘pose’ is any parameterized variation of the object's appearance such as the degrees of freedom of deformable and articulated objects. We compare CPR against both standard regression techniques and human performance (computed from redundant human annotations). Experiments on three diverse datasets (mice, faces, fish) suggest CPR is fast (2–3ms per pose estimate), accurate (approaching human performance), and easy to train from small amounts of labeled data.",
"We present a very efficient, highly accurate, \"Explicit Shape Regression\" approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 min for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.",
"We present a new Cascaded Shape Regression (CSR) architecture, namely Dynamic Attention-Controlled CSR (DAC-CSR), for robust facial landmark detection on unconstrained faces. Our DAC-CSR divides facial landmark detection into three cascaded sub-tasks: face bounding box refinement, general CSR and attention-controlled CSR. The first two stages refine initial face bounding boxes and output intermediate facial landmarks. Then, an online dynamic model selection method is used to choose appropriate domain-specific CSRs for further landmark refinement. The key innovation of our DAC-CSR is the fault-tolerant mechanism, using fuzzy set sample weighting, for attention-controlled domain-specific model training. Moreover, we advocate data augmentation with a simple but effective 2D profile face generator, and context-aware feature extraction for better facial feature representation. Experimental results obtained on challenging datasets demonstrate the merits of our DAC-CSR over the state-of-the-art methods.",
"We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Model due to However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to human faces, our constrained local model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.",
"Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface.",
""
]
} |
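The SDM abstract in the row above describes learning descent directions from data instead of computing Jacobians or Hessians. The following is a hedged toy sketch of that idea (made-up feature map and data, not the paper's implementation): a linear map from feature residuals to parameter updates is learned by least squares and then applied iteratively at test time.

```python
import numpy as np

# Toy supervised-descent-style sketch (assumed setup, not the paper's code):
# learn a linear map R from feature residuals to parameter updates by least
# squares, then apply it iteratively at test time -- no Jacobian or Hessian
# of the feature extractor is ever computed.
rng = np.random.default_rng(0)

def features(x):
    # Stand-in for a non-differentiable feature extractor (e.g. SIFT/HOG).
    return np.tanh(x) + 0.05 * x

x_true = np.array([0.5, -0.3])       # parameters we want to recover
phi_true = features(x_true)          # features at the correct parameters

# Training: sample perturbed starting points and regress the ideal
# parameter corrections onto the observed feature residuals.
starts = x_true + rng.normal(scale=0.5, size=(200, 2))
residuals = np.stack([features(x) - phi_true for x in starts])
corrections = x_true - starts
R, *_ = np.linalg.lstsq(residuals, corrections, rcond=None)

# Testing: repeatedly apply the single learned descent map from a new start
# (the paper learns a cascade of such maps, one per iteration).
x = np.array([1.0, 0.4])
for _ in range(10):
    x = x + (features(x) - phi_true) @ R
```

On this smooth toy problem the learned map behaves like an averaged quasi-Newton step, so the iterate converges to `x_true` without any analytic derivatives.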
1612.09548 | 2567890977 | Appearance variations result in many difficulties in face image analysis. To deal with this challenge, we present a Unified Tensor-based Active Appearance Model (UT-AAM) for jointly modelling the geometry and texture information of 2D faces. For each type of face information, namely shape and texture, we construct a unified tensor model capturing all relevant appearance variations. This contrasts with the variation-specific models of the classical tensor AAM. To achieve the unification across pose variations, a strategy for dealing with self-occluded faces is proposed to obtain consistent shape and texture representations of pose-varied faces. In addition, our UT-AAM is capable of constructing the model from an incomplete training dataset, using tensor completion methods. Last, we use an effective cascaded-regression-based method for UT-AAM fitting. With these advancements, the utility of UT-AAM in practice is considerably enhanced. As an example, we demonstrate the improvements in training facial landmark detectors through the use of UT-AAM to synthesise a large number of virtual samples. Experimental results obtained on a number of well-known face datasets demonstrate the merits of the proposed approach. | A generative AAM has two PCA-based models for shape and texture, respectively. AAM is capable of generating arbitrary face instances by adjusting model parameters. The texture model of an AAM is usually constructed from raw pixel intensities. Hence an AAM fitting algorithm based on the optimisation of a cost function using texture information is sensitive to appearance variations. To eliminate this dependence, the use of variation-invariant features has been suggested, such as multi-band Value-Hue-Edge @cite_12 , image gradients @cite_60 , Haar-Like features @cite_70 and image gradient orientations @cite_22 @cite_59 . 
In addition, local features have been successfully used to fit a shape model to faces, such as local profiles in ASM and local patches in CLM. More recently, variation-invariant local descriptors have been very popular in CR-based facial landmark detection. To extract local features, we usually apply a local descriptor, such as HOG @cite_18 @cite_38 @cite_35 , SIFT @cite_58 , local pixel difference @cite_68 @cite_20 or Sparse Auto-Encoder @cite_49 @cite_19 , in the neighbourhood of facial landmarks. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_35",
"@cite_22",
"@cite_60",
"@cite_70",
"@cite_68",
"@cite_19",
"@cite_59",
"@cite_49",
"@cite_58",
"@cite_20",
"@cite_12"
],
"mid": [
"1487748529",
"",
"1832881114",
"1976079775",
"2151426910",
"2149918784",
"1990937109",
"",
"1604643028",
"2015229219",
"2157285372",
"1998294030",
"2097046107"
],
"abstract": [
"In this paper, we propose a novel fitting method that uses local image features to fit a 3D Morphable Face Model to 2D images. To overcome the obstacle of optimising a cost function that contains a non-differentiable feature extraction operator, we use a learning-based cascaded regression method that learns the gradient direction from data. The method allows to simultaneously solve for shape and pose parameters. Our method is thoroughly evaluated on Morphable Model generated data and first results on real data are presented. Compared to traditional fitting methods, which use simple raw features like pixel colour or edge maps, local features have been shown to be much more robust against variations in imaging conditions. Our approach is unique in that we are the first to use local features to fit a 3D Morphable Model. Because of the speed of our method, it is applicable for real-time applications. Our cascaded regression framework is available as an open source library at github.com/patrikhuber/superviseddescent.",
"",
"A large amount of training data is usually crucial for successful supervised learning. However, the task of providing training samples is often time-consuming, involving a considerable amount of tedious manual work. In addition, the amount of training data available is often limited. As an alternative, in this paper, we discuss how best to augment the available data for the application of automatic facial landmark detection. We propose the use of a 3D morphable face model to generate synthesized faces for a regression-based detector training. Benefiting from the large synthetic training data, the learned detector is shown to exhibit a better capability to detect the landmarks of a face with pose variations. Furthermore, the synthesized training data set provides accurate and consistent landmarks automatically as compared to the landmarks annotated manually, especially for occluded facial parts. The synthetic data and real data are from different domains; hence the detector trained using only synthesized faces does not generalize well to real faces. To deal with this problem, we propose a cascaded collaborative regression algorithm, which generates a cascaded shape updater that has the ability to overcome the difficulties caused by pose variations, as well as achieving better accuracy when applied to real faces. The training is based on a mix of synthetic and real image data with the mixing controlled by a dynamic mixture weighting schedule. Initially, the training uses heavily the synthetic data, as this can model the gross variations between the various poses. As the training proceeds, progressively more of the natural images are incorporated, as these can model finer detail. To improve the performance of the proposed algorithm further, we designed a dynamic multi-scale local feature extraction method, which captures more informative local features for detector training. 
An extensive evaluation on both controlled and uncontrolled face data sets demonstrates the merit of the proposed algorithm.",
"We present Active Orientation Models (AOMs), generative models of facial shape and appearance, which extend the well-known paradigm of Active Appearance Models (AAMs) for the case of generic face alignment under unconstrained conditions. Robustness stems from the fact that the proposed AOMs employ a statistically robust appearance model based on the principal components of image gradient orientations. We show that when incorporated within standard optimization frameworks for AAM learning and fitting, this kernel Principal Component Analysis results in robust algorithms for model fitting. At the same time, the resulting optimization problems maintain the same computational cost. As a result, the main similarity of AOMs with AAMs is the computational complexity. In particular, the project-out version of AOMs is as computationally efficient as the standard project-out inverse compositional algorithm, which is admittedly one of the fastest algorithms for fitting AAMs. We verify experimentally that: 1) AOMs generalize well to unseen variations and 2) outperform all other state-of-the-art AAM methods considered by a large margin. This performance improvement brings AOMs at least in par with other contemporary methods for face alignment. Finally, we provide MATLAB code at http://ibug.doc.ic.ac.uk/resources.",
"The Active Appearance Model (AAM) algorithm matches statistical models of shape and texture to images rapidly by assuming a linear relationship between the texture residual and changes in the model parameters. When the texture is represented as raw intensity values, this has been shown to be a reasonable approximation in many cases. However, models built on them are sensitive to changes in illumination conditions. This paper examines the effect of using different representations of image texture to improve the accuracy and robustness of the AAM search. We show that normalising the gradient images by non-linear techniques can give much improved matching with higher accuracy and a wider effective range of convergence.",
"This paper proposes a discriminative framework for efficiently aligning images. Although conventional Active Appearance Models (AAMs)-based approaches have achieved some success, they suffer from the generalization problem, i.e., how to align any image with a generic model. We treat the iterative image alignment problem as a process of maximizing the score of a trained two-class classifier that is able to distinguish correct alignment (positive class) from incorrect alignment (negative class). During the modeling stage, given a set of images with ground truth landmarks, we train a conventional Point Distribution Model (PDM) and a boosting-based classifier, which acts as an appearance model. When tested on an image with the initial landmark locations, the proposed algorithm iteratively updates the shape parameters of the PDM via the gradient ascent method such that the classification score of the warped image is maximized. We use the term Boosted Appearance Models (BAMs) to refer to the learned shape and appearance models, as well as our specific alignment method. The proposed framework is applied to the face alignment problem. Using extensive experimentation, we show that, compared to the AAM-based approach, this framework greatly improves the robustness, accuracy, and efficiency of face alignment by a large margin, especially for unseen data.",
"We present a very efficient, highly accurate, \"Explicit Shape Regression\" approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 min for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.",
"",
"Lucas–Kanade and active appearance models are among the most commonly used methods for image alignment and facial fitting, respectively. They both utilize nonlinear gradient descent, which is usually applied on intensity values. In this paper, we propose the employment of highly descriptive, densely sampled image features for both problems. We show that the strategy of warping the multichannel dense feature image at each iteration is more beneficial than extracting features after warping the intensity image at each iteration. Motivated by this observation, we demonstrate robust and accurate alignment and fitting performance using a variety of powerful feature descriptors. Especially with the employment of histograms of oriented gradient and scale-invariant feature transform features, our method significantly outperforms the current state-of-the-art results on in-the-wild databases.",
"In this letter, we present a random cascaded-regression copse (R-CR-C) for robust facial landmark detection. Its key innovations include a new parallel cascade structure design, and an adaptive scheme for scale-invariant shape update and local feature extraction. Evaluation on two challenging benchmarks shows the superiority of the proposed algorithm to state-of-the-art methods. © 1994-2012 IEEE.",
"Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface.",
"This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. Our approach achieves the state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3,000 fps on a desktop or 300 fps on a mobile phone for locating a few dozens of landmarks.",
"Earlier work has demonstrated generative models capable of synthesising near photo-realistic grey-scale images of objects. These models have been augmented with colour information, and recently with edge information. This paper extends the active appearance model framework by modelling the appearance of both derived feature bands and an intensity band. As a special case of feature-band augmented appearance modelling we propose a dedicated representation with applications to face segmentation. The representation addresses a major problem within face recognition by lowering the sensitivity to lighting conditions. Results show that the localisation accuracy of facial features is considerably increased using this appearance representation under diffuse and directional lighting and at multiple scales."
]
} |
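The related-work text in the row above motivates variation-invariant features such as image gradient orientations (the AOM abstract) as a replacement for raw intensities in AAM fitting. A hedged minimal sketch of that idea, on toy data: keeping only the orientation of the gradient discards its magnitude, so a global lighting change that would defeat a raw-intensity cost function leaves the feature vector unchanged.

```python
import numpy as np

# Hedged sketch of image-gradient-orientation features (toy data; the
# actual AOM additionally applies a kernel PCA on top of such features).
def grad_orientation_features(img):
    gy, gx = np.gradient(img.astype(float))      # gradients along rows/cols
    theta = np.arctan2(gy, gx)                   # per-pixel orientation
    return np.concatenate([np.cos(theta).ravel(), np.sin(theta).ravel()])

img = np.outer(np.arange(8.0), np.ones(8))       # vertical intensity ramp
relit = img * 10.0 + 50.0                        # same structure, new lighting
f1 = grad_orientation_features(img)
f2 = grad_orientation_features(relit)
# f1 and f2 are identical even though the raw intensities differ greatly.
```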
1612.09548 | 2567890977 | Appearance variations result in many difficulties in face image analysis. To deal with this challenge, we present a Unified Tensor-based Active Appearance Model (UT-AAM) for jointly modelling the geometry and texture information of 2D faces. For each type of face information, namely shape and texture, we construct a unified tensor model capturing all relevant appearance variations. This contrasts with the variation-specific models of the classical tensor AAM. To achieve the unification across pose variations, a strategy for dealing with self-occluded faces is proposed to obtain consistent shape and texture representations of pose-varied faces. In addition, our UT-AAM is capable of constructing the model from an incomplete training dataset, using tensor completion methods. Last, we use an effective cascaded-regression-based method for UT-AAM fitting. With these advancements, the utility of UT-AAM in practice is considerably enhanced. As an example, we demonstrate the improvements in training facial landmark detectors through the use of UT-AAM to synthesise a large number of virtual samples. Experimental results obtained on a number of well-known face datasets demonstrate the merits of the proposed approach. | Another way to modify the model structure is to rely on different underlying methods. The representation capacity of a PCA-based AAM, constructed from a small number of training samples, is limited. For unseen faces, the PCA-based model may miss some details and in consequence it is not able to represent complex faces faithfully. To cope with this issue, more advanced techniques, such as kernel methods @cite_25 @cite_23 @cite_27 and Deep Boltzmann Machines @cite_13 @cite_36 , have been suggested for model construction. Note that this limitation can also be addressed by using more PCA components trained from more samples with a wide range of appearance variations, as demonstrated by @cite_4 @cite_65 . 
Besides the representation capacity of AAM, a more important issue is how to construct a compact and structural model. A common way to do this is to use multi-view models, the View-based AAM (V-AAM) @cite_10 @cite_23 . However, this strategy is resource- and time-consuming because we have to construct, store and fit multiple models to a face. As an alternative, Bilinear AAM (B-AAM) constructs a unified model across pose variations @cite_66 . Nevertheless, both V-AAM and B-AAM can only deal with a single variation type among pose, expression and illumination modes. | {
"cite_N": [
"@cite_4",
"@cite_36",
"@cite_10",
"@cite_65",
"@cite_66",
"@cite_27",
"@cite_23",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"2513385807",
"",
"2431101926",
"2100533104",
"2543689546",
"1587712869",
"1919814523",
"2094570674"
],
"abstract": [
"",
"The \"interpretation through synthesis\" approach to analyze face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the generalizability of PCA subspaces. This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both shape and texture of face images under large variations. In this approach, three crucial components represented in hierarchical layers are modeled using the Deep Boltzmann Machines (DBM) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring a representation for new face images under various challenging conditions. The proposed approach is evaluated in various applications to demonstrate its robustness and capabilities, i.e. facial super-resolution reconstruction, facial off-angle reconstruction or face frontalization, facial occlusion removal and age estimation using challenging face databases, i.e. Labeled Face Parts in the Wild (LFPW), Helen and FG-NET. Compared to AAMs and other deep learning based approaches, the proposed DAMs achieve competitive results in those applications, thus showing their advantages in handling occlusions, facial representation, and reconstruction.",
"",
"We present Large Scale Facial Model (LSFM) — a 3D Morphable Model (3DMM) automatically constructed from 9,663 distinct facial identities. To the best of our knowledge LSFM is the largest-scale Morphable Model ever constructed, containing statistical information from a huge variety of the human population. To build such a large model we introduce a novel fully automated and robust Morphable Model construction pipeline. The dataset that LSFM is trained on includes rich demographic information about each subject, allowing for the construction of not only a global 3DMM but also models tailored for specific age, gender or ethnicity groups. As an application example, we utilise the proposed model to perform age classification from 3D shape alone. Furthermore, we perform a systematic analysis of the constructed 3DMMs that showcases their quality and descriptive power. The presented extensive qualitative and quantitative evaluations reveal that the proposed 3DMM achieves state-of-the-art results, outperforming existing models by a large margin. Finally, for the benefit of the research community, we make publicly available the source code of the proposed automatic 3DMM construction pipeline. In addition, the constructed global 3DMM and a variety of bespoke models tailored by age, gender and ethnicity are available on application to researchers involved in medically oriented research.",
"Appearance models have been applied to model the space of human faces over the last two decades. In particular, active appearance models (AAMs) have been successfully used for face tracking, synthesis and recognition, and they are one of the state-of-the-art approaches due to its efficiency and representational power. Although widely employed, AAMs suffer from a few drawbacks, such as the inability to isolate pose, identity and expression changes. This paper proposes Bilinear Active Appearance Models (BAAMs), an extension of AAMs, that effectively decouple changes due to pose and expression identity. We derive a gradient-descent algorithm to efficiently fit BAAMs to new images. Experimental results show how BAAMs improve generalization and convergence with respect to the linear model. In addition, we illustrate decoupling benefits of BAAMs in face recognition across pose. We show how the pose normalization provided by BAAMs increase the recognition performance of commercial systems.",
"2D Active Appearance Models (AAM) and 3D Morphable Models (3DMM) are widely used techniques. AAM provide a fast fitting process, but may represent unwanted 3D transformations unless strictly constrained not to do so. The reverse is true for 3DMM. The two approaches also require of a pre-alignment of their 2D or 3D shapes before the modeling can be carried out which may lead to errors. Furthermore, current models are insufficient to represent nonlinear shape and texture variations. In this paper, we derive a new approach that can model nonlinear changes in examples without the need of a pre-alignment step. In addition, we show how the proposed approach carries the above mentioned advantages of AAM and 3DMM. To achieve this goal, we take advantage of the inherent properties of complex spherical distributions, which provide invariance to translation, scale and rotation. To reduce the complexity of parameter estimation we take advantage of a recent result that shows how to estimate spherical distributions using their Euclidean counterpart, e.g., the Gaussians. This leads to the definition of Rotation Invariant Kernels (RIK) for modeling nonlinear shape changes. We show the superiority of our algorithm to AAM in several face datasets. We also show how the derived algorithm can be used to model complex 3D facial expression changes observed in American Sign Language (ASL).",
"In principle, the recovery and reconstruction of a 3D object from its 2D view projections require the parameterisation of its shape structure and surface reflectance properties. Explicit representation and recovery of such 3D information is notoriously difficult to achieve. Alternatively, a linear combination of 2D views can be used which requires the establishment of dense correspondence between views. This in general, is difficult to compute and necessarily expensive. In this paper we examine the use of affine and local feature-based transformations in establishing correspondences between very large pose variations. In doing so, we utilise a generic-view template, a generic 3D surface model and Kernel PCA for modelling shape and texture nonlinearities across views. The abilities of both approaches to reconstruct and recover faces from any 2D image are evaluated and compared.",
"The “interpretation through synthesis”, i.e. Active Appearance Models (AAMs) method, has received considerable attention over the past decades. It aims at “explaining” face images by synthesizing them via a parameterized model of appearance. It is quite challenging due to appearance variations of human face images, e.g. facial poses, occlusions, lighting, low resolution, etc. Since these variations are mostly non-linear, it is impossible to represent them in a linear model, such as Principal Component Analysis (PCA). This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both shape and texture of face images under large variations. In this approach, three crucial components represented in hierarchical layers are modeled using the Deep Boltzmann Machines (DBM) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring a representation for new face images under various challenging conditions. In addition, DAMs have ability to generate a compact set of parameters in higher level representation that can be used for classification, e.g. face recognition and facial age estimation. The proposed approach is evaluated in facial image reconstruction, facial super-resolution on two databases, i.e. LFPW and Helen. It is also evaluated on FG-NET database for the problem of age estimation.",
"Recovering the shape of any 3D object using multiple 2D views requires establishing correspondence between feature points at different views. However, changes in viewpoint introduce self-occlusions, resulting in nonlinear variations in the shape and inconsistent 2D features between views. Here we introduce a multi-view nonlinear shape model utilising 2D view-dependent constraints without explicit reference to 3D structures. For nonlinear model transformation, we adopt Kernel PCA based on Support Vector Machines."
]
} |
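The row above repeatedly contrasts PCA-based AAMs (limited representation capacity, more components needed) with kernel and deep alternatives. As a hedged illustration of the baseline being discussed, here is a minimal PCA point-distribution-model sketch on toy random "shapes" (not a real AAM): shapes are modelled as `s = s_mean + Pᵀb`, and capacity is capped by the number of retained modes, so a shape is only approximated.

```python
import numpy as np

# Minimal PCA shape-model sketch (toy random "shapes", not actual face data).
rng = np.random.default_rng(1)
n_train, n_points = 50, 10
shapes = rng.normal(size=(n_train, 2 * n_points))   # stand-in aligned shapes

s_mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - s_mean, full_matrices=False)
P = Vt[:5]                                          # keep 5 shape modes

def fit(shape):
    b = P @ (shape - s_mean)                        # model parameters
    return s_mean + P.T @ b                         # model reconstruction

recon = fit(shapes[0])
err_model = np.linalg.norm(shapes[0] - recon)       # lossy: 5 of 20 modes
err_mean = np.linalg.norm(shapes[0] - s_mean)       # baseline: mean shape only
```

The reconstruction error is smaller than the mean-shape baseline but nonzero, which is exactly the capacity limitation the related-work text attributes to small training sets and few PCA components.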
1612.08789 | 2564161974 | Composition and parametrisation of multicomponent predictive systems (MCPSs) consisting of chains of data transformation steps is a challenging task. This paper is concerned with theoretical considerations and extensive experimental analysis for automating the task of building such predictive systems. In the theoretical part of the paper, we first propose to adopt the Well-handled and Acyclic Workflow (WA-WF) Petri net as a formal representation of MCPSs. We then define the optimisation problem in which the search space consists of suitably parametrised directed acyclic graphs (i.e. WA-WFs) forming the sought MCPS solutions. In the experimental analysis we focus on examining the impact of considerably extending the search space resulting from incorporating multiple sequential data cleaning and preprocessing steps in the process of composing optimised MCPSs, and the quality of the solutions found. In a range of extensive experiments three different optimisation strategies are used to automatically compose MCPSs for 21 publicly available datasets and 7 datasets from real chemical processes. The diversity of the composed MCPSs found is an indication that fully and automatically exploiting different combinations of data cleaning and preprocessing techniques is possible and highly beneficial for different predictive models. Our findings can have a major impact on development of high quality predictive models as well as their maintenance and scalability aspects needed in modern applications and deployment scenarios. | The Combined Algorithm Selection and Hyperparameter configuration (CASH) problem @cite_4 consists of finding the best combination of learning algorithm @math and hyperparameters @math that optimises an objective function @math (e.g. Eq. minimises the @math -fold CV error) for a given dataset @math . Formally, the CASH problem is given by: where @math is a set of algorithms with associated hyperparameter spaces @math . 
The loss function @math takes as arguments an algorithm configuration @math (i.e. an instance of a learning algorithm and hyperparameters), a training set @math and a validation set @math . | {
"cite_N": [
"@cite_4"
],
"mid": [
"2192203593"
],
"abstract": [
"Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications."
]
} |
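The rows above define the CASH problem: jointly choose a learning algorithm and its hyperparameters to minimise a cross-validation loss. The following hedged sketch shows the shape of that search. The algorithm set, hyperparameter spaces, and loss below are toy stand-ins (no real models are trained), and the solver is plain random search rather than the model-based optimisers used in practice.

```python
import random

# Toy CASH-style search: each "algorithm" has its own hyperparameter space,
# and a configuration is an (algorithm, hyperparameters) pair.
random.seed(0)

search_space = {
    "knn":  {"k": list(range(1, 30))},
    "tree": {"depth": list(range(1, 20))},
}

def loss(algo, params):
    # Surrogate for a k-fold cross-validation error; a real system would
    # train and validate an actual model here.
    if algo == "knn":
        return abs(params["k"] - 7) / 10.0
    return abs(params["depth"] - 5) / 10.0

best_score, best_config = float("inf"), None
for _ in range(200):
    algo = random.choice(sorted(search_space))        # pick an algorithm
    params = {name: random.choice(values)             # pick its hyperparameters
              for name, values in search_space[algo].items()}
    score = loss(algo, params)
    if score < best_score:
        best_score, best_config = score, (algo, params)
```

With 200 random trials over this small joint space, the search reliably lands near one of the optima, which is the behaviour the random-search comparison in the following row appeals to.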
1612.08789 | 2564161974 | Composition and parametrisation of multicomponent predictive systems (MCPSs) consisting of chains of data transformation steps is a challenging task. This paper is concerned with theoretical considerations and extensive experimental analysis for automating the task of building such predictive systems. In the theoretical part of the paper, we first propose to adopt the Well-handled and Acyclic Workflow (WA-WF) Petri net as a formal representation of MCPSs. We then define the optimisation problem in which the search space consists of suitably parametrised directed acyclic graphs (i.e. WA-WFs) forming the sought MCPS solutions. In the experimental analysis we focus on examining the impact of considerably extending the search space resulting from incorporating multiple sequential data cleaning and preprocessing steps in the process of composing optimised MCPSs, and the quality of the solutions found. In a range of extensive experiments three different optimisation strategies are used to automatically compose MCPSs for 21 publicly available datasets and 7 datasets from real chemical processes. The diversity of the composed MCPSs found is an indication that fully and automatically exploiting different combinations of data cleaning and preprocessing techniques is possible and highly beneficial for different predictive models. Our findings can have a major impact on development of high quality predictive models as well as their maintenance and scalability aspects needed in modern applications and deployment scenarios. | The CASH problem can be approached in different ways. One example is a grid search, i.e. an exhaustive search over all the possible combinations of discretized parameters. Such a technique can however be computationally prohibitive in large search spaces or with big datasets. 
Instead, a simpler mechanism like random search, where the search space is randomly explored in a limited amount of time, has been shown to be more effective in high-dimensional spaces @cite_41 . | {
"cite_N": [
"@cite_41"
],
"mid": [
"2097998348"
],
"abstract": [
"Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent \"High Throughput\" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms."
]
} |
1612.08994 | 2949863871 | One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on the two tasks of extracting links between argument components, and classifying types of argument components. In order to solve this problem, we propose to use a joint model that is based on a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed joint model achieves state-of-the-art results on two separate evaluation corpora, achieving far superior performance than a regular Pointer Network model. Our results show that optimizing for both tasks, and adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance. | Recurrent neural networks have previously been proposed to model tree graph structures in a linear manner. @cite_7 use a sequence-to-sequence model for the task of syntactic parsing. @cite_21 experiment on an artificial entailment dataset that is specifically engineered to capture recursive logic @cite_10 . 
Standard recurrent neural networks can take in complete sentence sequences and perform competitively with a recursive neural network. Multi-task learning for sequence-to-sequence models has also been proposed @cite_5 , though none of the models used a PN for prediction. | {
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_10",
"@cite_7"
],
"mid": [
"2172589779",
"2132547887",
"1786845855",
"1869752048"
],
"abstract": [
"Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications focused on only one task and not much work explored this framework for multiple tasks. This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the oneto-many setting - where the encoder is shared between several tasks such as machine translation and syntactic parsing, (b) the many-to-one setting - useful when only the decoder can be shared, as in the case of translation and image caption generation, and (c) the many-to-many setting - where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation. Our results show that training on a small amount of parsing and image caption data can improve the translation quality between English and German by up to 1.5 BLEU points over strong single-task baselines on the WMT benchmarks. Furthermore, we have established a new state-of-the-art result in constituent parsing with 93.0 F1. Lastly, we reveal interesting properties of the two unsupervised learning objectives, autoencoder and skip-thought, in the MTL context: autoencoder helps less in terms of perplexities but more on BLEU scores compared to skip-thought.",
"Tree-structured neural networks encode a particular tree geometry for a sentence in the network design. However, these models have at best only slightly outperformed simpler sequence-based models. We hypothesize that neural sequence models like LSTMs are in fact able to discover and implicitly use recursive compositional structure, at least for tasks with clear cues to that structure in the data. We demonstrate this possibility using an artificial data task for which recursive compositional structure is crucial, and find an LSTM-based sequence model can indeed learn to exploit the underlying tree structure. However, its performance consistently lags behind that of tree models, even on large training sets, suggesting that tree-structured models are more effective at exploiting recursive structure.",
"Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models---plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs)---can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.",
"Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation."
]
} |
1612.08994 | 2949863871 | One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on the two tasks of extracting links between argument components, and classifying types of argument components. In order to solve this problem, we propose to use a joint model that is based on a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed joint model achieves state-of-the-art results on two separate evaluation corpora, achieving far superior performance than a regular Pointer Network model. Our results show that optimizing for both tasks, and adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance. | In the field of discourse parsing, the work of is the only work, to our knowledge, that incorporates attention into the network architecture. However, attention is only used in the process of creating representations of the text itself; it is not used to predict the overall discourse structure.
In fact, the model still relies on a binary classifier to determine if textual components should have a link. Arguably the most similar approach to ours is in the field of dependency parsing @cite_19 . The authors propose a model that performs "queries" between word representations in order to determine a distribution over potential headwords. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2518668950"
],
"abstract": [
"We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions. The parsing procedure for each direction is formulated as sequentially querying the memory component that stores continuous headword embeddings. The proposed parser makes use of soft headword embeddings, allowing the model to implicitly capture high-order parsing history without dramatically increasing the computational complexity. We conduct experiments on English, Chinese, and 12 other languages from the CoNLL 2006 shared task, showing that the proposed model achieves state-of-the-art unlabeled attachment scores on 6 languages."
]
} |
1612.08828 | 2560888454 | The Dulmage--Mendelsohn decomposition (or the DM-decomposition) gives a unique partition of the vertex set of a bipartite graph reflecting the structure of all the maximum matchings therein. A bipartite graph is said to be DM-irreducible if its DM-decomposition consists of a single component. In this paper, we focus on the problem of making a given bipartite graph DM-irreducible by adding edges. When the input bipartite graph is balanced (i.e., both sides have the same number of vertices) and has a perfect matching, this problem is equivalent to making a directed graph strongly connected by adding edges, for which the minimum number of additional edges was characterized by Eswaran and Tarjan (1976). We give a general solution to this problem, which is divided into three parts. We first show that our problem can be formulated as a special case of a general framework of covering supermodular functions, which was introduced by Frank and Jordán (1995) to investigate the directed connectivity augmentation problem. Secondly, when the input graph is not balanced, the problem is solved via matroid intersection. This result can be extended to the minimum cost version in which the addition of an edge gives rise to an individual cost. Thirdly, for balanced input graphs, we devise a combinatorial algorithm that finds a minimum number of additional edges to attain the DM-irreducibility, while the minimum cost version of this problem is NP-hard. These results also lead to min-max characterizations of the minimum number, which generalize the result of Eswaran and Tarjan. | For the directed @math -connectivity augmentation, which is also within the Frank--Jordán framework, Frank and Végh @cite_6 gave a much simpler combinatorial algorithm when a given directed graph is already @math -connected.
Since "@math -connected" enforces no constraint and "@math -connected" is equivalent to "strongly connected," this special case also generalizes the strong connectivity augmentation. The direction of generalization is, however, different from our problem. The Frank--Végh setting is translated in terms of bipartite graphs as follows: for a given @math -elementary balanced bipartite graph @math , to make @math @math -elementary by adding a minimum number of edges, where "@math -elementary" and "@math -elementary" are equivalent to "perfectly matchable" and to "DM-irreducible," respectively, and "@math -elementary" is strictly stronger than "DM-irreducible" when @math . In our problem, we are required to make a balanced bipartite graph @math @math -elementary even when @math is not @math -elementary. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2106199941"
],
"abstract": [
"We develop a combinatorial polynomial-time algorithm to make a (k-1)-connected digraph k-connected by adding a minimum number of new edges."
]
} |
1612.08828 | 2560888454 | The Dulmage--Mendelsohn decomposition (or the DM-decomposition) gives a unique partition of the vertex set of a bipartite graph reflecting the structure of all the maximum matchings therein. A bipartite graph is said to be DM-irreducible if its DM-decomposition consists of a single component. In this paper, we focus on the problem of making a given bipartite graph DM-irreducible by adding edges. When the input bipartite graph is balanced (i.e., both sides have the same number of vertices) and has a perfect matching, this problem is equivalent to making a directed graph strongly connected by adding edges, for which the minimum number of additional edges was characterized by Eswaran and Tarjan (1976). We give a general solution to this problem, which is divided into three parts. We first show that our problem can be formulated as a special case of a general framework of covering supermodular functions, which was introduced by Frank and Jordán (1995) to investigate the directed connectivity augmentation problem. Secondly, when the input graph is not balanced, the problem is solved via matroid intersection. This result can be extended to the minimum cost version in which the addition of an edge gives rise to an individual cost. Thirdly, for balanced input graphs, we devise a combinatorial algorithm that finds a minimum number of additional edges to attain the DM-irreducibility, while the minimum cost version of this problem is NP-hard. These results also lead to min-max characterizations of the minimum number, which generalize the result of Eswaran and Tarjan. | The DM-decomposition is known to be a useful tool in numerical linear algebra (see, e.g., @cite_14 ). A bipartite graph associated with a matrix is naturally defined by its nonzero entries, and its DM-decomposition gives the finest block-triangularization, which helps us to solve the system of linear equations efficiently.
The finer the decomposition, the better from a computational point of view. Hence DM-irreducibility is not a desirable property in this context. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2052602889"
],
"abstract": [
"The subject of sparse matrices has its roots in such diverse fields as management science, power systems analysis, surveying, circuit theory, and structural analysis. Mathematical models in all these areas give rise to very large systems of linear equations which could not be solved were it not for the fact that the matrices contain relatively few non-zero entries. Only comparatively recently, in the last fifteen years or so, has it become apparent that the equations can be solved even when the pattern is irregular, and it is primarily the solution of such problems that is considered in this book. The subject is intensely practical and this book is written with practicalities ever in mind. Whenever two methods are applicable, their rival merits are considered, and conclusions are based on practical experience where theoretical comparison is not possible. Non-numeric computing techniques have been included, as well as frequent illustrations, in an attempt to bridge the usually wide gap between the printed page and the working computer code. Despite this practical bias, it is recognized that many aspects of the subject are of interest in their own right, and the book aims to be suitable also for a student course, probably at MSc level. Exercises have been included to illustrate and strengthen understanding of the material, as well as to extend it. Efficient use of sparsity is a key to solving large problems in many fields. This book will supply both insight and answers for those attempting to solve these problems."
]
} |
1612.08927 | 2565450588 | Color transfer between images makes effective use of the statistical information in an image. We present a novel approach to local color transfer between images based on simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between the target and source images. The user can specify the correspondences of local regions using scribbles, which more accurately transfers the target color to the source image while smoothly preserving the boundaries, and exhibits more natural output results. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target image to transfer the color in different regions of the source image. Moreover, our algorithm does not require the source and target images to share the same color style or image size. We propose sub-sampling to reduce the computational load. Compared with other approaches, our algorithm is much more robust to color blending in the input data. Our approach preserves the other color details in the source image. Various experimental results show that our approach specifies the correspondences of local color regions in the source and target images, expresses the intention of users, and generates more realistic and natural visual results. | Other approaches tried to counter the above-mentioned shortcomings by introducing rough control in image editing. Various distance measures and feature spaces were also considered in the literature. To handle texture contrast and fragmented regions, the geodesic distance @cite_32 and the diffusion distance @cite_5 were applied. @cite_6 championed the use of pixel classification based on user input. Locally linear embedding propagation @cite_26 @cite_8 preserves the manifold structure to tackle color blending.
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_32",
"@cite_6",
"@cite_5"
],
"mid": [
"2117493512",
"2092410970",
"1997701791",
"2148520845",
""
],
"abstract": [
"We propose a novel edit propagation algorithm for interactive image and video manipulations. Our approach uses the locally linear embedding (LLE) to represent each pixel as a linear combination of its neighbors in a feature space. While previous methods require similar pixels to have similar results, we seek to maintain the manifold structure formed by all pixels in the feature space. Specifically, we require each pixel to be the same linear combination of its neighbors in the result. Compared with previous methods, our proposed algorithm is more robust to color blending in the input data. Furthermore, since every pixel is only related to a few nearest neighbors, our algorithm easily achieves good runtime efficiency. We demonstrate our manifold preserving edit propagation on various applications.",
"We propose a new method for interactive image color replacement that creates smooth and naturally looking results with minimal user interaction. Our system expects as input a source image and rawly scribbled target color values and generates high quality results in interactive rates. To achieve this goal we introduce an algorithm that preserves pairwise distances of the signatures in the original image and simultaneously maps the color to the user defined target values. We propose efficient sub-sampling in order to reduce the computational load and adapt semi-supervised locally linear embedding to optimize the constraints in one objective function. We show the application of the algorithm on typical photographs and compare the results to other color replacement methods.",
"This article presents a new, unified technique to perform general edge-sensitive editing operations on n-dimensional images and videos efficiently. The first contribution of the article is the introduction of a Generalized Geodesic Distance Transform (GGDT), based on soft masks. This provides a unified framework to address several edge-aware editing operations. Diverse tasks such as denoising and nonphotorealistic rendering are all dealt with fundamentally the same, fast algorithm. Second, a new Geodesic Symmetric Filter (GSF) is presented which imposes contrast-sensitive spatial smoothness into segmentation and segmentation-based editing tasks (cutout, object highlighting, colorization, panorama stitching). The effect of the filter is controlled by two intuitive, geometric parameters. In contrast to existing techniques, the GSF filter is applied to real-valued pixel likelihoods (soft masks), thanks to GGDTs and it can be used for both interactive and automatic editing. Complex object topologies are dealt with effortlessly. Finally, the parallelism of GGDTs enables us to exploit modern multicore CPU architectures as well as powerful new GPUs, thus providing great flexibility of implementation and deployment. Our technique operates on both images and videos, and generalizes naturally to n-dimensional data. The proposed algorithm is validated via quantitative and qualitative comparisons with existing, state-of-the-art approaches. Numerous results on a variety of image and video editing tasks further demonstrate the effectiveness of our method.",
"One of the most common tasks in image and video editing is the local adjustment of various properties (e.g., saturation or brightness) of regions within an image or video. Edge-aware interpolation of user-drawn scribbles offers a less effort-intensive approach to this problem than traditional region selection and matting. However, the technique suffers a number of limitations, such as reduced performance in the presence of texture contrast, and the inability to handle fragmented appearances. We significantly improve the performance of edge-aware interpolation for this problem by adding a boosting-based classification step that learns to discriminate between the appearance of scribbled pixels. We show that this novel data term in combination with an existing edge-aware optimization technique achieves substantially better results for the local image and video adjustment problem than edge-aware interpolation techniques without classification, or related methods such as matting techniques or graph cut segmentation.",
""
]
} |
1612.08927 | 2565450588 | Color transfer between images makes effective use of the statistical information in an image. We present a novel approach to local color transfer between images based on simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between the target and source images. The user can specify the correspondences of local regions using scribbles, which more accurately transfers the target color to the source image while smoothly preserving the boundaries, and exhibits more natural output results. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target image to transfer the color in different regions of the source image. Moreover, our algorithm does not require the source and target images to share the same color style or image size. We propose sub-sampling to reduce the computational load. Compared with other approaches, our algorithm is much more robust to color blending in the input data. Our approach preserves the other color details in the source image. Various experimental results show that our approach specifies the correspondences of local color regions in the source and target images, expresses the intention of users, and generates more realistic and natural visual results. | To prevent the color region mixing problem, @cite_27 suggested a 3D histogram matching technique that transfers the hue (H), saturation (S), and intensity (I) components, respectively, in the HSI color space. Although color information can be transferred between target and source images by this method, the result usually contains notable spatial artifacts and also depends on the resolution of the input image. @cite_24 @cite_31 resolved the same problem by utilizing an N-dimensional probability density function which matches the 3D color histograms of the input images.
They used a recursive, nonlinear method that estimates the transformation function by employing one-dimensional marginal distributions. This technique is potent enough to match the color palettes of the target and source images, but it often demands further processing to remove noise and spatial visual artifacts. | {
"cite_N": [
"@cite_24",
"@cite_27",
"@cite_31"
],
"mid": [
"2141015396",
"1841033533",
"2006957355"
],
"abstract": [
"This article proposes an original method to estimate a continuous transformation that maps one N-dimensional distribution to another. The method is iterative, non-linear, and is shown to converge. Only 1D marginal distribution is used in the estimation process, hence involving low computation costs. As an illustration this mapping is applied to color transfer between two images of different contents. The paper also serves as a central focal point for collecting together the research activity in this area and relating it to the important problem of automated color grading",
"We present new methods which transfer the color style of a source image into an arbitrary given target image having a different 3D color distribution. The color transfer has a high importance ensuring a wide area of applications from artistic transformation of the color atmosphere of images until different scientific visualizations using special gamut mappings. Our technique use a permissive, or optionally strict, 3D histogram matching, similarly to the sampling of multivariable functions applying a sequential chain of conditional probability density functions. We work by order of hue, hue dependent lightness and from both dependent saturation histograms of source and target images, respectively. We apply different histogram transformations, like smoothing or contrast limitation, in order to avoid some unexpected high gradients and other artifacts. Furthermore, we use dominance suppression optionally, by applying sub-linear functions for the histograms in order to get well balanced color distributions, or an overall appearance reflecting the memory color distribution better. Forward and inverse operations on the corresponding cumulative histograms ensure a continuous mapping of perceptual attributes correlating to limited derivatives. Sampling an appropriate fraction of the pixels and using perceptually accurate and continuous histograms with minimal size as well as other gems make this method robust and fast.",
"This article proposes an original method for grading the colours between different images or shots. The first stage of the method is to find a one-to-one colour mapping that transfers the palette of an example target picture to the original picture. This is performed using an original and parameter free algorithm that is able to transform any N-dimensional probability density function into another one. The proposed algorithm is iterative, non-linear and has a low computational cost. Applying the colour mapping on the original picture allows reproducing the same 'feel' as the target picture, but can also increase the graininess of the original picture, especially if the colour dynamic of the two pictures is very different. The second stage of the method is to reduce this grain artefact through an efficient post-processing algorithm that intends to preserve the gradient field of the original picture."
]
} |
1612.08927 | 2565450588 | Color transfer between images makes effective use of the statistical information in an image. We present a novel approach to local color transfer between images based on simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between the target and source images. The user can specify the correspondences of local regions using scribbles, which more accurately transfers the target color to the source image while smoothly preserving the boundaries, and exhibits more natural output results. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target image to transfer the color in different regions of the source image. Moreover, our algorithm does not require the source and target images to share the same color style or image size. We propose sub-sampling to reduce the computational load. Compared with other approaches, our algorithm is much more robust to color blending in the input data. Our approach preserves the other color details in the source image. Various experimental results show that our approach specifies the correspondences of local color regions in the source and target images, expresses the intention of users, and generates more realistic and natural visual results. | Resolving the color region mixing problem and transferring colors to a local region of an image are problems for which segmentation-based techniques have been developed. @cite_7 @cite_4 attempted to solve these problems by employing a soft color segmentation method that is based on a Gaussian mixture approximation and allows indirect user control. To resolve the color region mixing problem, some researchers tried to segment input images using a fixed number of color classes @cite_25 @cite_2 , a similar approach to that of @cite_7 @cite_4 .
But when the color styles of the input images differ considerably from the reference color classes, these approaches fall somewhat short of providing appropriate segmentation results. @cite_35 also used soft segmentation for local color transfer between images. They tried to solve the color region mixing problem, but one drawback of their method is that while transferring the color to a local region, the colors of other regions are also affected. We intend to solve this problem by using our algorithm to transfer the color from the target image to the source image without affecting the colors in other regions. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_7",
"@cite_2",
"@cite_25"
],
"mid": [
"1972353991",
"2125683338",
"2160530465",
"",
"1557161157"
],
"abstract": [
"A color-transfer method that can transfer colors of an image to another for the local regions using their dominant colors is proposed. In order to naturally transfer the colors and moods of a target image to a source image, we need to find the local regions of colors that need to be modified in the image. Since the dominant colors of each image can be used for the estimation of color regions, we develop a grid-based mode detection, which can efficiently estimate dominant colors of an image. Based on these dominant colors, our proposed method performs a consistent segmentation of source and target images by using the cost-volume filtering. Through the segmentation procedure, we can estimate complex color characteristics and transfer colors to the local regions of an image. For an intuitive and natural color transfer, region matching is also crucial. Therefore, we use the visually prominent colors in the image to meaningfully connect each segmented region in which modified color-transfer method is applied to balance the overall luminance of the final result. In the Experimental Result section, various convincing results are demonstrated through the proposed method.",
"We propose an automatic approach to soft color segmentation, which produces soft color segments with an appropriate amount of overlapping and transparency essential to synthesizing natural images for a wide range of image-based applications. Although many state-of-the-art and complex techniques are excellent at partitioning an input image to facilitate deriving a semantic description of the scene, to achieve seamless image synthesis, we advocate a segmentation approach designed to maintain spatial and color coherence among soft segments while preserving discontinuities by assigning to each pixel a set of soft labels corresponding to their respective color distributions. We optimize a global objective function, which simultaneously exploits the reliability given by global color statistics and flexibility of local image compositing, leading to an image model where the global color statistics of an image is represented by a Gaussian mixture model (GMM), whereas the color of a pixel is explained by a local color mixture model where the weights are defined by the soft labels to the elements of the converged GMM. Transparency is naturally introduced in our probabilistic framework, which infers an optimal mixture of colors at an image pixel. To adequately consider global and local information in the same framework, an alternating optimization scheme is proposed to iteratively solve for the global and local model parameters. Our method is fully automatic and is shown to converge to a good optimal solution. We perform extensive evaluation and comparison and demonstrate that our method achieves good image synthesis results for image-based applications such as image matting, color transfer, image deblurring, and image colorization.",
"We address the problem of regional color transfer between two natural images by probabilistic segmentation. We use a new expectation-maximization (EM) scheme to impose both spatial and color smoothness to infer natural connectivity among pixels. Unlike previous work, our method takes local color information into consideration, and segment image with soft region boundaries for seamless color transfer and compositing. Our modified EM method has two advantages in color manipulation: first, subject to different levels of color smoothness in image space, our algorithm produces an optimal number of regions upon convergence, where the color statistics in each region can be adequately characterized by a component of a Gaussian mixture model (GMM). Second, we allow a pixel to fall in several regions according to our estimated probability distribution in the EM step, resulting in a transparency-like ratio for compositing different regions seamlessly. Hence, natural color transition across regions can be achieved, where the necessary intra-region and inter-region smoothness are enforced without losing original details. We demonstrate results on a variety of applications including image deblurring, enhanced color transfer, and colorizing gray scale images. Comparisons with previous methods are also presented.",
"",
"In this paper, we propose the method which transfers the color style of a source image into an arbitrary given reference image. Misidentification problem of color cause wrong indexing in low saturation. Therefore, the proposed method does indexing after image separating chromatic and achromatic color from saturation. The proposed method is composed of the following four steps. In the first step, using threshold, pixels in image are separated chromatic and achromatic color components from saturation. In the second step, separated pixels are indexed using cylindrical metric. In the third step, the number and positional dispersion of pixel decide the order of priority for each index color. And average and standard deviation of each index color be calculated. In the final step, color be transferred in Lab color space, and post processing to removal noise and pseudo-contour. Experimental results show that the proposed method is effective on indexing and color transfer."
]
} |
1612.08927 | 2565450588 | Color transfer between images uses the statistics information of image effectively. We present a novel approach of local color transfer between images based on the simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between target and source image. The user can specify the correspondences of local region using scribes, which more accurately transfers the target color to the source image while smoothly preserving the boundaries, and exhibits more natural output results. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target images to transfer the color in different regions in the source image. Moreover, our algorithm does not require to choose the same color style and image size between source and target images. We propose the sub-sampling to reduce the computational load. Comparing with other approaches, our algorithm is much better in color blending in the input data. Our approach preserves the other color details in the source image. Various experimental results show that our approach specifies the correspondences of local color region in source and target images. And it expresses the intention of users and generates more actual and natural results of visual effect. | Moreover, some researchers have shown a keen interest in transforming colors between distinct color spaces. Color transfer calls for a color space in which the major components of a color are mutually independent. Since the colors are correlated in the RGB color space, the decorrelated CIELab color space is usually employed, which requires a method to transform colors from RGB to CIELab and back. 
@cite_19 proposed an improved solution that circumvents the transformation between correlated color spaces and instead uses translation, rotation, and scaling to transfer the colors of a target image directly in the RGB color space. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2138467836"
],
"abstract": [
"In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method."
]
} |
1612.08927 | 2565450588 | Color transfer between images uses the statistics information of image effectively. We present a novel approach of local color transfer between images based on the simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between target and source image. The user can specify the correspondences of local region using scribes, which more accurately transfers the target color to the source image while smoothly preserving the boundaries, and exhibits more natural output results. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target images to transfer the color in different regions in the source image. Moreover, our algorithm does not require to choose the same color style and image size between source and target images. We propose the sub-sampling to reduce the computational load. Comparing with other approaches, our algorithm is much better in color blending in the input data. Our approach preserves the other color details in the source image. Various experimental results show that our approach specifies the correspondences of local color region in source and target images. And it expresses the intention of users and generates more actual and natural results of visual effect. | The method of @cite_26 is based on locally linear embedding @cite_33 @cite_13 , as is our approach. The main difference between our algorithm and theirs lies in the methodology: we perform local color transfer between images based on simple statistics and locally linear embedding (LLE), optimizing the constraints in a defined objective function over the images. Our algorithm preserves pairwise distances between pixels in the original image while simultaneously mapping the colors to the user-defined target values, and it uses sub-sampling to reduce the computational load. | {
"cite_N": [
"@cite_26",
"@cite_13",
"@cite_33"
],
"mid": [
"2117493512",
"2063532964",
"2053186076"
],
"abstract": [
"We propose a novel edit propagation algorithm for interactive image and video manipulations. Our approach uses the locally linear embedding (LLE) to represent each pixel as a linear combination of its neighbors in a feature space. While previous methods require similar pixels to have similar results, we seek to maintain the manifold structure formed by all pixels in the feature space. Specifically, we require each pixel to be the same linear combination of its neighbors in the result. Compared with previous methods, our proposed algorithm is more robust to color blending in the input data. Furthermore, since every pixel is only related to a few nearest neighbors, our algorithm easily achieves good runtime efficiency. We demonstrate our manifold preserving edit propagation on various applications.",
"The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE---though capable of generating highly nonlinear embeddings---are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm's performance---both successes and failures---and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in"
]
} |
1612.08633 | 2951516071 | AUC (Area under the ROC curve) is an important performance measure for applications where the data is highly imbalanced. Learning to maximize AUC performance is thus an important research problem. Using a max-margin based surrogate loss function, AUC optimization problem can be approximated as a pairwise rankSVM learning problem. Batch learning methods for solving the kernelized version of this problem suffer from scalability and may not result in sparse classifiers. Recent years have witnessed an increased interest in the development of online or single-pass online learning algorithms that design a classifier by maximizing the AUC performance. The AUC performance of nonlinear classifiers, designed using online methods, is not comparable with that of nonlinear classifiers designed using batch learning algorithms on many real-world datasets. Motivated by these observations, we design a scalable algorithm for maximizing AUC performance by greedily adding the required number of basis functions into the classifier model. The resulting sparse classifiers perform faster inference. Our experimental results show that the level of sparsity achievable can be order of magnitude smaller than the Kernel RankSVM model without affecting the AUC performance much. | Joachims @cite_7 presented a structural SVM framework for optimizing AUC in batch mode. By formulating ) as a 1-slack structural SVM problem, Joachims @cite_7 solved its dual problem with a cutting plane method. The method, though initially designed for linear classifiers, can easily be extended to nonlinear classifiers. Numerical experiments showed that, for ranking problems, this method is slower than other state-of-the-art methods that solve ) directly @cite_8 @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_8"
],
"mid": [
"",
"2070771761",
"2400628046"
],
"abstract": [
"",
"This paper presents a Support Vector Method for optimizing multivariate nonlinear performance measures like the F 1 -score. Taking a multivariate prediction approach, we give an algorithm with which such multivariate SVMs can be trained in polynomial time for large classes of potentially non-linear performance measures, in particular ROCArea and all measures that can be computed from the contingency table. The conventional classification SVM arises as a special case of our method.",
"Learning to rank is an important task for recommendation systems, online advertisement and web search. Among those learning to rank methods, rankSVM is a widely used model. Both linear and nonlinear (kernel) rankSVM have been extensively studied, but the lengthy training time of kernel rankSVM remains a challenging issue. In this paper, after discussing difficulties of training kernel rankSVM, we propose an efficient method to handle these problems. The idea is to reduce the number of variables from quadratic to linear with respect to the number of training instances, and efficiently evaluate the pairwise losses. Our setting is applicable to a variety of loss functions. Further, general optimization methods can be easily applied to solve the reformulated problem. Implementation issues are also carefully considered. Experiments show that our method is faster than state-of-the-art methods for training kernel rankSVM."
]
} |
1612.08633 | 2951516071 | AUC (Area under the ROC curve) is an important performance measure for applications where the data is highly imbalanced. Learning to maximize AUC performance is thus an important research problem. Using a max-margin based surrogate loss function, AUC optimization problem can be approximated as a pairwise rankSVM learning problem. Batch learning methods for solving the kernelized version of this problem suffer from scalability and may not result in sparse classifiers. Recent years have witnessed an increased interest in the development of online or single-pass online learning algorithms that design a classifier by maximizing the AUC performance. The AUC performance of nonlinear classifiers, designed using online methods, is not comparable with that of nonlinear classifiers designed using batch learning algorithms on many real-world datasets. Motivated by these observations, we design a scalable algorithm for maximizing AUC performance by greedily adding the required number of basis functions into the classifier model. The resulting sparse classifiers perform faster inference. Our experimental results show that the level of sparsity achievable can be order of magnitude smaller than the Kernel RankSVM model without affecting the AUC performance much. | Learning to rank is an important supervised learning problem with applications in a variety of domains such as information retrieval and online advertising. If the query number of all instances is treated as the same, the set of preference pairs is the same as T, and the kernel rankSVM problem discussed in @cite_8 is the same as ). Kuo et al. @cite_8 used a trust region Newton method to solve this problem. Their method stores the full kernel matrix, since repeated kernel evaluations are the bottleneck in kernel rankSVM. 
This method has two drawbacks: 1) it is not scalable, as the memory requirement is prohibitively high for large datasets, and 2) the learned model is not sparse, resulting in computationally expensive predictions. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2400628046"
],
"abstract": [
"Learning to rank is an important task for recommendation systems, online advertisement and web search. Among those learning to rank methods, rankSVM is a widely used model. Both linear and nonlinear (kernel) rankSVM have been extensively studied, but the lengthy training time of kernel rankSVM remains a challenging issue. In this paper, after discussing difficulties of training kernel rankSVM, we propose an efficient method to handle these problems. The idea is to reduce the number of variables from quadratic to linear with respect to the number of training instances, and efficiently evaluate the pairwise losses. Our setting is applicable to a variety of loss functions. Further, general optimization methods can be easily applied to solve the reformulated problem. Implementation issues are also carefully considered. Experiments show that our method is faster than state-of-the-art methods for training kernel rankSVM."
]
} |
1612.08871 | 2950668551 | Semantic video segmentation is challenging due to the sheer amount of data that needs to be processed and labeled in order to construct accurate models. In this paper we present a deep, end-to-end trainable methodology to video segmentation that is capable of leveraging information present in unlabeled data in order to improve semantic estimates. Our model combines a convolutional architecture and a spatio-temporal transformer recurrent layer that are able to temporally propagate labeling information by means of optical flow, adaptively gated based on its locally estimated uncertainty. The flow, the recognition and the gated temporal propagation modules can be trained jointly, end-to-end. The temporal, gated recurrent flow propagation component of our model can be plugged into any static semantic segmentation architecture and turn it into a weakly supervised video processing one. Our extensive experiments in the challenging CityScapes and Camvid datasets, and based on multiple deep architectures, indicate that the resulting model can leverage unlabeled temporal frames, next to a labeled one, in order to improve both the video segmentation accuracy and the consistency of its temporal labeling, at no additional annotation cost and with little extra computation. | Video segmentation has received significant attention, starting from early methodologies based on temporal extensions of normalized cuts @cite_3 , random field models and tracking @cite_40 @cite_21 , motion segmentation @cite_16 , and efficient hierarchical graph-based formulations @cite_26 @cite_39 . More recently, proposal methods, in which multiple figure-ground estimates or multipart superpixel segmentations are generated at each time-step and then linked through time using optical flow @cite_18 @cite_41 @cite_10 , have become popular. @cite_1 used the dense CRF of @cite_20 for semantic video segmentation, with pairwise potentials based on aligning the frames using optical flow. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_41",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_39",
"@cite_40",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"",
"2087570216",
"2068994826",
"2461677039",
"2121947440",
"159595522",
"2062563118",
"2076756823",
"2113708607",
"2161236525"
],
"abstract": [
"",
"",
"We present a model for video segmentation, applicable to RGB (and if available RGB-D) information that constructs multiple plausible partitions corresponding to the static and the moving objects in the scene: i) we generate multiple figure-ground segmentations, in each frame, parametrically, based on boundary and optical flow cues, then track, link and refine the salient segment chains corresponding to the different objects, over time, using long-range temporal constraints, ii) a video partition is obtained by composing segment chains into consistent tilings, where the different individual object chains explain the video and do not overlap. Saliency metrics based on figural and motion cues, as well as measures learned from human eye movements are exploited, with substantial gain, at the level of segment generation and chain construction, in order to produce compact sets of hypotheses which correctly reflect the qualities of the different configurations. The model makes it possible to compute multiple hypotheses over both individual object segmentations tracked over time, and for complete video partitions. We report quantitative, state of the art results in the SegTrack single object benchmark, and promising qualitative and quantitative results in clips filming multiple static and moving objects collected from Hollywood movies and from the MIT dataset.",
"Video provides not only rich visual cues such as motion and appearance, but also much less explored long-range temporal interactions among objects. We aim to capture such interactions and to construct a powerful intermediate-level video representation for subsequent recognition. Motivated by this goal, we seek to obtain spatio-temporal oversegmentation of a video into regions that respect object boundaries and, at the same time, associate object pixels over many video frames. The contributions of this paper are two-fold. First, we develop an efficient spatiotemporal video segmentation algorithm, which naturally incorporates long-range motion cues from the past and future frames in the form of clusters of point tracks with coherent motion. Second, we devise a new track clustering cost function that includes occlusion reasoning, in the form of depth ordering constraints, as well as motion similarity along the tracks. We evaluate the proposed approach on a challenging set of video sequences of office scenes from feature length movies.",
"We present an approach to long-range spatio-temporal regularization in semantic video segmentation. Temporal regularization in video is challenging because both the camera and the scene may be in motion. Thus Euclidean distance in the space-time volume is not a good proxy for correspondence. We optimize the mapping of pixels to a Euclidean feature space so as to minimize distances between corresponding points. Structured prediction is performed by a dense CRF that operates on the optimized features. Experimental results demonstrate that the presented approach increases the accuracy and temporal consistency of semantic video segmentation.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
"The use of video segmentation as an early processing step in video analysis lags behind the use of image segmentation for image analysis, despite many available video segmentation methods. A major reason for this lag is simply that videos are an order of magnitude bigger than images; yet most methods require all voxels in the video to be loaded into memory, which is clearly prohibitive for even medium length videos. We address this limitation by proposing an approximation framework for streaming hierarchical video segmentation motivated by data stream algorithms: each video frame is processed only once and does not change the segmentation of previous frames. We implement the graph-based hierarchical segmentation method within our streaming framework; our method is the first streaming hierarchical video segmentation method proposed. We perform thorough experimental analysis on a benchmark video data set and longer videos. Our results indicate the graph-based streaming hierarchical method outperforms other streaming video segmentation methods and performs nearly as well as the full-video hierarchical graph-based method.",
"We present a novel off-line algorithm for target segmentation and tracking in video. In our approach, video data is represented by a multi-label Markov Random Field model, and segmentation is accomplished by finding the minimum energy label assignment. We propose a novel energy formulation which incorporates both segmentation and motion estimation in a single framework. Our energy functions enforce motion coherence both within and across frames. We utilize state-of-the-art methods to efficiently optimize over a large number of discrete labels. In addition, we introduce a new ground-truth dataset, called Georgia Tech Segmentation and Tracking Dataset (GT-SegTrack), for the evaluation of segmentation accuracy in video tracking. We compare our method with several recent on-line tracking algorithms and provide quantitative and qualitative performance comparisons.",
"Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.",
"We present a technique for separating foreground objects from the background in a video. Our method is fast, fully automatic, and makes minimal assumptions about the video. This enables handling essentially unconstrained settings, including rapidly moving background, arbitrary object motion and appearance, and non-rigid deformations and articulations. In experiments on two datasets containing over 1400 video shots, our method outperforms a state-of-the-art background subtraction technique [4] as well as methods based on clustering point tracks [6, 18, 19]. Moreover, it performs comparably to recent video object segmentation methods based on object proposals [14, 16, 27], while being orders of magnitude faster.",
"Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy."
]
} |
1612.08871 | 2950668551 | Semantic video segmentation is challenging due to the sheer amount of data that needs to be processed and labeled in order to construct accurate models. In this paper we present a deep, end-to-end trainable methodology to video segmentation that is capable of leveraging information present in unlabeled data in order to improve semantic estimates. Our model combines a convolutional architecture and a spatio-temporal transformer recurrent layer that are able to temporally propagate labeling information by means of optical flow, adaptively gated based on its locally estimated uncertainty. The flow, the recognition and the gated temporal propagation modules can be trained jointly, end-to-end. The temporal, gated recurrent flow propagation component of our model can be plugged into any static semantic segmentation architecture and turn it into a weakly supervised video processing one. Our extensive experiments in the challenging CityScapes and Camvid datasets, and based on multiple deep architectures, indicate that the resulting model can leverage unlabeled temporal frames, next to a labeled one, in order to improve both the video segmentation accuracy and the consistency of its temporal labeling, at no additional annotation cost and with little extra computation. | Temporal modeling using deep architectures is also a timely topic in action recognition. One deep learning method @cite_42 uses two streams to predict actions. The authors use images in one stream and optical flow in the other stream, merged at a later stage in order to predict the final action label. This has also been implemented as a recurrent neural network in @cite_33 where the output of the two streams is sent into a recurrent neural network, where, at later stages the features are pooled to produce a final labelling. @cite_37 use frames at different timesteps and merge them as input to a neural network. 
@cite_43 attempt to make the problem of learning spatio-temporal features easier by using separable filters. Specifically, they first apply spatial filters and then temporal filters in order to avoid learning different temporal combinations for every spatial filter. @cite_35 learn spatio-temporal filters by stacking GRUs at different locations in a deep neural network and iterating through video frames. In a similar spirit, @cite_8 use an LSTM and an attention mechanism to predict actions. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_33",
"@cite_8",
"@cite_42",
"@cite_43"
],
"mid": [
"2180092181",
"2308045930",
"",
"2464235600",
"2952186347",
"2190635018"
],
"abstract": [
"We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call \"percepts\" using Gated-Recurrent-Unit Recurrent Networks (GRUs).Our method relies on percepts that are extracted from all level of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts can leads to high-dimensionality video representations. To mitigate this effect and control the model number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features.",
"",
"",
"We present a new architecture for end-to-end sequence learning of actions in video, we call VideoLSTM. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be used for action localization by relying on just the action class label. Experiments and comparisons on challenging datasets for action classification and localization support our claims.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos."
]
} |
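The factorized spatio-temporal convolution surveyed above (2D spatial kernels followed by 1D temporal kernels) can be illustrated with a small NumPy sketch: for a rank-1 (separable) 3D kernel, filtering each frame spatially and then filtering each pixel temporally reproduces the full 3D correlation. The naive `correlate3d` helper below is purely illustrative, not code from any of the cited implementations.

```python
import numpy as np

def correlate3d(x, k):
    """Naive 'valid' 3D cross-correlation of a (T, H, W) volume with a (t, h, w) kernel."""
    T, H, W = x.shape
    t, h, w = k.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for a in range(out.shape[0]):
        for b in range(out.shape[1]):
            for c in range(out.shape[2]):
                out[a, b, c] = np.sum(x[a:a + t, b:b + h, c:c + w] * k)
    return out

rng = np.random.default_rng(0)
video = rng.normal(size=(6, 8, 8))        # (time, height, width)
k_spatial = rng.normal(size=(3, 3))       # 2D spatial kernel
k_temporal = rng.normal(size=(3,))        # 1D temporal kernel

# The equivalent full 3D kernel is the outer product of the two factors.
k_full = k_temporal[:, None, None] * k_spatial[None, :, :]

# Factorized path: spatial filtering per frame, then temporal filtering per pixel.
spatial_out = correlate3d(video, k_spatial[None, :, :])           # (6, 6, 6)
factorized = correlate3d(spatial_out, k_temporal[:, None, None])  # (4, 6, 6)

full = correlate3d(video, k_full)                                 # (4, 6, 6)
assert np.allclose(full, factorized)
```

The equivalence holds exactly only for separable kernels; the appeal of the factorized architecture is that it restricts learning to this cheaper family.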
1612.08820 | 2562424983 | The author proposes a method for simultaneous registration and segmentation of multi-source images, using the multivariate mixture model (MvMM) and maximum of log-likelihood (LL) framework. Specifically, the method is applied to the myocardial segmentation combining the complementary information from multi-sequence (MS) cardiac magnetic resonance (CMR) images. For the image misalignment and incongruent data, the MvMM is formulated with transformations and is further generalized for dealing with the hetero-coverage multi-modality images (HC-MMIs). The segmentation of MvMM is performed in a virtual common space, to which all the images and misaligned slices are simultaneously registered. Furthermore, this common space can be divided into a number of sub-regions, each of which contains congruent data, thus the HC-MMIs can be modeled using a set of conventional MvMMs. Results show that MvMM obtained significantly better performance compared to the conventional approaches and demonstrated good potential for scar quantification as well as myocardial segmentation. The generalized MvMM has also demonstrated better robustness in the incongruent data, where some images may not fully cover the region of interest, and the full coverage can only be reconstructed combining the images from multiple sources. | In the literature, there is a research topic referred to as multivariate image analysis, which has been proposed to deal with images that have more than one measurement per pixel @cite_7 @cite_3 . This research has particularly focused on the three-channel (RGB) color images or multispectral and hyperspectral images, and the methods have been designed to efficiently compress the highly correlated data and project them onto a reduced dimensional subspace, for example using the principal component analysis @cite_1 . 
Multivariate image analysis considers only congruent data, meaning that for each pixel in one image there should be a corresponding pixel in the other images @cite_3 . However, in medical imaging the multi-source images commonly do not have this congruency, as the images acquired from different sources can be misaligned with each other, and the imaging fields-of-view and resolutions can vary greatly across acquisitions. The images having such differences are referred to as incongruent images in this work. | {
"cite_N": [
"@cite_3",
"@cite_1",
"@cite_7"
],
"mid": [
"2068324025",
"2031736256",
"2006043357"
],
"abstract": [
"Abstract Nowadays, image analysis is becoming more important because of its ability to perform fast and non-invasive low-cost analysis on products and processes. Image analysis is a wide denomination that encloses classical studies on gray scale or RGB images, analysis of images collected using few spectral channels (sometimes called multispectral images) or, most recently, data treatments to deal with hyperspectral images, where the spectral direction is exploited in its full extension. Pioneering data treatments in image analysis were applied to simple images mainly for defect detection, segmentation and classification by the Computer Science community. From the late 80s, the chemometric community joined this field introducing powerful tools for image analysis, which were already in use for the study of classical spectroscopic data sets and were appropriately modified to fit the particular characteristics of image structures. These chemometric approaches adapt to images of all kinds, from the simplest to the hyperspectral images, and have provided new insights on the spatial and spectroscopic information of this kind of data sets. New fields open by the introduction of chemometrics on image analysis are exploratory image analysis, multivariate statistical process control (monitoring), multivariate image regression or image resolution. This paper reviews the different techniques developed in image analysis and shows the evolution in the information provided by the different methodologies, which has been heavily pushed by the increasing complexity of the image measurements in the spatial and, particularly, in the spectral direction.",
"Information from on-line imaging sensors has great potential for the monitoring and control of spatially distributed systems. The major difficulty lies in the efficient extraction of information from the images in real-time, information such as the frequencies of occurrence of specific features and their locations in the process or product space. This paper uses multivariate image analysis (MIA) methods based on multiway principal component analysis to decompose the highly correlated data present in multispectral images. The frequencies of occurrence of certain features in the image, regardless of their spatial locations, can be easily monitored in the space of the principal components (PC). The spatial locations of these features in the original image space can then be obtained by transposing highlighted pixels from the PC space into the original image space. In this manner it is possible to easily detect and locate (even very subtle) features from real-time imaging sensors for the purpose of performing ...",
"Multivariate image analysis (MIA) is a methodology for analyzing multivariate images, where the image coordinates are position (two- or three-dimensions) and variable number. Multivariate images ca ..."
]
} |
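The PCA-based compression used in multivariate image analysis treats each pixel's channel values as one observation and projects them onto a few principal components, yielding one score map per component. A minimal NumPy sketch (illustrative only, not the cited MIA toolchain) is:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 16, 16, 5                 # a small multispectral image with 5 bands
image = rng.normal(size=(H, W, C))

# Stack pixels as rows: one observation per pixel, one column per band.
X = image.reshape(-1, C)
X = X - X.mean(axis=0)              # center each band

# Principal components from the SVD of the centered pixel matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                               # keep the two strongest components
scores = X @ Vt[:k].T               # (H*W, k): projection onto the top-k PCs

score_maps = scores.reshape(H, W, k)  # back to spatial layout for inspection
assert score_maps.shape == (H, W, k)
```

Features are then detected in the low-dimensional PC score space and mapped back to their spatial locations, as described in the MIA literature above.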
1612.08153 | 2564518778 | Human identification remains to be one of the challenging tasks in computer vision community due to drastic changes in visual features across different viewpoints, lighting conditions, occlusion, etc. Most of the literature has been focused on exploring human re-identification across viewpoints that are not too drastically different in nature. Cameras usually capture oblique or side views of humans, leaving room for a lot of geometric and visual reasoning. Given the recent popularity of egocentric and top-view vision, re-identification across these two drastically different views can now be explored. Having an egocentric and a top view video, our goal is to identify the cameraman in the content of the top-view video, and also re-identify the people visible in the egocentric video, by matching them to the identities present in the top-view video. We propose a CRF-based method to address the two problems. Our experimental results demonstrates the efficiency of the proposed approach over a variety of video recorded from two views. | Visual analysis of egocentric videos has recently become a hot research topic in computer vision @cite_17 @cite_14 , from recognizing daily activities @cite_21 @cite_30 to object detection @cite_1 , video summarization @cite_6 , and predicting gaze behavior @cite_16 @cite_41 @cite_18 . Some studies have addressed relationships between moving and static cameras. Interesting works reported in @cite_42 @cite_13 have explored the relationship between mobile and static cameras for the purpose of improving object detection accuracy. @cite_34 fuses information from egocentric and exocentric vision (third-person static cameras in the environment) with laser range data to improve depth perception and 3D reconstruction. Park @cite_7 predict gaze behavior in social scenes using first-person and third-person cameras.
Soran @cite_15 have addressed action recognition in the presence of an egocentric video and multiple static videos. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_41",
"@cite_21",
"@cite_1",
"@cite_42",
"@cite_6",
"@cite_34",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2149276562",
"2129053638",
"",
"",
"",
"2212494831",
"2031688197",
"",
"2120645068",
"1980296915",
"2253138976",
"2136668269",
"",
""
],
"abstract": [
"We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.",
"Several visual attention models have been proposed for describing eye movements over simple stimuli and tasks such as free viewing or visual search. Yet, to date, there exists no computational framework that can reliably mimic human gaze behavior in more complex environments and tasks such as urban driving. In addition, benchmark datasets, scoring techniques, and top-down model architectures are not yet well understood. In this paper, we describe new task-dependent approaches for modeling top-down overt visual attention based on graphical models for probabilistic inference and reasoning. We describe a dynamic Bayesian network that infers probability distributions over attended objects and spatial locations directly from observed data. Probabilistic inference in our model is performed over object-related functions that are fed from manual annotations of objects in video scenes or by state-of-the-art object detection recognition algorithms. Evaluating over approximately 3 h (approximately 315 000 eye fixations and 12 000 saccades) of observers playing three video games (time-scheduling, driving, and flight combat), we show that our approach is significantly more predictive of eye fixations compared to: 1) simpler classifier-based models also developed here that map a signature of a scene (multimodal information from gist, bottom-up saliency, physical actions, and events) to eye positions; 2) 14 state-of-the-art bottom-up saliency models; and 3) brute-force algorithms such as mean eye position. Our results show that the proposed model is more effective in employing and reasoning over spatio-temporal visual data compared with the state-of-the-art.",
"",
"",
"",
"We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.",
"This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.",
"",
"We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.",
"The user interface is the central element of a telepresence robotic system and its visualization modalities greatly affect the operator's situation awareness, and thus its performance. Depending on the task at hand and the operator's preferences, going from ego- and exocentric viewpoints and improving the depth representation can provide better perspectives of the operation environment. Our system, which combines a 3D reconstruction of the environment using laser range finder readings with two video projection methods, allows the operator to easily switch from ego- to exocentric viewpoints. This paper presents the interface developed and demonstrates its capabilities by having 13 operators teleoperate a mobile robot in a navigation task.",
"In this paper, we study the problem of recognizing human actions in the presence of a single egocentric camera and multiple static cameras. Some actions are better presented in static cameras, where the whole body of an actor and the context of actions are visible. Some other actions are better recognized in egocentric cameras, where subtle movements of hands and complex object interactions are visible. In this paper, we introduce a model that can benefit from the best of both worlds by learning to predict the importance of each camera in recognizing actions in each frame. By joint discriminative learning of latent camera importance variables and action classifiers, our model achieves successful results in the challenging CMU-MMAC dataset. Our experimental results show significant gain in learning to use the cameras according to their predicted importance. The learned latent variables provide a level of understanding of a scene that enables automatic cinematography by smoothly switching between cameras in order to maximize the amount of relevant information in each frame.",
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"",
""
]
} |
1612.08220 | 2562979205 | While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models. | Efforts to understand neural vector space models in natural language processing (NLP) date back to the earliest work, in which embeddings were visualized by low-dimensional projection . Recent work includes visualizing state activation @cite_5 @cite_13 , interpreting semantic dimensions by presenting humans with a list of words and asking them to choose outliers @cite_33 @cite_9 , linking dimensions with semantic lexicons or word properties @cite_19 @cite_30 , learning sparse interpretable word vectors @cite_26 , and empirical studies of LSTM components @cite_38 @cite_11 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_26",
"@cite_33",
"@cite_9",
"@cite_19",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"1689711448",
"2949291345",
"",
"",
"2952828476",
"",
"1951216520",
"1924770834"
],
"abstract": [
"",
"Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( @math years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.",
"Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.",
"",
"",
"Vector space word representations are learned from distributional information of words in large corpora. Although such statistics are semantically informative, they disregard the valuable information that is contained in semantic lexicons such as WordNet, FrameNet, and the Paraphrase Database. This paper proposes a method for refining vector space representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations, and it makes no assumptions about how the input vectors were constructed. Evaluated on a battery of standard lexical semantic evaluation tasks in several languages, we obtain substantial improvements starting with a variety of word vector models. Our refinement method outperforms prior techniques for incorporating semantic lexicons into the word vector training algorithms.",
"",
"Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.",
"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM."
]
} |
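The representation-erasure idea in the abstract above can be sketched in a few lines: with a toy linear bag-of-words scorer standing in for a trained model (the scorer, vocabulary, and vectors below are all invented for illustration, not the paper's networks), the importance of each input word is the drop in the model's score when that word is erased.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = ["the", "movie", "was", "terrible", "acting"]
emb = {w: rng.normal(size=4) for w in vocab}   # toy word vectors
w_out = rng.normal(size=4)                      # toy sentiment readout direction

def score(words):
    """Mean-of-embeddings sentence representation, scored by a linear readout."""
    vecs = np.stack([emb[w] for w in words])
    return float(vecs.mean(axis=0) @ w_out)

sentence = ["the", "movie", "was", "terrible"]
base = score(sentence)

# Erasure importance: how much the score changes when each word is dropped.
importance = {w: base - score([u for u in sentence if u != w]) for w in sentence}
ranked = sorted(importance, key=lambda w: abs(importance[w]), reverse=True)
```

The same relative-difference measure extends from input words to word-vector dimensions or hidden units by zeroing the corresponding coordinates instead of dropping tokens.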
1612.08220 | 2562979205 | While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models. | Attention @cite_6 @cite_40 @cite_28 @cite_22 @cite_0 provides an important way to explain the workings of neural models, at least for tasks with an alignment modeled between inputs and outputs, like machine translation or summarization. Representation erasure can be applied to attention-based models as well, and can also be used for tasks that aren't well-modeled by attention. | {
"cite_N": [
"@cite_22",
"@cite_28",
"@cite_6",
"@cite_0",
"@cite_40"
],
"mid": [
"1843891098",
"",
"2133564696",
"2255577267",
"2949335953"
],
"abstract": [
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single \"hop\" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker."
]
} |
1612.07843 | 2561412020 | Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications. | In the context of neural networks for text classification @cite_1 proposed to extract salient sentences from text documents using loss gradient magnitudes. 
In order to validate the pertinence of the sentences extracted via the neural network classifier, the latter work proposed to subsequently use these sentences as an input to an external classifier and compare the resulting classification performance to random and heuristic sentence selection. The work by @cite_6 also employs gradient magnitudes to identify salient words within sentences, analogously to the method proposed in computer vision by @cite_25 . However their analysis is based on qualitative interpretation of saliency heatmaps for exemplary sentences. In addition to the heatmap visualizations, we provide a classifier-intrinsic quantitative validation of the word-level relevances. We furthermore extend previous work from @cite_29 by adding a BoW SVM baseline to the experiments and proposing a new criterion for assessing model explanatory power. | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_25",
"@cite_6"
],
"mid": [
"2964178496",
"1902674502",
"2962851944",
"2964159778"
],
"abstract": [
"Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables. In this paper, we apply LRP for the first time to natural language processing (NLP). More precisely, we use it to explain the predictions of a convolutional neural network (CNN) trained on a topic categorization task. Our analysis highlights which words are relevant for a specific prediction of the CNN. We compare our technique to standard sensitivity analysis, both qualitatively and quantitatively, using a “word deleting” perturbation experiment, a PCA analysis, and various visualizations. All experiments validate the suitability of LRP for explaining the CNN predictions, which is also in line with results reported in recent image classification studies.",
"We present a hierarchical convolutional document model with an architecture designed to support introspection of the document structure. Using this model, we show how to use visualisation techniques from the computer vision literature to identify and extract topic-relevant sentences. We also introduce a new scalable evaluation technique for automatic sentence extraction systems that avoids the need for time consuming human annotation of validation data.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"While neural networks have been successfully applied to many NLP tasks the resulting vector-based models are very difficult to interpret. For example it’s not clear how they achieve compositionality, building sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce methods for visualizing a unit’s salience, the amount that it contributes to the final composed meaning from first-order derivatives. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks."
]
} |
1612.07766 | 2563758362 | Blockchain protocols are inherently limited in transaction throughput and latency. Recent efforts to address performance and scale blockchains have focused on off-chain payment channels. While such channels can achieve low latency and high throughput, deploying them securely on top of the Bitcoin blockchain has been difficult, partly because building a secure implementation requires changes to the underlying protocol and the ecosystem. We present Teechan, a full-duplex payment channel framework that exploits trusted execution environments. Teechan can be deployed securely on the existing Bitcoin blockchain without having to modify the protocol. It: (i) achieves a higher transaction throughput and lower transaction latency than prior solutions; (ii) enables unlimited full-duplex payments as long as the balance does not exceed the channel's credit; (iii) requires only a single message to be sent per payment in any direction; and (iv) places at most two transactions on the blockchain under any execution scenario. We have built and deployed the Teechan framework using Intel SGX on the Bitcoin network. Our experiments show that, not counting network latencies, Teechan can achieve 2,480 transactions per second on a single channel, with sub-millisecond latencies. | Direct payments were proposed by Chaum @cite_33 in ecash to achieve privacy. The ecash assumptions are significantly weaker than those offered by payment channels. Mainly, cheating is enforced in retrospect, through external punishment mechanisms. | {
"cite_N": [
"@cite_33"
],
"mid": [
"1601001795"
],
"abstract": [
"Automation of the way we pay for goods and services is already underway, as can be seen by the variety and growth of electronic banking services available to consumers. The ultimate structure of the new electronic payments system may have a substantial impact on personal privacy as well as on the nature and extent of criminal use of payments. Ideally a new payments system should address both of these seemingly conflicting sets of concerns."
]
} |
1612.07766 | 2563758362 | Blockchain protocols are inherently limited in transaction throughput and latency. Recent efforts to address performance and scale blockchains have focused on off-chain payment channels. While such channels can achieve low latency and high throughput, deploying them securely on top of the Bitcoin blockchain has been difficult, partly because building a secure implementation requires changes to the underlying protocol and the ecosystem. We present Teechan, a full-duplex payment channel framework that exploits trusted execution environments. Teechan can be deployed securely on the existing Bitcoin blockchain without having to modify the protocol. It: (i) achieves a higher transaction throughput and lower transaction latency than prior solutions; (ii) enables unlimited full-duplex payments as long as the balance does not exceed the channel's credit; (iii) requires only a single message to be sent per payment in any direction; and (iv) places at most two transactions on the blockchain under any execution scenario. We have built and deployed the Teechan framework using Intel SGX on the Bitcoin network. Our experiments show that, not counting network latencies, Teechan can achieve 2,480 transactions per second on a single channel, with sub-millisecond latencies. | Several proposals address the performance issues of the Bitcoin network, from the GHOST protocol and alternatives to the chain structure @cite_14 @cite_30 @cite_12 , to alternative block generation techniques @cite_25 @cite_8 @cite_15 . Others @cite_23 @cite_19 @cite_3 build on classical consensus protocols @cite_22 or operate in permissioned settings. While they all improve on the Nakamoto blockchain performance, none can reach the performance offered by direct channels that do not require global system consensus for each transaction. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2295318496",
"2397661077",
"2126087831",
"2963517786",
"2534313446",
"",
"",
"",
"2181397138",
""
],
"abstract": [
"Distributed cryptographic protocols such as Bitcoin and Ethereum use a data structure known as the block chain to synchronize a global log of events between nodes in their network. Blocks, which are batches of updates to the log, reference the parent they are extending, and thus form the structure of a chain. Previous research has shown that the mechanics of the block chain and block propagation are constrained: if blocks are created at a high rate compared to their propagation time in the network, many conflicting blocks are created and performance suffers greatly. As a result of the low block creation rate required to keep the system within safe parameters, transactions take long to securely confirm, and their throughput is greatly limited.",
"Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. 
The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS.",
"While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin's open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than PayPal currently handles, with a confirmation latency of 15-20 seconds.",
"The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network.",
"",
"",
"",
"Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
""
]
} |
1612.07766 | 2563758362 | Blockchain protocols are inherently limited in transaction throughput and latency. Recent efforts to address performance and scale blockchains have focused on off-chain payment channels. While such channels can achieve low latency and high throughput, deploying them securely on top of the Bitcoin blockchain has been difficult, partly because building a secure implementation requires changes to the underlying protocol and the ecosystem. We present Teechan, a full-duplex payment channel framework that exploits trusted execution environments. Teechan can be deployed securely on the existing Bitcoin blockchain without having to modify the protocol. It: (i) achieves a higher transaction throughput and lower transaction latency than prior solutions; (ii) enables unlimited full-duplex payments as long as the balance does not exceed the channel's credit; (iii) requires only a single message to be sent per payment in any direction; and (iv) places at most two transactions on the blockchain under any execution scenario. We have built and deployed the Teechan framework using Intel SGX on the Bitcoin network. Our experiments show that, not counting network latencies, Teechan can achieve 2,480 transactions per second on a single channel, with sub-millisecond latencies. | Decker and Wattenhofer @cite_17 were the first to realize Duplex Micropayment Channels (DMC), improving the exhaustion limit. In DMC, two parties form a pair of channels, one in each direction, and re-balance them as needed, that is, when the credit in one direction is depleted but after there have been transactions in the opposite direction. However, the number of resets possible is limited at channel construction, depending on the time allotted for the refund timeout and the bound on the time to place a transaction on the blockchain. Therefore the total amount that can be sent on the channel in one direction is bounded by the deposit amount times the maximal number of resets. 
DMC also requires changes to Bitcoin. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2288007454"
],
"abstract": [
"Bitcoin does not scale, because its synchronization mechanism, the blockchain, limits the maximum rate of transactions the network can process. However, using off-blockchain transactions it is possible to create long-lived channels over which an arbitrary number of transfers can be processed locally between two users, without any burden to the Bitcoin network. These channels may form a network of payment service providers (PSPs). Payments can be routed between any two users in real time, without any confirmation delay. In this work we present a protocol for duplex micropayment channels, which guarantees end-to-end security and allows instant transfers, laying the foundation of the PSP network."
]
} |
1612.07710 | 2950458457 | We consider the problem of approximate set similarity search under Braun-Blanquet similarity @math . The @math -approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets @math such that, given a query set @math , if there exists @math with @math , then we can efficiently return @math with @math . We present a simple data structure that solves this problem with space usage @math and query time @math where @math and @math . Making use of existing lower bounds for locality-sensitive hashing by O'Donnell et al. (TOCT 2014) we show that this value of @math is tight across the parameter space, i.e., for every choice of constants @math . In the case where all sets have the same size our solution strictly improves upon the value of @math that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder's MinHash (CCS 1997) for Jaccard similarity and Andoni et al.'s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015). | Similarity search under set similarity and the batched version often referred to as @cite_2 @cite_37 have also been studied extensively in the information retrieval and database literature, but mostly without providing theoretical guarantees on performance. Recently the notion of containment search, where the similarity measure is the (unnormalized) intersection size, was studied in the LSH framework @cite_32 . This is a special case of search @cite_32 @cite_22 . However, these techniques do not give improvements in our setting. | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_32",
"@cite_2"
],
"mid": [
"2097776316",
"2218762779",
"1455310343",
""
],
"abstract": [
"Given a large collection of sparse vector data in a high dimensional space, we investigate the problem of finding all pairs of vectors whose similarity score (as determined by a function such as cosine distance) is above a given threshold. We propose a simple algorithm based on novel indexing and optimization strategies that solves this problem without relying on approximation methods or extensive parameter tuning. We show the approach efficiently handles a variety of datasets across a wide setting of similarity thresholds, with large speedups over previous state-of-the-art approaches.",
"A number of tasks in classification, information retrieval, recommendation systems, and record linkage reduce to the core problem of inner product similarity join (IPS join): identifying pairs of vectors in a collection that have a sufficiently large inner product. IPS join is well understood when vectors are normalized and some approximation of inner products is allowed. However, the general case where vectors may have any length appears much more challenging. Recently, new upper bounds based on asymmetric locality-sensitive hashing (ALSH) and asymmetric embeddings have emerged, but little has been known on the lower bound side. In this paper we initiate a systematic study of inner product similarity join, showing new lower and upper bounds. Our main results are: Approximation hardness of IPS join in subquadratic time, assuming the strong exponential time hypothesis. New upper and lower bounds for (A)LSH-based algorithms. In particular, we show that asymmetry can be avoided by relaxing the LSH definition to only consider the collision probability of distinct elements. A new indexing method for IPS based on linear sketches, implying that our hardness results are not far from being tight. Our technical contributions include new asymmetric embeddings that may be of independent interest. At the conceptual level we strive to provide greater clarity, for example by distinguishing among signed and unsigned variants of IPS join and shedding new light on the effect of asymmetry.",
"Minwise hashing (Minhash) is a widely popular indexing scheme in practice. Minhash is designed for estimating set resemblance and is known to be suboptimal in many applications where the desired measure is set overlap (i.e., inner product between binary vectors) or set containment. Minhash has inherent bias towards smaller sets, which adversely affects its performance in applications where such a penalization is not desirable. In this paper, we propose asymmetric minwise hashing (MH-ALSH), to provide a solution to this well-known problem. The new scheme utilizes asymmetric transformations to cancel the bias of traditional minhash towards smaller sets, making the final \"collision probability\" monotonic in the inner product. Our theoretical comparisons show that, for the task of retrieving with binary inner products, asymmetric minhash is provably better than traditional minhash and other recently proposed hashing algorithms for general inner products. Thus, we obtain an algorithmic improvement over existing approaches in the literature. Experimental evaluations on four publicly available high-dimensional datasets validate our claims. The proposed scheme outperforms, often significantly, other hashing algorithms on the task of near neighbor retrieval with set containment. Our proposal is simple and easy to implement in practice.",
""
]
} |
1612.07710 | 2950458457 | We consider the problem of approximate set similarity search under Braun-Blanquet similarity @math . The @math -approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets @math such that, given a query set @math , if there exists @math with @math , then we can efficiently return @math with @math . We present a simple data structure that solves this problem with space usage @math and query time @math where @math and @math . Making use of existing lower bounds for locality-sensitive hashing by O'Donnell et al. (TOCT 2014) we show that this value of @math is tight across the parameter space, i.e., for every choice of constants @math . In the case where all sets have the same size our solution strictly improves upon the value of @math that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broder's MinHash (CCS 1997) for Jaccard similarity and Andoni et al.'s cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-dependent method by Andoni and Razenshteyn (STOC 2015). | Similarity Estimation. Finally, we mention that another application of MinHash @cite_7 @cite_35 is the (easier) problem of similarity estimation, where the task is to condense each vector @math into a short signature @math in such a way that the similarity @math can be estimated from @math and @math . A related similarity estimation technique was independently discovered by Cohen @cite_19 . Thorup @cite_38 has shown how to perform similarity estimation using just a small amount of randomness in the definition of the function @math .
In another direction, @cite_14 showed that it is possible to improve the performance of MinHash for similarity estimation when the Jaccard similarity is close to 1, but for smaller similarities it is known that succinct encodings of MinHash such as the one in @cite_16 comes within a constant factor of the optimal space for storing @math @cite_33 . Curiously, our improvement to MinHash in the context of similarity comes when the similarity is neither too large nor too small. Our techniques do not seem to yield any improvement for the similarity problem. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_19",
"@cite_16"
],
"mid": [
"2152565070",
"2134212491",
"",
"2011737794",
"2132069633",
"1965996575",
""
],
"abstract": [
"We have developed an efficient way to determine the syntactic similarity of files and have applied it to every document on the World Wide Web. Using this mechanism, we built a clustering of all the documents that are syntactically similar. Possible applications include a \"Lost and Found\" service, filtering the results of Web searches, updating widely distributed web-pages, and identifying violations of intellectual property rights.",
"We consider bottom-k sampling for a set X, picking a sample Sk(X) consisting of the k elements that are smallest according to a given hash function h. With this sample we can estimate the relative size f=|Y|/|X| of any subset Y as |Sk(X) intersect Y|/k. A standard application is the estimation of the Jaccard similarity f=|A intersect B|/|A union B| between sets A and B. Given the bottom-k samples from A and B, we construct the bottom-k sample of their union as Sk(A union B)=Sk(Sk(A) union Sk(B)), and then the similarity is estimated as |Sk(A union B) intersect Sk(A) intersect Sk(B)|/k. We show here that even if the hash function is only 2-independent, the expected relative error is O(1/√(fk)). For fk=Omega(1) this is within a constant factor of the expected relative error with truly random hashing. For comparison, consider the classic approach of k×min-wise where we use k independent hash functions h1,...,hk, storing the smallest element with each hash function. For k×min-wise there is an at least constant bias with constant independence, and it is not reduced with larger k. Recently showed that bottom-k circumvents the bias if the hash function is 8-independent and k is sufficiently large. We get down to 2-independence for any k. Our result is based on a simple union bound, transferring generic concentration bounds for the hashing scheme to the bottom-k sample, e.g., getting stronger probability error bounds with higher independence. For weighted sets, we consider priority sampling which adapts efficiently to the concrete input weights, e.g., benefiting strongly from heavy-tailed input. This time, the analysis is much more involved, but again we show that generic concentration bounds can be applied.",
"",
"Min-wise hashing is an important method for estimating the size of the intersection of sets, based on a succinct summary (a \"min-hash\") of each set. One application is estimation of the number of data points that satisfy the conjunction of m >= 2 simple predicates, where a min-hash is available for the set of points satisfying each predicate. This has application in query optimization and for approximate computation of COUNT aggregates. In this paper we address the question: How many bits is it necessary to allocate to each summary in order to get an estimate with (1 ± epsilon)-relative error? The state-of-the-art technique for minimizing the encoding size, for any desired estimation error, is b-bit min-wise hashing due to Li and Konig (Communications of the ACM, 2011). We give new lower and upper bounds: Using information complexity arguments, we show that b-bit min-wise hashing is space optimal for m=2 predicates in the sense that the estimator's variance is within a constant factor of the smallest possible among all summaries with the given space usage. But for conjunctions of m>2 predicates we show that the performance of b-bit min-wise hashing (and more generally any method based on \"k-permutation\" min-hash) deteriorates as m grows. We describe a new summary that nearly matches our lower bound for m >= 2. It asymptotically outperforms all k-permutation schemes (by around a factor Omega(m log m)), as well as methods based on subsampling (by a factor Omega(log n_max), where n_max is the maximum set size).",
"Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of \"roughly the same\" and \"roughly contained.\" The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.",
"Computing the transitive closure in directed graphs is a fundamental graph problem. We consider the more restricted problem of computing the number of nodes reachable from every node and the size of the transitive closure. The fastest known transitive closure algorithms run in O(min{mn, n^2.38}) time, where n is the number of nodes and m the number of edges in the graph. We present an O(m) time randomized (Monte Carlo) algorithm that estimates, with small relative error, the sizes of all reachability sets and the transitive closure. Another ramification of our estimation scheme is an O(m) time algorithm for estimating sizes of neighborhoods in directed graphs with nonnegative edge lengths. Our size-estimation algorithms are much faster than performing the respective explicit computations.",
""
]
} |
1612.07936 | 2565891322 | We introduce a model concerning radiational gaseous stars and establish the existence theory of stationary solutions to the free boundary problem of hydrostatic equations describing the radiative equilibrium. We also address the local well-posedness of the moving boundary problem of the corresponding Navier-Stokes-Fourier-Poisson system and establish a priori estimates of strong and classical solutions. Our results explore the vacuum behaviour of density and temperature near the free boundary for the equilibrium and capture such degeneracy in the evolutionary problem. | It shall be emphasised that the physical vacuum poses significant challenges in the study of the evolutionary problem of . That is, to study the following Euler-Poisson system. Indeed, the physical vacuum boundary implies that the sound speed @math is only @math -Hölder continuous rather than Lipschitz continuous across the boundary. As pointed out in @cite_30 , this singularity makes the standard hyperbolic method fail. It is only recently that the local well-posedness theory has been studied by Coutand, Lindblad and Shkoller @cite_13 @cite_32 @cite_23 , Jang and Masmoudi @cite_19 @cite_9 , Gu and Lei @cite_3 @cite_15 , and Luo, Xin and Zeng @cite_5 in the settings of one spatial dimension, three spatial dimensions, and spherical symmetry with or without self-gravitation. See @cite_24 for a viscous flow. We refer to @cite_30 @cite_10 @cite_26 @cite_8 for other discussions on the physical vacuum problem, and to the references therein for other vacuum problems. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_13"
],
"mid": [
"",
"2064683888",
"",
"2011819936",
"2019680912",
"",
"",
"2080448000",
"2964059356",
"",
"2163734752",
"2963373228",
"2133010631"
],
"abstract": [
"",
"In this paper we establish the global existence of smooth solutions to vacuum free boundary problems of the one-dimensional compressible isentropic Navier–Stokes equations for which the smoothness extends all the way to the boundaries. The results obtained in this work include the physical vacuum for which the sound speed is C^{1/2}-Hölder continuous near the vacuum boundaries when 1 < γ < 3. The novelty of this result is its global-in-time regularity, which is in contrast to the previous main results of global weak solutions in the literature. Moreover, in previous studies of the one-dimensional free boundary problems of compressible Navier–Stokes equations, the Lagrangian mass coordinates method has often been used, but in the present work the particle path (flow trajectory) method is adopted, which has the advantage that the particle paths and, in particular, the free boundaries can be traced.",
"",
"For the physical vacuum free boundary problem with the sound speed being C^{1/2}-Hölder continuous near vacuum boundaries of the one-dimensional compressible Euler equations with damping, the global existence of the smooth solution is proved, which is shown to converge to the Barenblatt self-similar solution for the porous media equation with the same total mass when the initial datum is a small perturbation of the Barenblatt solution. The pointwise convergence with a rate of density, the convergence rate of velocity in the supremum norm, and the precise expanding rate of the physical vacuum boundaries are also given. The proof is based on a construction of higher-order weighted functionals with both space and time weights capturing the behavior of solutions both near vacuum states and in large time, an introduction of a new ansatz, higher-order nonlinear energy estimates, and elliptic estimates. © 2016 Wiley Periodicals, Inc.",
"An important problem in gas and fluid dynamics is to understand the behavior of vacuum states, namely the behavior of the system in the presence of a vacuum. In particular, physical vacuum, in which the boundary moves with a nontrivial finite normal acceleration, naturally arises in the study of the motion of gaseous stars or shallow water. Despite its importance, there are only a few mathematical results available near a vacuum. The main difficulty lies in the fact that the physical systems become degenerate along the vacuum boundary. In this paper, we establish the local-in-time well-posedness of three-dimensional compressible Euler equations for polytropic gases with a physical vacuum by considering the problem as a free boundary problem. © 2015 Wiley Periodicals, Inc.",
"",
"",
"We establish the local-in-time well-posedness of strong solutions to the vacuum free boundary problem of the compressible Navier–Stokes–Poisson system in the spherically symmetric and isentropic motion. Our result captures the physical vacuum boundary behavior of the Lane–Emden star configurations for all adiabatic exponents ( < 6 5 ) .",
"This paper is concerned with the 1-D compressible Euler–Poisson equations with moving physical vacuum boundary condition. It is usually used to describe the motion of a self-gravitating inviscid gaseous star. The local well-posedness of classical solutions is established in the case of the adiabatic index 1 < γ < 3.",
"",
"This paper concerns the well-posedness theory of the motion of a physical vacuum for the compressible Euler equations with or without self-gravitation. First, a general uniqueness theorem of classical solutions is proved for the three dimensional general motion. Second, for the spherically symmetric motions, without imposing the compatibility condition of the first derivative being zero at the center of symmetry, a new local-in-time existence theory is established in a functional space involving less derivatives than those constructed for three-dimensional motions in (, Commun Math Phys 296:559–587, 2010; Coutand and Shkoller, Arch Ration Mech Anal 206:515–616, 2012; Jang and Masmoudi, Well-posedness of compressible Euler equations in a physical vacuum, 2008) by constructing suitable weights and cutoff functions featuring the behavior of solutions near both the center of the symmetry and the moving vacuum boundary.",
"This paper is concerned with the three dimensional compressible Euler–Poisson equations with moving physical vacuum boundary condition. This fluid system is usually used to describe the motion of a self-gravitating inviscid gaseous star. The local existence of classical solutions for initial data in certain weighted Sobolev spaces is established in the case that the adiabatic index satisfies 1 < γ < 3.",
"We prove a priori estimates for the three-dimensional compressible Euler equations with moving physical vacuum boundary, with an equation of state given by p(ρ) = C_γ ρ^γ for γ > 1. The vacuum condition necessitates the vanishing of the pressure, and hence density, on the dynamic boundary, which creates a degenerate and characteristic hyperbolic free-boundary system to which standard methods of symmetrizable hyperbolic equations cannot be applied."
]
} |
1612.08082 | 2963806098 | This paper proposes a novel approach for constructing effective personalized policies when the observed data lacks counter-factual information, is biased and possesses many features. The approach is applicable in a wide variety of settings from healthcare to advertising to education to finance. These settings have in common that the decision maker can observe, for each previous instance, an array of features of the instance, the action taken in that instance, and the reward realized—but not the rewards of actions that were not taken: the counterfactual information. Learning in such settings is made even more difficult because the observed data is typically biased by the existing policy (that generated the data) and because the array of features that might affect the reward in a particular instance—and hence should be taken into account in deciding on an action in each particular instance—is often vast. The approach presented here estimates propensity scores for the observed data, infers counterfactuals, identifies a (relatively small) number of features that are (most) relevant for each possible action and instance, and prescribes a policy to be followed. Comparison of the proposed algorithm against state-of-art algorithms on actual datasets demonstrates that the proposed algorithm achieves a significant improvement in performance. | From a technical point of view, our work is perhaps most closely related to @cite_4 in that we use similar methods (IPS-estimates and empirical Bernstein inequalities) to learn counterfactuals. However, that work does not treat observational data in which the bias is unknown, and it does not identify relevant features. Another similar work on policy optimization from observational data is @cite_17 . | {
"cite_N": [
"@cite_4",
"@cite_17"
],
"mid": [
"1835900096",
"2951403958"
],
"abstract": [
"We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem (, 2013) through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. In analogy to the Structural Risk Minimization principle of Wapnik and Tscherwonenkis (1979), these constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method--called Policy Optimizer for Exponential Models (POEM)--for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. The effectiveness and efficiency of POEM is evaluated on several simulated multi-label classification problems, as well as on a real-world information retrieval problem. The empirical results show that the CRM objective implemented in POEM provides improved robustness and generalization performance compared to the state-of-the-art.",
"We provide a sound and consistent foundation for the use of exploration data in \"contextual bandit\" or \"partially labeled\" settings where only the value of a chosen action is learned. The primary challenge in a variety of settings is that the exploration policy, in which \"offline\" data is logged, is not explicitly known. Prior solutions here require either control of the actions during the learning process, recorded random exploration, or actions chosen obliviously in a repeated manner. The techniques reported here lift these restrictions, allowing the learning of a policy for choosing actions given features from historical data where no randomization occurred or was logged. We empirically verify our solution on two reasonably sized sets of real-world data obtained from Yahoo!."
]
} |
1612.08082 | 2963806098 | This paper proposes a novel approach for constructing effective personalized policies when the observed data lacks counter-factual information, is biased and possesses many features. The approach is applicable in a wide variety of settings from healthcare to advertising to education to finance. These settings have in common that the decision maker can observe, for each previous instance, an array of features of the instance, the action taken in that instance, and the reward realized—but not the rewards of actions that were not taken: the counterfactual information. Learning in such settings is made even more difficult because the observed data is typically biased by the existing policy (that generated the data) and because the array of features that might affect the reward in a particular instance—and hence should be taken into account in deciding on an action in each particular instance—is often vast. The approach presented here estimates propensity scores for the observed data, infers counterfactuals, identifies a (relatively small) number of features that are (most) relevant for each possible action and instance, and prescribes a policy to be followed. Comparison of the proposed algorithm against state-of-art algorithms on actual datasets demonstrates that the proposed algorithm achieves a significant improvement in performance. | With respect to learning, feature selection methods can be divided into three categories -- filter models, wrapper models, and embedded models @cite_20 . Our method is most similar to filter techniques, in which features are ranked according to a selected criterion such as a Fisher score @cite_33 , correlation based scores @cite_24 , mutual information based scores @cite_3 @cite_6 @cite_29 , the Hilbert-Schmidt Independence Criterion (HSIC) @cite_24 , and Relief and its variants @cite_21 @cite_16 , etc., and the features having the highest ranks are labeled as relevant. However, these existing methods are developed for classification problems and they cannot easily handle datasets in which the rewards of actions not taken are missing. | {
"cite_N": [
"@cite_33",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_16",
"@cite_20"
],
"mid": [
"",
"2154053567",
"1583700199",
"",
"1661871015",
"2127658298",
"1500895378",
"2911681169"
],
"abstract": [
"",
"Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminant analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy.",
"In real-world concept learning problems, the representation of data often uses many features, only a few of which may be related to the target concept. In this situation, feature selection is important both to speed up learning and to improve concept quality. A new feature selection algorithm Relief uses a statistical method and avoids heuristic search. Relief requires linear time in the number of given features and the number of training instances regardless of the target concept to be learned. Although the algorithm does not necessarily find the smallest subset of features, the size tends to be small because only statistically relevant features are selected. This paper focuses on empirical test results in two artificial domains; the LED Display domain and the Parity domain with and without noise. Comparison with other feature selection algorithms shows Relief's advantages in terms of learning time and the accuracy of the learned concept, suggesting Relief's practicality.",
"",
"Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality",
"We introduce a framework for feature selection based on dependence maximization between the selected features and the labels of an estimation problem, using the Hilbert-Schmidt Independence Criterion. The key idea is that good features should be highly dependent on the labels. Our approach leads to a greedy procedure for feature selection. We show that a number of existing feature selectors are special cases of this framework. Experiments on both artificial and real-world data show that our feature selector works well in practice.",
"Relief algorithms are general and successful attribute estimators. They are able to detect conditional dependencies between attributes and provide a unified view on attribute estimation in regression and classification. In addition, their quality estimates have a natural interpretation. While they have commonly been viewed as feature subset selection methods that are applied in a preprocessing step before a model is learned, they have actually been used successfully in a variety of settings, e.g., to select splits or to guide constructive induction in the building phase of decision or regression tree learning, as the attribute weighting method, and also in inductive logic programming. A broad spectrum of successful uses calls for especially careful investigation of the various features Relief algorithms have. In this paper we theoretically and empirically investigate and discuss how and why they work, their theoretical and practical properties, their parameters, what kind of dependencies they detect, how they scale up to large numbers of examples and features, how to sample data for them, how robust they are regarding noise, how irrelevant and redundant attributes influence their output, and how different metrics influence them.",
"Nowadays, the growth of high-throughput technologies has resulted in exponential growth in the harvested data with respect to both dimensionality and sample size. The trend of this growth of the UCI machine learning repository is shown in Figure 1. Efficient and effective management of these data becomes increasingly challenging. Traditionally, manual management of these datasets has proved to be impractical. Therefore, data mining and machine learning techniques were developed to automatically discover knowledge and recognize ..."
]
} |
1612.08082 | 2963806098 | This paper proposes a novel approach for constructing effective personalized policies when the observed data lacks counter-factual information, is biased and possesses many features. The approach is applicable in a wide variety of settings from healthcare to advertising to education to finance. These settings have in common that the decision maker can observe, for each previous instance, an array of features of the instance, the action taken in that instance, and the reward realized—but not the rewards of actions that were not taken: the counterfactual information. Learning in such settings is made even more difficult because the observed data is typically biased by the existing policy (that generated the data) and because the array of features that might affect the reward in a particular instance—and hence should be taken into account in deciding on an action in each particular instance—is often vast. The approach presented here estimates propensity scores for the observed data, infers counterfactuals, identifies a (relatively small) number of features that are (most) relevant for each possible action and instance, and prescribes a policy to be followed. Comparison of the proposed algorithm against state-of-art algorithms on actual datasets demonstrates that the proposed algorithm achieves a significant improvement in performance. | The literature on counterfactual inference can be categorized into three groups: direct, inverse propensity re-weighting, and doubly robust methods. The direct methods compute counterfactuals by learning a function mapping from feature-action pairs to rewards @cite_22 @cite_30 . The inverse propensity re-weighting methods compute unbiased estimates by weighting the instances by their inverse propensity scores @cite_4 @cite_19 . The doubly robust methods compute the counterfactuals by combining the direct and inverse propensity re-weighting methods to compute more robust estimates @cite_0 @cite_1 . With respect to this categorization, our techniques might be viewed as falling into the class of doubly robust methods. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_22",
"@cite_1",
"@cite_0",
"@cite_19"
],
"mid": [
"1523201922",
"1835900096",
"2202585678",
"2234859443",
"2964297722",
"2463677609"
],
"abstract": [
"",
"We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem (, 2013) through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. In analogy to the Structural Risk Minimization principle of Wapnik and Tscherwonenkis (1979), these constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method--called Policy Optimizer for Exponential Models (POEM)--for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. The effectiveness and efficiency of POEM is evaluated on several simulated multi-label classification problems, as well as on a real-world information retrieval problem. The empirical results show that the CRM objective implemented in POEM provides improved robustness and generalization performance compared to the state-of-the-art.",
"A logistic regression model for application in retrospective studies is described and a study of the association of postmenopausal estrogen therapy with the incidence of endometrial cancer is given as an illustration. The method accounts for factors that may confound the association between a dichotomous exposure variable and a disease and allows for a quantitative study of the influence of factors which are related to the strength of the association.",
"We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL in real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.",
"We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strength and overcome the weaknesses of the two approaches by applying the doubly robust technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust approach uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice.",
"Online metrics measured through A/B tests have become the gold standard for many evaluation questions. But can we get the same results as A/B tests without actually fielding a new system? And can we train systems to optimize online metrics without subjecting users to an online learning algorithm? This tutorial summarizes and unifies the emerging body of methods on counterfactual evaluation and learning. These counterfactual techniques provide a well-founded way to evaluate and optimize online metrics by exploiting logs of past user interactions. In particular, the tutorial unifies the causal inference, information retrieval, and machine learning view of this problem, providing the basis for future research in this emerging area of great potential impact. Supplementary material and resources are available online at http://www.cs.cornell.edu/~adith/CfactSIGIR2016."
]
} |
1612.08082 | 2963806098 | This paper proposes a novel approach for constructing effective personalized policies when the observed data lacks counter-factual information, is biased and possesses many features. The approach is applicable in a wide variety of settings from healthcare to advertising to education to finance. These settings have in common that the decision maker can observe, for each previous instance, an array of features of the instance, the action taken in that instance, and the reward realized—but not the rewards of actions that were not taken: the counterfactual information. Learning in such settings is made even more difficult because the observed data is typically biased by the existing policy (that generated the data) and because the array of features that might affect the reward in a particular instance—and hence should be taken into account in deciding on an action in each particular instance—is often vast. The approach presented here estimates propensity scores for the observed data, infers counterfactuals, identifies a (relatively small) number of features that are (most) relevant for each possible action and instance, and prescribes a policy to be followed. Comparison of the proposed algorithm against state-of-art algorithms on actual datasets demonstrates that the proposed algorithm achieves a significant improvement in performance. | Our work can be seen as building on and extending the work of @cite_4 @cite_5 , which learn linear stochastic policies. We go much further by learning a non-linear stochastic policy. Our work can also be seen as an off-line variant of the on-line REINFORCE algorithm @cite_25 . | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_25"
],
"mid": [
"2188353343",
"1835900096",
"2119717200"
],
"abstract": [
"This paper identifies a severe problem of the counterfactual risk estimator typically used in batch learning from logged bandit feedback (BLBF), and proposes the use of an alternative estimator that avoids this problem. In the BLBF setting, the learner does not receive full-information feedback like in supervised learning, but observes feedback only for the actions taken by a historical policy. This makes BLBF algorithms particularly attractive for training online systems (e.g., ad placement, web search, recommendation) using their historical logs. The Counterfactual Risk Minimization (CRM) principle [1] offers a general recipe for designing BLBF algorithms. It requires a counterfactual risk estimator, and virtually all existing works on BLBF have focused on a particular unbiased estimator. We show that this conventional estimator suffers from a propensity overfitting problem when used for learning over complex hypothesis spaces. We propose to replace the risk estimator with a self-normalized estimator, showing that it neatly avoids this problem. This naturally gives rise to a new learning algorithm - Normalized Policy Optimizer for Exponential Models (Norm-POEM) - for structured output prediction using linear rules. We evaluate the empirical effectiveness of Norm-POEM on several multi-label classification problems, finding that it consistently outperforms the conventional estimator.",
"We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem (, 2013) through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. In analogy to the Structural Risk Minimization principle of Wapnik and Tscherwonenkis (1979), these constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method--called Policy Optimizer for Exponential Models (POEM)--for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. The effectiveness and efficiency of POEM is evaluated on several simulated multi-label classification problems, as well as on a real-world information retrieval problem. The empirical results show that the CRM objective implemented in POEM provides improved robustness and generalization performance compared to the state-of-the-art.",
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms."
]
} |
1612.08082 | 2963806098 | This paper proposes a novel approach for constructing effective personalized policies when the observed data lacks counterfactual information, is biased and possesses many features. The approach is applicable in a wide variety of settings from healthcare to advertising to education to finance. These settings have in common that the decision maker can observe, for each previous instance, an array of features of the instance, the action taken in that instance, and the reward realized—but not the rewards of actions that were not taken: the counterfactual information. Learning in such settings is made even more difficult because the observed data is typically biased by the existing policy (that generated the data) and because the array of features that might affect the reward in a particular instance—and hence should be taken into account in deciding on an action in each particular instance—is often vast. The approach presented here estimates propensity scores for the observed data, infers counterfactuals, identifies a (relatively small) number of features that are (most) relevant for each possible action and instance, and prescribes a policy to be followed. Comparison of the proposed algorithm against state-of-the-art algorithms on actual datasets demonstrates that the proposed algorithm achieves a significant improvement in performance. | We should also note two papers that were written after the current paper was originally submitted. The work of @cite_28 extends the earlier work of @cite_4 @cite_5 to non-linear policies. Our own (preliminary) work @cite_9 proposes a different approach for learning a representation function and a policy.
Unlike the present paper, our more recent work uses a loss function that embodies both a policy loss (similar to, but slightly different than, the policy loss used in the present paper) and a domain loss (which quantifies the divergence between the logging policy and the uniform policy under the representation function). The advantage of these changes is that they make it possible to learn the representation function and the policy in an end-to-end fashion. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_9",
"@cite_4"
],
"mid": [
"2785875001",
"2188353343",
"2788316095",
"1835900096"
],
"abstract": [
"We propose a new output layer for deep networks that permits the use of logged contextual bandit feedback for training. Such contextual bandit feedback can be available in huge quantities (e.g., logs of search engines, recommender systems) at little cost, opening up a path for training deep networks on orders of magnitude more data. To this effect, we propose a counterfactual risk minimization approach for training deep networks using an equivariant empirical risk estimator with variance regularization, BanditNet, and show how the resulting objective can be decomposed in a way that allows stochastic gradient descent training. We empirically demonstrate the effectiveness of the method in two scenarios. First, we show how deep networks -- ResNets in particular -- can be trained for object recognition without conventionally labeled images. Second, we learn to place banner ads based on propensity-logged click logs, where BanditNet substantially improves on the state-of-the-art.",
"This paper identifies a severe problem of the counterfactual risk estimator typically used in batch learning from logged bandit feedback (BLBF), and proposes the use of an alternative estimator that avoids this problem. In the BLBF setting, the learner does not receive full-information feedback like in supervised learning, but observes feedback only for the actions taken by a historical policy. This makes BLBF algorithms particularly attractive for training online systems (e.g., ad placement, web search, recommendation) using their historical logs. The Counterfactual Risk Minimization (CRM) principle [1] offers a general recipe for designing BLBF algorithms. It requires a counterfactual risk estimator, and virtually all existing works on BLBF have focused on a particular unbiased estimator. We show that this conventional estimator suffers from a propensity overfitting problem when used for learning over complex hypothesis spaces. We propose to replace the risk estimator with a self-normalized estimator, showing that it neatly avoids this problem. This naturally gives rise to a new learning algorithm - Normalized Policy Optimizer for Exponential Models (Norm-POEM) - for structured output prediction using linear rules. We evaluate the empirical effectiveness of Norm-POEM on several multi-label classification problems, finding that it consistently outperforms the conventional estimator.",
"Choosing optimal (or at least better) policies is an important problem in domains from medicine to education to finance and many others. One approach to this problem is through controlled experiments (trials) - but controlled experiments are expensive. Hence it is important to choose the best policies on the basis of observational data. This presents two difficult challenges: (i) missing counterfactuals, and (ii) selection bias. This paper presents theoretical bounds on estimation errors of counterfactuals from observational data by making connections to domain adaptation theory. It also presents a principled way of choosing optimal policies using domain adversarial neural networks. We illustrate the effectiveness of domain adversarial training together with various features of our algorithm on a semi-synthetic breast cancer dataset and a supervised UCI dataset (Statlog).",
"We develop a learning principle and an efficient algorithm for batch learning from logged bandit feedback. This learning setting is ubiquitous in online systems (e.g., ad placement, web search, recommendation), where an algorithm makes a prediction (e.g., ad ranking) for a given input (e.g., query) and observes bandit feedback (e.g., user clicks on presented ads). We first address the counterfactual nature of the learning problem (, 2013) through propensity scoring. Next, we prove generalization error bounds that account for the variance of the propensity-weighted empirical risk estimator. In analogy to the Structural Risk Minimization principle of Wapnik and Tscherwonenkis (1979), these constructive bounds give rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM can be used to derive a new learning method--called Policy Optimizer for Exponential Models (POEM)--for learning stochastic linear rules for structured output prediction. We present a decomposition of the POEM objective that enables efficient stochastic gradient optimization. The effectiveness and efficiency of POEM is evaluated on several simulated multi-label classification problems, as well as on a real-world information retrieval problem. The empirical results show that the CRM objective implemented in POEM provides improved robustness and generalization performance compared to the state-of-the-art."
]
} |
1612.07796 | 2951841226 | We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, DARKO, goes further to reason about semantic states (will I pick up an object?), and future goal states that are far in terms of both space and time. DARKO learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas DARKO discovers the states, transitions, rewards, and goals of a user from streaming data. Among other results, we show DARKO forecasts goals better than competing methods in both noisy and ideal settings, and our approach is theoretically and empirically no-regret. | Wearable cameras have been used for various human behavior understanding tasks @cite_4 @cite_30 @cite_19 @cite_15 @cite_21 because they give direct access to detailed visual information about a person's actions. Leveraging this feature of FPV, recent work has shown that it is possible to predict where people will look during actions @cite_19 and how people will use the environment @cite_35 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_4",
"@cite_21",
"@cite_19",
"@cite_15"
],
"mid": [
"2106229755",
"2346513076",
"2149276562",
"2167626157",
"1947050545",
""
],
"abstract": [
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.",
"When people observe and interact with physical spaces, they are able to associate functionality to regions in the environment. Our goal is to automate dense functional understanding of large spaces by leveraging sparse activity demonstrations recorded from an ego-centric viewpoint. The method we describe enables functionality estimation in large scenes where people have behaved, as well as novel scenes where no behaviors are observed. Our method learns and predicts \"Action Maps\", which encode the ability for a user to perform activities at various locations. With the usage of an egocentric camera to observe human activities, our method scales with the size of the scene without the need for mounting multiple static surveillance cameras and is well-suited to the task of observing activities up-close. We demonstrate that by capturing appearance-based attributes of the environment and associating these attributes with activity demonstrations, our proposed mathematical framework allows for the prediction of Action Maps in new environments. Additionally, we offer a preliminary glance of the applicability of Action Maps by demonstrating a proof-of-concept application in which they are used in concert with activity detections to perform localization.",
"We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.",
"This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.",
"We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.",
""
]
} |
1612.07796 | 2951841226 | We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, DARKO, goes further to reason about semantic states (will I pick up an object?), and future goal states that are far in terms of both space and time. DARKO learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas DARKO discovers the states, transitions, rewards, and goals of a user from streaming data. Among other results, we show DARKO forecasts goals better than competing methods in both noisy and ideal settings, and our approach is theoretically and empirically no-regret. | Given agent demonstrations, the task of inverse reinforcement learning (IRL) is to recover a reward function of an underlying Markov Decision Process (MDP) @cite_26 . IRL has been used to model taxi driver behavior @cite_2 and pedestrian behavior @cite_34 @cite_14 . In contrast to previous work, we go beyond physical trajectory forecasting by reasoning over future object interactions and predicting future goals in terms of scene types. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_34",
"@cite_2"
],
"mid": [
"",
"1999874108",
"2101821104",
"2098774185"
],
"abstract": [
"",
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.",
"We present a novel approach for determining robot movements that efficiently accomplish the robot's tasks while not hindering the movements of people within the environment. Our approach models the goal-directed trajectories of pedestrians using maximum entropy inverse optimal control. The advantage of this modeling approach is the generality of its learned cost function to changes in the environment and to entirely different environments. We employ the predictions of this model of pedestrian trajectories in a novel incremental planner and quantitatively show the improvement in hindrance-sensitive robot trajectory planning provided by our approach.",
"Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories."
]
} |
1612.07796 | 2951841226 | We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, DARKO, goes further to reason about semantic states (will I pick up an object?), and future goal states that are far in terms of both space and time. DARKO learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas DARKO discovers the states, transitions, rewards, and goals of a user from streaming data. Among other results, we show DARKO forecasts goals better than competing methods in both noisy and ideal settings, and our approach is theoretically and empirically no-regret. | The theory of learning to make optimal predictions from streaming data is well studied @cite_9 but is rarely used in computer vision, compared to the more prevalent application of supervised learning theory. We believe, however, that the utility of online learning theory is likely to increase as the amount of available data for processing is ever increasing. While the concept of online learning has been applied to inverse reinforcement learning @cite_23 , the work was primarily theoretical in nature and has found limited application. | {
"cite_N": [
"@cite_9",
"@cite_23"
],
"mid": [
"2077723394",
"2169498096"
],
"abstract": [
"Online learning is a well established learning paradigm which has both theoretical and practical appeals. The goal of online learning is to make a sequence of accurate predictions given knowledge of the correct answer to previous prediction tasks and possibly additional available information. Online learning has been studied in several research fields including game theory, information theory, and machine learning. It also became of great interest to practitioners due to the recent emergence of large scale applications such as online advertisement placement and online web ranking. In this survey we provide a modern overview of online learning. Our goal is to give the reader a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms. We do not mean to be comprehensive but rather to give a high-level, rigorous yet easy to follow, survey.",
"Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to cost so an optimal policy in an MDP with this cost mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. Although the technique is general, it is particularly relevant in problems where A* and dynamic programming approaches make learning policies tractable in problems beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task."
]
} |
1612.07796 | 2951841226 | We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, DARKO, goes further to reason about semantic states (will I pick up an object?), and future goal states that are far in terms of both space and time. DARKO learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas DARKO discovers the states, transitions, rewards, and goals of a user from streaming data. Among other results, we show DARKO forecasts goals better than competing methods in both noisy and ideal settings, and our approach is theoretically and empirically no-regret. | Physical trajectory forecasting has received much attention from the vision community. Multiple human trajectory forecasting from a surveillance camera was investigated by @cite_29 . Other trajectory forecasting approaches use demonstrations observed from a bird's-eye view; @cite_18 infers latent goal locations and @cite_17 employ LSTMs to jointly reason about trajectories of multiple humans. In @cite_20 , the model forecasted short-term future trajectories of a first-person camera wearer by retrieving the nearest neighbors from a dataset of first-person trajectories under an obstacle-avoidance cost function, with each trajectory representing predictions of where the user will move in view of the frame; in @cite_16 , a similar model with learned cost function is extended to multiple users. | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2158839502",
"2336387144",
"2558623550",
"2443846596",
"2424778531"
],
"abstract": [
"This paper presents an approach to localizing functional objects in surveillance videos without domain knowledge about semantic object classes that may appear in the scene. Functional objects do not have discriminative appearance and shape, but they affect behavior of people in the scene. For example, they \"attract\" people to approach them for satisfying certain needs (e.g., vending machines could quench thirst), or \"repel\" people to avoid them (e.g., grass lawns). Therefore, functional objects can be viewed as \"dark matter\", emanating \"dark energy\" that affects people's trajectories in the video. To detect \"dark matter\" and infer their \"dark energy\" field, we extend the Lagrangian mechanics. People are treated as particle-agents with latent intents to approach \"dark matter\" and thus satisfy their needs, where their motions are subject to a composite \"dark energy\" field of all functional objects in the scene. We make the assumption that people take globally optimal paths toward the intended \"dark matter\" while avoiding latent obstacles. A Bayesian framework is used to probabilistically model: people's trajectories and intents, constraint map of the scene, and locations of functional objects. A data-driven Markov Chain Monte Carlo (MCMC) process is used for inference. Our evaluation on videos of public squares and courtyards demonstrates our effectiveness in localizing functional objects and predicting people's trajectories in unobserved parts of the video footage.",
"We develop predictive models of pedestrian dynamics by encoding the coupled nature of multi-pedestrian interaction using game theory, and deep learning-based visual analysis to estimate person-specific behavior parameters. Building predictive models for multi-pedestrian interactions however, is very challenging due to two reasons: (1) the dynamics of interaction are complex interdependent processes, where the predicted behavior of one pedestrian can affect the actions taken by others and (2) dynamics are variable depending on an individual's physical characteristics (e.g., an older person may walk slowly while the younger person may walk faster). To address these challenges, we (1) utilize concepts from game theory to model the interdependent decision making process of multiple pedestrians and (2) use visual classifiers to learn a mapping from pedestrian appearance to behavior parameters. We evaluate our proposed model on several public multiple pedestrian interaction video datasets. Results show that our strategic planning model explains human interactions 25% better when compared to state-of-the-art methods.",
"This paper presents a method to predict the future movements (location and gaze direction) of basketball players as a whole from their first person videos. The predicted behaviors reflect an individual physical space that affords to take the next actions while conforming to social behaviors by engaging to joint attention. Our key innovation is to use the 3D reconstruction of multiple first person cameras to automatically annotate each other's visual semantics of social configurations. We leverage two learning signals uniquely embedded in first person videos. Individually, a first person video records the visual semantics of a spatial and social layout around a person that allows associating with past similar situations. Collectively, first person videos follow joint attention that can link the individuals to a group. We learn the egocentric visual semantics of group movements using a Siamese neural network to retrieve future trajectories. We consolidate the retrieved trajectories from all players by maximizing a measure of social compatibility---the gaze alignment towards joint attention predicted by their social formation, where the dynamics of joint attention is learned by a long-term recurrent convolutional network. This allows us to characterize which social configuration is more plausible and predict future group trajectories.",
"We present a method for future localization: to predict plausible future trajectories of ego-motion in egocentric stereo images. Our paths avoid obstacles, move between objects, even turn around a corner into space behind objects. As a byproduct of the predicted trajectories, we discover the empty space occluded by foreground objects. One key innovation is the creation of an EgoRetinal map, akin to an illustrated tourist map, that 'rearranges' pixels taking into account depth information, the ground plane, and body motion direction, so that it allows motion planning and perception of objects on one image space. We learn to plan trajectories directly on this EgoRetinal map using first person experience of walking around in a variety of scenes. In a testing phase, given a novel scene, we find multiple hypotheses of future trajectories from the learned experience. We refine them by minimizing a cost function that describes compatibility between the obstacles in the EgoRetinal map and trajectories. We quantitatively evaluate our method to show predictive validity and apply it to various real world daily activities including walking, shopping, and social interactions.",
"Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model."
]
} |
1612.07796 | 2951841226 | We address the problem of incrementally modeling and forecasting long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, DARKO, goes further to reason about semantic states (will I pick up an object?), and future goal states that are far in terms of both space and time. DARKO learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas DARKO discovers the states, transitions, rewards, and goals of a user from streaming data. Among other results, we show DARKO forecasts goals better than competing methods in both noisy and ideal settings, and our approach is theoretically and empirically no-regret. | In @cite_24 @cite_28 , the task is to recognize an unfinished event or activity. In @cite_24 , the model predicts the onset for a single facial action primitive, the completion of a smile, which may take less than a second. Similarly, @cite_27 predicts the completion of short human to human interactions. In @cite_10 , a hierarchical structured SVM is employed to forecast actions about a second in the future, and @cite_1 demonstrates a semi-supervised approach for forecasting human actions a second into the future. Other works predict actions several seconds into the future @cite_33 @cite_31 @cite_32 @cite_5 . In contrast, we focus on high-level transitions over a sequence of future actions that may occur outside the frame of view, and take a longer time to complete (in our dataset, the mean time to completion is @math seconds). | {
"cite_N": [
"@cite_33",
"@cite_28",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_10"
],
"mid": [
"2118527252",
"",
"2951242004",
"",
"2016776918",
"2147615062",
"2056120433",
"",
"2185953016"
],
"abstract": [
"Recognizing human activities in partially observed videos is a challenging problem and has many practical applications. When the unobserved subsequence is at the end of the video, the problem is reduced to activity prediction from unfinished activity streaming, which has been studied by many researchers. However, in the general case, an unobserved subsequence may occur at any time by yielding a temporal gap in the video. In this paper, we propose a new method that can recognize human activities from partially observed videos in the general case. Specifically, we formulate the problem into a probabilistic framework: 1) dividing each activity into multiple ordered temporal segments, 2) using spatiotemporal features of the training video samples in each segment as bases and applying sparse coding (SC) to derive the activity likelihood of the test video sample at each segment, and 3) finally combining the likelihood at each segment to achieve a global posterior for the activities. We further extend the proposed method to include more bases that correspond to a mixture of segments with different temporal lengths (MSSC), which can better represent the activities with large intra-class variations. We evaluate the proposed methods (SC and MSSC) on various real videos. We also evaluate the proposed methods on two special cases: 1) activity prediction where the unobserved subsequence is at the end of the video, and 2) human activity recognition on fully observed videos. Experimental results show that the proposed methods outperform existing state-of-the-art comparison methods.",
"",
"Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.",
"",
"The need for early detection of temporal events from sequential data arises in a wide spectrum of applications ranging from human-robot interaction to video security. While temporal event detection has been extensively studied, early detection is a relatively unexplored problem. This paper proposes a maximum-margin framework for training temporal event detectors to recognize partial events, enabling early detection. Our method is based on Structured Output SVM, but extends it to accommodate sequential data. Experiments on datasets of varying complexity, for detecting facial expressions, hand gestures, and human activities, demonstrate the benefits of our approach.",
"In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling. Our framework can be learned in a completely unsupervised manner from a large collection of videos. However, more importantly, because our approach models the prediction framework on these mid-level elements, we can not only predict the possible motion in the scene but also predict visual appearances--how are appearances going to change with time. This yields a visual \"hallucination\" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events, we also show that our approach is comparable to supervised methods for event prediction.",
"",
"We consider inferring the future actions of people from a still image or a short video clip. Predicting future actions before they are actually executed is a critical ingredient for enabling us to effectively interact with other humans on a daily basis. However, challenges are two fold: First, we need to capture the subtle details inherent in human movements that may imply a future action; second, predictions usually should be carried out as quickly as possible in the social world, when limited prior observations are available."
]
} |
1612.07866 | 2562116537 | In the tensor completion problem, one seeks to estimate a low-rank tensor based on a random sample of revealed entries. In terms of the required sample size, earlier work revealed a large gap between estimation with unbounded computational resources (using, for instance, tensor nuclear norm minimization) and polynomial-time algorithms. Among the latter, the best statistical guarantees have been proved, for third-order tensors, using the sixth level of the sum-of-squares (SOS) semidefinite programming hierarchy (Barak and Moitra, 2014). However, the SOS approach does not scale well to large problem instances. By contrast, spectral methods --- based on unfolding or matricizing the tensor --- are attractive for their low complexity, but have been believed to require a much larger sample size. This paper presents two main contributions. First, we propose a new unfolding-based method, which outperforms naive ones for symmetric @math -th order tensors of rank @math . For this result we make a study of singular space estimation for partially revealed matrices of large aspect ratio, which may be of independent interest. For third-order tensors, our algorithm matches the SOS method in terms of sample size (requiring about @math revealed entries), subject to a worse rank condition ( @math rather than @math ). We complement this result with a different spectral algorithm for third-order tensors in the overcomplete ( @math ) regime. Under a random model, this second approach succeeds in estimating tensors of rank @math from about @math revealed entries. | Without entering into the details, the tensor completion problem can be naturally phrased as a polynomial optimization problem, to which the SOS framework is particularly suited. It defines a hierarchy of semidefinite programming (SDP) relaxations, indexed by a degree @math which is a positive even integer.
The degree- @math relaxation requires solving an SDP where the decision variable is a @math matrix; this can be done in time @math by interior-point methods @cite_5 . The SOS hierarchy is the most powerful SDP hierarchy. It has attracted considerable interest because it matches complexity-theoretic lower bounds in many problems @cite_15 . | {
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"1963753244",
"2146777332"
],
"abstract": [
"This paper studies the semidefinite programming (SDP) problem, i.e., the optimization problem of a linear function of a symmetric matrix subject to linear equality constraints and the additional condition that the matrix be positive semidefinite. First the classical cone duality is reviewed as it is specialized to SDP. Next an interior point algorithm is presented that converges to the optimal solution in polynomial time. The approach is a direct extension of Ye’s projective method for linear programming. It is also argued that many known interior point methods for linear programs can be transformed in a mechanical way to algorithms for SDP with proofs of convergence and polynomial time complexity carrying over in a similar fashion. Finally, the significance of these results is studied in a variety of combinatorial optimization problems including the general 0-1 integer programs, the maximum clique and maximum stable set problems in perfect graphs, the maximum k-partite subgraph problem in graph...",
"In order to obtain the best-known guarantees, algorithms are traditionally tailored to the particular problem we want to solve. Two recent developments, the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method, surprisingly suggest that this tailoring is not necessary and that a single efficient algorithm could achieve best possible guarantees for a wide range of different problems. The Unique Games Conjecture (UGC) is a tantalizing conjecture in computational complexity, which, if true, will shed light on the complexity of a great many problems. In particular this conjecture predicts that a single concrete algorithm provides optimal guarantees among all efficient algorithms for a large class of computational problems. The Sum-of-Squares (SOS) method is a general approach for solving systems of polynomial constraints. This approach is studied in several scientific disciplines, including real algebraic geometry, proof complexity, control theory, and mathematical programming, and has found applications in fields as diverse as quantum information theory, formal verification, game theory and many others. We survey some connections that were recently uncovered between the Unique Games Conjecture and the Sum-of-Squares method. In particular, we discuss new tools to rigorously bound the running time of the SOS method for obtaining approximate solutions to hard optimization problems, and how these tools give the potential for the sum-of-squares method to provide new guarantees for many problems of interest, and possibly to even refute the UGC."
]
} |
1612.07833 | 2566441234 | We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing "keywords" (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded "understanding" of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. | Image understanding is a long-standing challenge in computer vision. There has recently been a great deal of interest in bringing together vision and language understanding. Particularly relevant to our work are image captioning (IC) and visual question-answering (VQA). Both have instigated a large body of publications, a detailed exposition of which is beyond the scope of this paper. Interested readers should refer to two recent surveys @cite_26 @cite_21 . | {
"cite_N": [
"@cite_26",
"@cite_21"
],
"mid": [
"2282219577",
"2496096353"
],
"abstract": [
"Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem, viz., models that cast description as either generation problem or as a retrieval problem over a visual or multimodal representational space. We provide a detailed review of existing models, highlighting their advantages and disadvantages. Moreover, we give an overview of the benchmark image datasets and the evaluation measures that have been developed to assess the quality of machine-generated image descriptions. Finally we extrapolate future directions in the area of automatic image description generation.",
"Visual Question Answering (VQA) is a challenging task that has received increasing attention from both the computer vision and the natural language processing communities. Given an image and a question in natural language, it requires reasoning over visual elements of the image and general knowledge to infer the correct answer. In the first part of this survey, we examine the state of the art by comparing modern approaches to the problem. We classify methods by their mechanism to connect the visual and textual modalities. In particular, we examine the common approach of combining convolutional and recurrent neural networks to map images and questions to a common feature space. We also discuss memory-augmented and modular architectures that interface with structured knowledge bases. In the second part of this survey, we review the datasets available for training and evaluating VQA systems. The various datatsets contain questions at different levels of complexity, which require different capabilities and types of reasoning. We examine in depth the question answer pairs from the Visual Genome project, and evaluate the relevance of the structured annotations of images with scene graphs for VQA. Finally, we discuss promising future directions for the field, in particular the connection to structured knowledge bases and the use of natural language processing models."
]
} |
1612.07833 | 2566441234 | We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options. Accomplishing the task entails demonstrating comprehension beyond just recognizing "keywords" (or key-phrases) and their corresponding visual concepts. Instead, it requires an alignment between the representations of the two modalities that achieves a visually-grounded "understanding" of various linguistic elements and their dependencies. This new task also admits an easy-to-compute and well-studied metric: the accuracy in detecting the true target among the decoys. The paper makes several contributions: an effective and extensible mechanism for generating decoys from (human-created) image captions; an instance of applying this mechanism, yielding a large-scale machine comprehension dataset (based on the COCO images and captions) that we make publicly available; human evaluation results on this dataset, informing a performance upper-bound; and several baseline and competitive learning approaches that illustrate the utility of the proposed task and dataset in advancing both image and language comprehension. We also show that, in a multi-task learning setting, the performance on the proposed task is positively correlated with the end-to-end task of image captioning. | In IC tasks, systems attempt to generate a fluent and correct sentence describing an input image. IC systems are usually evaluated on how well the generated descriptions align with human-created captions (ground-truth). The language generation model of an IC system plays a crucial role; it is often trained such that the probabilities of the ground-truth captions are maximized (MLE training), though more advanced methods based on techniques borrowed from Reinforcement Learning have been proposed @cite_3 . 
To provide visual grounding, image features are extracted and injected into the language model. Note that language generation models need to both decipher the information encoded in the visual features, and model natural language generation. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2176263492"
],
"abstract": [
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster."
]
} |