| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1604.07211 | 2952212047 | We have developed reduced-reference parametric models for estimating perceived quality in audiovisual multimedia services. We have created 144 unique configurations for audiovisual content including various application and network parameters such as bitrates and distortions in terms of bandwidth, packet loss rate and jitter. To generate the data needed for model training and validation we have tasked 24 subjects, in a controlled environment, to rate the overall audiovisual quality on the absolute category rating (ACR) 5-level quality scale. We have developed models using Random Forest and Neural Network based machine learning methods in order to estimate Mean Opinion Score (MOS) values. We have used information retrieved from the packet headers and side information provided as network parameters for model training. Random Forest based models have performed better in terms of Root Mean Square Error (RMSE) and Pearson correlation coefficient. The side information proved to be very effective in developing the model. We have found that, while model performance might be improved by replacing the side information with more accurate bit-stream-level measurements, the models perform well in estimating perceived quality in audiovisual multimedia services. | The Video Quality Experts Group (VQEG) ran subjects through the same audiovisual subjective test in six different international laboratories @cite_12 @cite_1 . Each of these six labs conducted the experiment in a controlled environment, while four labs also repeated the same experiments in a public environment. They reported that audiovisual subjective tests are highly repeatable from one laboratory and environment to the next, and recommended 24 or more subjects for ACR tests in a lab and 35 or more subjects for the same sensitivity in a public environment. | {
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"2008903454",
"2004790454"
],
"abstract": [
"In 2011, the Video Quality Experts Group (VQEG) ran subjects through the same audiovisual subjective test at six different international laboratories. That small dataset is now publicly available for research and development purposes.",
"Between discrete event simulation and evaluation within real networks, network emulation is a useful tool to study and evaluate the behaviour of applications. Using a real network as a basis to simulate another network's characteristics, it enables researchers to perform experiments in a wide range of conditions. After an overview of the various available network emulators, this paper focuses on three freely available and widely used network link emulators: Dummynet, NIST-Net, and the Linux Traffic Control subsystem. We start by comparing their features, then focus on the accuracy of their latency and bandwidth emulation, and discuss the way they are affected by the time source of the system. We expose several problems that cannot be ignored when using such tools. We also outline differences in their user interfaces, such as the interception point, and discuss possible solutions. This work aims at providing a complete overview of the different solutions for network emulation."
]
} |
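The row above evaluates its models with RMSE and the Pearson correlation coefficient between estimated and subjective MOS values. Both criteria can be computed directly with NumPy; the scores below are purely illustrative placeholders, not data from the study:

```python
import numpy as np

# Hypothetical per-condition scores: subjective MOS (ACR 1-5 scale)
# and model estimates. These numbers are made up for illustration.
mos_true = np.array([4.2, 3.1, 1.8, 2.6, 4.8, 3.5])
mos_pred = np.array([4.0, 3.3, 2.0, 2.4, 4.6, 3.7])

# RMSE: square root of the mean squared estimation error.
rmse = np.sqrt(np.mean((mos_pred - mos_true) ** 2))

# Pearson correlation coefficient between predictions and subjective scores.
pearson = np.corrcoef(mos_true, mos_pred)[0, 1]

print(round(rmse, 3))   # 0.2
print(round(pearson, 3))
```

A lower RMSE and a Pearson coefficient closer to 1 both indicate better agreement with the subjective ratings, which is the sense in which the Random Forest models "performed better" above.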
1604.07224 | 2344034230 | We estimate the state of a noisy robot arm and an underactuated hand using an Implicit Manifold Particle Filter (MPF) informed by touch sensors. As the robot touches the world, its state space collapses to a contact manifold that we represent implicitly using a signed distance field. This allows us to extend the MPF to higher (six or more) dimensional state spaces. Earlier work (which explicitly represents the contact manifold) only shows the MPF in two or three dimensions. Through a series of experiments, we show that the implicit MPF converges faster and is more accurate than a conventional particle filter during periods of persistent contact. We present three methods of sampling the implicit contact manifold, and compare them in experiments. | There are a variety of approaches that use feedback from contact sensors for manipulation. One approach is to plan a sequence of move-until-touch actions that are guaranteed to localize an object to the desired accuracy @cite_8 @cite_11 @cite_13 . Other approaches formulate the problem as a partially observable Markov decision process @cite_24 and solve for a policy that optimizes expected reward @cite_20 @cite_29 @cite_0 . These algorithms require an efficient implementation of a Bayesian state estimator that can be queried many times during planning. Our approach could be used as a state estimator in one of these planners. | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_24",
"@cite_0",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2112714380",
"2134069397",
"2034725503",
"1883438135",
"2012224615",
"",
""
],
"abstract": [
"Humans are capable of manipulating objects based solely on the sense of touch. For robots to achieve the same feat in unstructured environments, global localization of objects via touch is required. Bayesian approaches provide the means to cope with uncertainties of the real world, but the estimation of the Bayesian posterior for the full six degrees of freedom (6-DOF) global localization problem is computationally prohibitive. We propose an efficient Bayesian approach termed Scaling Series. It is capable of solving the full problem reliably in real time. This is a Monte Carlo approach that performs a series of successive refinements coupled with annealing. We also propose an analytical measurement model, which can be computed efficiently at run time for any object represented as a polygonal mesh. Extensive empirical evaluation shows that Scaling Series drastically outperforms prior approaches. We demonstrate general applicability of the approach on five common solid objects, which are rigidly fixed during the experiments. We also consider 6-DOF localization and tracking of free-standing objects that can move during tactile exploration.",
"This paper develops a technique for an autonomous robot endowed with a manipulator arm, a multi-fingered gripper, and a variety of sensors to manipulate a known, but poorly observable object. We present a novel grasp planning method which not only incorporates the potential collision between object and manipulator, but takes advantage of this interaction. The natural uncertainty and difficulties in observation in such tasks is modeled as a Partially Observable Markov Decision Process (POMDP). Recent advances in point-based methods as well as a novel state space representation specific to the grasping problem are leveraged to overcome state space growth issues. Simulation results are presented for the combined localization, manipulation, and grasping of a small nut on a table.",
"This paper formulates the optimal control problem for a class of mathematical models in which the system to be controlled is characterized by a finite-state discrete-time Markov process. The states of this internal process are not directly observable by the controller; rather, he has available a set of observable outputs that are only probabilistically related to the internal state of the system. The formulation is illustrated by a simple machine-maintenance example, and other specific application areas are also discussed. The paper demonstrates that, if there are only a finite number of control intervals remaining, then the optimal payoff function is a piecewise-linear, convex function of the current state probabilities of the internal Markov process. In addition, an algorithm for utilizing this property to calculate the optimal control policy and payoff function for any finite horizon is outlined. These results are illustrated by a numerical example for the machine-maintenance problem.",
"We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A* search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline QMDP policy on two different objects in simulation. Finally, we validate our simulation results on a real robot using commercially available tactile sensors.",
"This paper introduces a tactile or contact method whereby an autonomous robot equipped with suitable sensors can choose the next sensing action involving touch in order to accurately localize an object in its environment. The method uses an information gain metric based on the uncertainty of the object's pose to determine the next best touching action. Intuitively, the optimal action is the one that is the most informative. The action is then carried out and the state of the object's pose is updated using an estimator. The method is further extended to choose the most informative action to simultaneously localize and estimate the object's model parameter or model class. Results are presented both in simulation and in experiment on the DARPA Autonomous Robotic Manipulation Software (ARM-S) robot.",
"",
""
]
} |
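The related-work paragraph above notes that move-until-touch and POMDP planners must query a Bayesian state estimator many times. The simplest such estimator is a discrete Bayes filter over object positions; everything in this sketch (the grid size, the touch likelihoods) is an illustrative assumption, not any cited author's model:

```python
import numpy as np

# Discrete Bayes filter over which of 10 cells an object occupies.
cells = 10
belief = np.full(cells, 1.0 / cells)   # uniform prior

# Hypothetical noisy touch model: a contact reported at cell k is most
# likely when the object is at k, less likely for adjacent cells, and
# rare elsewhere (false contacts).
def likelihood(touch_cell):
    lik = np.full(cells, 0.01)
    lik[touch_cell] = 0.9
    for n in (touch_cell - 1, touch_cell + 1):
        if 0 <= n < cells:
            lik[n] = 0.3
    return lik

# Bayes update after a move-until-touch action stops on contact at cell 4.
belief = belief * likelihood(4)
belief /= belief.sum()

print(int(np.argmax(belief)))  # 4
```

A planner would interleave such updates with action selection, which is why the estimator's per-query cost matters.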
1604.07224 | 2344034230 | We estimate the state of a noisy robot arm and an underactuated hand using an Implicit Manifold Particle Filter (MPF) informed by touch sensors. As the robot touches the world, its state space collapses to a contact manifold that we represent implicitly using a signed distance field. This allows us to extend the MPF to higher (six or more) dimensional state spaces. Earlier work (which explicitly represents the contact manifold) only shows the MPF in two or three dimensions. Through a series of experiments, we show that the implicit MPF converges faster and is more accurate than a conventional particle filter during periods of persistent contact. We present three methods of sampling the implicit contact manifold, and compare them in experiments. | Recent work has used the conventional particle filter (CPF) @cite_15 to estimate the pose of an object while it is being pushed using visual and tactile observations @cite_33 @cite_28 . Unfortunately, the CPF performs poorly because the set of feasible configurations lies on a lower-dimensional contact manifold. The manifold particle filter (MPF) avoids particle deprivation by sampling from an approximate representation of this manifold @cite_7 . However, building an explicit representation of the contact manifold is only feasible for low-dimensional---typically planar---problems. | {
"cite_N": [
"@cite_28",
"@cite_15",
"@cite_33",
"@cite_7"
],
"mid": [
"",
"2098613108",
"2119104235",
"2619378249"
],
"abstract": [
"",
"An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter.",
"Advanced grasp control algorithms could benefit greatly from accurate tracking of the object as well as an accurate all-around knowledge of the system when the robot attempts a grasp. This motivates our study of the G-SL(AM)2 problem, in which two goals are simultaneously pursued: object tracking relative to the hand and estimation of parameters of the dynamic model. We view the G-SL(AM)2 problem as a filtering problem. Because of stick-slip friction and collisions between the object and hand, suitable dynamic models exhibit strong nonlinearities and jump discontinuities. This fact makes Kalman filters (which assume linearity) and extended Kalman filters (which assume differentiability) inapplicable, and leads us to develop a particle filter. An important practical problem that arises during grasping is occlusion of the view of the object by the robot's hand. To combat the resulting loss of visual tracking fidelity, we designed a particle filter that incorporates tactile sensor data. The filter is evaluated off-line with data gathered in advance from grasp acquisition experiments conducted with a planar test rig. The results show that our particle filter performs quite well, especially during periods of visual occlusion, in which it is much better than the same filter without tactile data.",
"We investigate the problem of estimating the state of an object during manipulation. Contact sensors provide valuable information about the object state during actions which involve persistent contact, e.g. pushing. However, contact sensing is very discriminative by nature, and therefore the set of object states that contact a sensor constitutes a lower-dimensional manifold in the state space of the object. This causes stochastic state estimation methods, such as particle filters, to perform poorly when contact sensors are used. We propose a new algorithm, the manifold particle filter, which uses dual particles directly sampled from the contact manifold to avoid this problem. The algorithm adapts to the probability of contact by dynamically changing the number of dual particles sampled from the manifold. We compare our algorithm to the conventional particle filter through extensive experiments and we show that our algorithm is both faster and better at estimating the state. Unlike the conventional particle filter, our algorithm's performance improves with increasing sensor accuracy and the filter's update rate. We implement the algorithm on a real robot using commercially available tactile sensors to track the pose of a pushed object."
]
} |
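The particle-deprivation problem described in the row above, and the manifold-projection remedy, can be illustrated in a toy 2-D setting: with a signed distance function to the contact manifold, prior particles can be projected back onto it. The disc geometry and the projection strategy below are assumptions chosen for illustration, not the cited papers' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a disc of known radius r sits at an unknown 2-D centre.
# A touch sensor reports contact at point p, so feasible centres lie on
# the contact manifold {c : |c - p| - r = 0}, a circle of radius r around p.
r = 1.0
p = np.array([0.0, 0.0])

# Signed distance of a candidate centre to the contact manifold.
def sdf(c):
    return np.linalg.norm(c - p) - r

# CPF-style particles drawn from a broad prior: essentially none lie on
# the measure-zero manifold, which is the particle-deprivation problem.
particles = rng.normal(0.0, 2.0, size=(500, 2))

# MPF-style remedy (one possible sampling strategy): project each particle
# onto the manifold along the SDF gradient, which for this disc is the
# unit vector from p to the particle.
def project(c, eps=1e-9):
    d = c - p
    n = d / (np.linalg.norm(d) + eps)   # SDF gradient direction
    return c - sdf(c) * n               # walk back the signed distance

projected = np.array([project(c) for c in particles])

# Every projected particle now satisfies the contact constraint.
residuals = np.abs([sdf(c) for c in projected])
print(residuals.max() < 1e-6)  # True
```

The implicit representation matters because only the SDF (and its gradient) is needed for this projection; no explicit parameterization of the manifold is ever constructed, which is what lets the idea scale past planar problems.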
1604.07224 | 2344034230 | We estimate the state of a noisy robot arm and an underactuated hand using an Implicit Manifold Particle Filter (MPF) informed by touch sensors. As the robot touches the world, its state space collapses to a contact manifold that we represent implicitly using a signed distance field. This allows us to extend the MPF to higher (six or more) dimensional state spaces. Earlier work (which explicitly represents the contact manifold) only shows the MPF in two or three dimensions. Through a series of experiments, we show that the implicit MPF converges faster and is more accurate than a conventional particle filter during periods of persistent contact. We present three methods of sampling the implicit contact manifold, and compare them in experiments. | The work most similar to our own aims to localize a mobile robot in a known environment through contact. Prior work has used the CPF to localize the pose of a mobile manipulator by observing where its arm contacts the environment @cite_32 . The same approach was used to localize a quadruped on known terrain by estimating the stability of the robot given its configuration @cite_9 . Estimating the base pose of a mobile robot is equivalent to solving the problem formulated in this paper with the addition of a single, unconstrained six degree-of-freedom joint that attaches the robot to the world. | {
"cite_N": [
"@cite_9",
"@cite_32"
],
"mid": [
"1971594916",
"2161943423"
],
"abstract": [
"We present a novel method for the localization of a legged robot on known terrain using only proprioceptive sensors such as joint encoders and an inertial measurement unit. In contrast to other proprioceptive pose estimation techniques, this method allows for global localization (i.e., localization with large initial uncertainty) without the use of exteroceptive sensors. This is made possible by establishing a measurement model based on the feasibility of putative poses on known terrain given observed joint angles and attitude measurements. Results are shown that demonstrate that the method performs better than dead-reckoning, and is also able to perform global localization from large initial uncertainty",
"We use a combination of laser data, measurements of joint angles and torques, and stall information to improve localization on a household robotic platform. Our system executes trajectories to collide with its environment and performs probabilistic updates on a distribution of possible robot positions, ordinarily provided by a laser range finder. We find encouraging results both in simulations and in a real-world kitchen environment. Our analysis also suggests further steps in localization through proprioception."
]
} |
1604.07224 | 2344034230 | We estimate the state of a noisy robot arm and an underactuated hand using an Implicit Manifold Particle Filter (MPF) informed by touch sensors. As the robot touches the world, its state space collapses to a contact manifold that we represent implicitly using a signed distance field. This allows us to extend the MPF to higher (six or more) dimensional state spaces. Earlier work (which explicitly represents the contact manifold) only shows the MPF in two or three dimensions. Through a series of experiments, we show that the implicit MPF converges faster and is more accurate than a conventional particle filter during periods of persistent contact. We present three methods of sampling the implicit contact manifold, and compare them in experiments. | Techniques for estimating the configuration of an articulated body from contact sensors are applicable to humans as well as robots. Researchers in the computer graphics community have instrumented objects with contact sensors @cite_18 and used multi-touch displays @cite_21 to reconstruct the configuration of a human hand from contact observations. Both of these approaches generate natural hand configurations by interpolating between data points collected on the same device. Other work has used machine learning to reconstruct the configuration of a human from the ground reaction forces measured by pressure sensors @cite_6 . We hope that our approach is also useful in these problem domains. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_6"
],
"mid": [
"2139760425",
"2278790458",
"1975956786"
],
"abstract": [
"Modifying motion capture to satisfy the constraints of new animation is difficult when contact is involved, and a critical problem for animation of hands. The compliance with which a character makes contact also reveals important aspects of the movement's purpose. We present a new technique called interaction capture, for capturing these contact phenomena. We capture contact forces at the same time as motion, at a high rate, and use both to estimate a nominal reference trajectory and joint compliance. Unlike traditional methods, our method estimates joint compliance without the need for motorized perturbation devices. New interactions can then be synthesized by physically based simulation. We describe a novel position-based linear complementarity problem formulation that includes friction, breaking contact, and the compliant coupling between contacts at different fingers. The technique is validated using data from previous work and our own perturbation-based estimates.",
"",
"Consumer-grade, real-time motion capture devices are becoming commonplace in every household, thanks to the recent development in depth-camera technologies. We introduce a new approach to capturing and reconstructing freeform, full-body human motion using force sensors, supplementary to existing, consumer-grade mocap systems. Our algorithm exploits the dynamic aspects of human movement, such as linear and angular momentum, to provide key information for full-body motion reconstruction. Using two pressure sensing platforms (Wii Balance Board) and a hand tracking device, we demonstrate that human motion can be largely reconstructed from ground reaction forces along with a small amount of arm movement information."
]
} |
1604.07480 | 2343077198 | Multi-scale deep CNNs have been used successfully for problems mapping each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can be used for multiple tasks. These networks are typically trained independently for each task by varying the output layer(s) and training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task and then fine-tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore, we couple the deep CNN with a fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues, improving the accuracy of the final results. The proposed model is trained and evaluated on the NYUDepth V2 dataset, outperforming state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation. | One of the first approaches to tackle semantic segmentation as a problem of learning a pixel-to-pixel mapping using CNNs was the work of @cite_6 . There, the authors proposed to apply 1x1 convolution label classifiers to feature maps from different layers and average the results. Another line of approaches to semantic segmentation adopted an auto-encoder style architecture @cite_14 @cite_22 comprised of convolutional and deconvolutional layers. The deconvolutional part consists of unpooling and deconvolution layers, where each unpooling layer is connected to its corresponding pooling layer on the encoding side. The convolutional layers remain identical to the architectures of @cite_23 and @cite_20 and the deconvolutional layers are trained.
The authors in @cite_14 formulated semantic segmentation of the whole image as a collage of individual object proposals, and used the deconvolution part to delineate the object shape at higher resolution inside each proposal window. The object proposal hypotheses are then combined by averaging or taking the maximum to produce the final output. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_6",
"@cite_23",
"@cite_20"
],
"mid": [
"2952637581",
"360623563",
"1903029394",
"",
"1686810756"
],
"abstract": [
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained with no external data through ensemble with the fully convolutional network.",
"We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
} |
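The pooling/unpooling pairing described in the row above for encoder-decoder architectures can be sketched in NumPy: max-pooling records the argmax index inside each window, and the matched unpooling layer on the decoding side scatters activations back to exactly those positions. This toy 4x4 example is illustrative only and omits channels, batching, and learned deconvolution filters:

```python
import numpy as np

# Max-pooling with a 2x2 window and stride 2, keeping argmax indices.
def max_pool_2x2(x):
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2, 2), dtype=int)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = x[i:i + 2, j:j + 2]
            k = np.unravel_index(np.argmax(win), win.shape)
            out[i // 2, j // 2] = win[k]
            idx[i // 2, j // 2] = (i + k[0], j + k[1])
    return out, idx

# Matched unpooling: scatter each value back to its recorded position.
def max_unpool_2x2(y, idx, shape):
    out = np.zeros(shape)
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            r, c = idx[i, j]
            out[r, c] = y[i, j]
    return out

x = np.array([[1., 2., 0., 3.],
              [4., 0., 1., 0.],
              [0., 5., 2., 0.],
              [6., 0., 0., 7.]])
pooled, idx = max_pool_2x2(x)
restored = max_unpool_2x2(pooled, idx, x.shape)

print(pooled.tolist())  # [[4.0, 3.0], [6.0, 7.0]]
```

Because the maxima return to their original spatial positions, the decoder recovers location information that a plain upsampling (e.g. replication) would blur, which is the drawback those encoder-decoder papers set out to fix.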
1604.07360 | 2343924602 | Attributes, or semantic features, have gained popularity in the past few years in domains ranging from activity recognition in video to face verification. Improving the accuracy of attribute classifiers is an important first step in any application that uses these attributes. In most works to date, attributes have been considered to be independent. However, we know this not to be the case. Many attributes are very strongly related, such as heavy makeup and wearing lipstick. We propose to take advantage of attribute relationships in three ways: by using a multi-task deep convolutional neural network (MCNN) sharing the lowest layers amongst all attributes, sharing the higher layers for related attributes, and by building an auxiliary network on top of the MCNN which utilizes the scores from all attributes to improve the final classification of each attribute. We demonstrate the effectiveness of our method by producing results on two challenging publicly available datasets. | Multi-task learning (MTL) is a way of solving several problems at the same time by utilizing shared information @cite_26 @cite_32 @cite_13 . MTL has found success in domains including facial landmark localization, pose estimation, action recognition, and face detection @cite_25 @cite_9 @cite_31 @cite_3 @cite_34 . | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_32",
"@cite_3",
"@cite_31",
"@cite_34",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"2156392723",
"2110994494",
"2113120414",
"1955369839",
"2169931729",
"",
"1896424170"
],
"abstract": [
"",
"Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (SVM) by [15]. Although very elegant, multi-task SVM is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.",
"Multiview face detection is a challenging problem due to dramatic appearance changes under various pose, illumination and expression conditions. In this paper, we present a multi-task deep learning scheme to enhance the detection performance. More specifically, we build a deep convolutional neural network that can simultaneously learn the face/nonface decision, the face pose estimation problem, and the facial landmark localization problem. We show that such a multi-task learning scheme can further improve the classifier's accuracy. On the challenging FDDB data set, our detector achieves over 3% improvement in detection rate at the same false positive rate compared with other state-of-the-art methods.",
"Face recognition under viewpoint and illumination changes is a difficult problem, so many researchers have tried to solve this problem by producing the pose- and illumination-invariant feature. [26] changed all arbitrary pose and illumination images to the frontal view image to use for the invariant feature. In this scheme, preserving identity while rotating pose image is a crucial issue. This paper proposes a new deep architecture based on a novel type of multitask learning, which can achieve superior performance in rotating to a target-pose face image from an arbitrary pose and illumination image while preserving identity. The target pose can be controlled by the user's intention. This novel type of multi-task model significantly improves identity preservation over the single task model. By using all the synthesized controlled pose images, called Controlled Pose Image (CPI), for the pose-illumination-invariant feature and voting among the multiple face recognition results, we clearly outperform the state-of-the-art algorithms by more than 4-6% on the MultiPIE dataset.",
"Recently, deep neural networks have been shown to perform competitively on the task of predicting facial expression from images. Trained by gradient-based methods, these networks are amenable to \"multi-task\" learning via a multiple term objective. In this paper we demonstrate that learning representations to predict the position and shape of facial landmarks can improve expression recognition from images. We show competitive results on two large-scale datasets, the ICML 2013 Facial Expression Recognition challenge, and the Toronto Face Database.",
"",
"Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21]."
]
} |
1604.07360 | 2343924602 | Attributes, or semantic features, have gained popularity in the past few years in domains ranging from activity recognition in video to face verification. Improving the accuracy of attribute classifiers is an important first step in any application which uses these attributes. In most works to date, attributes have been considered to be independent. However, we know this not to be the case. Many attributes are very strongly related, such as heavy makeup and wearing lipstick. We propose to take advantage of attribute relationships in three ways: by using a multi-task deep convolutional neural network (MCNN) sharing the lowest layers amongst all attributes, sharing the higher layers for related attributes, and by building an auxiliary network on top of the MCNN which utilizes the scores from all attributes to improve the final classification of each attribute. We demonstrate the effectiveness of our method by producing results on two challenging publicly available datasets. | In @cite_14 , @cite_0 , and @cite_10 attributes and object classes are learned jointly to improve overall object classification performance. @cite_14 uses Multiple Instance Learning to detect and recognize objects in images by learning attribute-object pairs. @cite_0 uses an undirected graph to model the correlation amongst attributes in order to improve object recognition. In @cite_10 , attributes and objects share a low-dimensional representation allowing for regularization of the object classifier. In our work, all attributes share the lower layers in the CNN, so that information common to all the attributes can be learned. Applying MTL to attribute prediction is very natural given the strong relationships among the facial attributes. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_10"
],
"mid": [
"1597421340",
"2541843346",
"2063386797"
],
"abstract": [
"We present a discriminatively trained model for joint modelling of object class labels (e.g. \"person\", \"dog\", \"chair\", etc.) and their visual attributes (e.g. \"has head\", \"furry\", \"metal\", etc.). We treat attributes of an object as latent variables in our model and capture the correlations among attributes using an undirected graphical model built from training data. The advantage of our model is that it allows us to infer object class labels using the information of both the test image itself and its (latent) attributes. Our model unifies object class prediction and attribute prediction in a principled framework. It is also flexible enough to deal with different performance measurements. Our experimental results provide quantitative evidence that attributes can improve object naming.",
"We present a method to learn visual attributes (e.g. “red”, “metal”, “spotted”) and object classes (e.g. “car”, “dress”, “umbrella”) together. We assume images are labeled with category, but not location, of an instance. We estimate models with an iterative procedure: the current model is used to produce a saliency score, which, together with a homogeneity cue, identifies likely locations for the object (resp. attribute); then those locations are used to produce better models with multiple instance learning. Crucially, the object and attribute models must agree on the potential locations of an object. This means that the more accurate of the two models can guide the improvement of the less accurate model. Our method is evaluated on two data sets of images of real scenes, one in which the attribute is color and the other in which it is material. We show that our joint learning produces improved detectors. We demonstrate generalization by detecting attribute-object pairs which do not appear in our training data. The iteration gives significant improvement in performance.",
"Visual attributes expose human-defined semantics to object recognition models, but existing work largely restricts their influence to mid-level cues during classifier training. Rather than treat attributes as intermediate features, we consider how learning visual properties in concert with object categories can regularize the models for both. Given a low-level visual feature space together with attribute- and object-labeled image data, we learn a shared lower-dimensional representation by optimizing a joint loss function that favors common sparsity patterns across both types of prediction tasks. We adopt a recent kernelized formulation of convex multi-task feature learning, in which one alternates between learning the common features and learning task-specific classifier parameters on top of those features. In this way, our approach discovers any structure among the image descriptors that is relevant to both tasks, and allows the top-down semantics to restrict the hypothesis space of the ultimate object classifiers. We validate the approach on datasets of animals and outdoor scenes, and show significant improvements over traditional multi-class object classifiers and direct attribute prediction models."
]
} |
1604.07360 | 2343924602 | Attributes, or semantic features, have gained popularity in the past few years in domains ranging from activity recognition in video to face verification. Improving the accuracy of attribute classifiers is an important first step in any application which uses these attributes. In most works to date, attributes have been considered to be independent. However, we know this not to be the case. Many attributes are very strongly related, such as heavy makeup and wearing lipstick. We propose to take advantage of attribute relationships in three ways: by using a multi-task deep convolutional neural network (MCNN) sharing the lowest layers amongst all attributes, sharing the higher layers for related attributes, and by building an auxiliary network on top of the MCNN which utilizes the scores from all attributes to improve the final classification of each attribute. We demonstrate the effectiveness of our method by producing results on two challenging publicly available datasets. | CNNs have been used for attribute classification recently, demonstrating impressive results. Pose Aligned Networks for Deep Attributes (PANDA) achieved state-of-the-art performance by combining part-based models with deep learning to train pose-normalized CNNs for attribute classification @cite_1 . Focusing on age and gender, @cite_33 applied deep CNNs to the Adience dataset. used two deep CNNs - one for face localization and the other for attribute recognition - and achieved impressive results on the new CelebA and LFWA datasets, outperforming PANDA on many attributes @cite_16 . Unlike these methods, our MCNN-AUX requires no pre-training, alignment or part extraction. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_33"
],
"mid": [
"1834627138",
"2305141285",
""
],
"abstract": [
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.",
"Technology is disclosed for inferring human attributes from images of people. The attributes can include, for example, gender, age, hair, and or clothing. The technology uses part-based models, e.g., Poselets, to locate multiple normalized part patches from an image. The normalized part patches are provided into trained convolutional neural networks to generate feature data. Each convolution neural network applies multiple stages of convolution operations to one part patch to generate a set of fully connected feature data. The feature data for all part patches are concatenated and then provided into multiple trained classifiers (e.g., linear support vector machines) to predict attributes of the image.",
""
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | Piecewise monotone functions have identical middle space and Reeb graph. The monotone-light factorization entails additional information in the form of the monotone and light factors, as reflected in our notation @math . There is nothing that prevents Reeb graph analysis from recognizing these functions. For example, light factor @math (differently named) was used to show stability of the Reeb graph under perturbations of @math by @cite_4 and Di @cite_53 . | {
"cite_N": [
"@cite_53",
"@cite_4"
],
"mid": [
"2950032752",
"2087459646"
],
"abstract": [
"Reeb graphs are structural descriptors that capture shape properties of a topological space from the perspective of a chosen function. In this work we define a combinatorial metric for Reeb graphs of orientable surfaces in terms of the cost necessary to transform one graph into another by edit operations. The main contributions of this paper are the stability property and the optimality of this edit distance. More precisely, the stability result states that changes in the functions, measured by the maximum norm, imply not greater changes in the corresponding Reeb graphs, measured by the edit distance. The optimality result states that our edit distance discriminates Reeb graphs better than any other metric for Reeb graphs of surfaces satisfying the stability property.",
"One of the prevailing ideas in geometric and topological data analysis is to provide descriptors that encode useful information about hidden objects from observed data. The Reeb graph is one such descriptor for a given scalar function. The Reeb graph provides a simple yet meaningful abstraction of the input domain, and can also be computed efficiently. Given the popularity of the Reeb graph in applications, it is important to understand its stability and robustness with respect to changes in the input function, as well as to be able to compare the Reeb graphs resulting from different functions. In this paper, we propose a metric for Reeb graphs, called the functional distortion distance. Under this distance measure, the Reeb graph is stable against small changes of input functions. At the same time, it remains discriminative at differentiating input functions. In particular, the main result is that the functional distortion distance between two Reeb graphs is bounded from below by (and thus more discriminative than) the bottleneck distance between both the ordinary and extended persistence diagrams for appropriate dimensions. As an application of our results, we analyze a natural simplification scheme for Reeb graphs, and show that persistent features in Reeb graph remains persistent under simplification. Understanding the stability of important features of the Reeb graph under simplification is an interesting problem on its own right, and critical to the practical usage of Reeb graphs."
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | @cite_4 simplify by removing features having persistence below a threshold. Although varilets use persistence to determine lens structure, varilets do not use persistence to drive simplification, due to a "type" mismatch: We have assigned persistence to individual extrema, but we measure scale on a basis function. is the luminance image measure (section ) most closely related to persistence; e.g. contrast persistence for extremal persistence regions (section ). | {
"cite_N": [
"@cite_4"
],
"mid": [
"2087459646"
],
"abstract": [
"One of the prevailing ideas in geometric and topological data analysis is to provide descriptors that encode useful information about hidden objects from observed data. The Reeb graph is one such descriptor for a given scalar function. The Reeb graph provides a simple yet meaningful abstraction of the input domain, and can also be computed efficiently. Given the popularity of the Reeb graph in applications, it is important to understand its stability and robustness with respect to changes in the input function, as well as to be able to compare the Reeb graphs resulting from different functions. In this paper, we propose a metric for Reeb graphs, called the functional distortion distance. Under this distance measure, the Reeb graph is stable against small changes of input functions. At the same time, it remains discriminative at differentiating input functions. In particular, the main result is that the functional distortion distance between two Reeb graphs is bounded from below by (and thus more discriminative than) the bottleneck distance between both the ordinary and extended persistence diagrams for appropriate dimensions. As an application of our results, we analyze a natural simplification scheme for Reeb graphs, and show that persistent features in Reeb graph remains persistent under simplification. Understanding the stability of important features of the Reeb graph under simplification is an interesting problem on its own right, and critical to the practical usage of Reeb graphs."
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | @cite_61 define a regularized equation whose solution provides a complete global image segmentation. Their approach has been refined and solutions have been explored; for a review see @cite_54 . | {
"cite_N": [
"@cite_61",
"@cite_54"
],
"mid": [
"2114487471",
"2289242876"
],
"abstract": [
"This reprint will introduce and study the most basic properties of three new variational problems which are suggested by applications to computer vision. In computer vision, a fundamental problem is to appropriately decompose the domain R of a function g(x,y) of two variables. This problem starts by describing the physical situation which produces images: assume that a three-dimensional world is observed by an eye or camera from some point P and that g1(rho) represents the intensity of the light in this world approaching the point P from a direction rho. If one has a lens at P focusing this light on a retina or a film (in both cases a plane domain R in which we may introduce coordinates x, y), then let g(x,y) be the strength of the light signal striking R at a point with coordinates (x,y); g(x,y) is essentially the same as g1(rho), possibly after a simple transformation given by the geometry of the imaging system. The function g(x,y) defined on the plane domain R will be called an image. What sort of function is g? The light reflected off the surfaces Si of various solid objects Oi visible from P will strike the domain R in various open subsets Ri. When one object O1 is partially in front of another object O2 as seen from P, but some of object O2 appears as the background to the sides of O1, then the open sets R1 and R2 will have a common boundary (the 'edge' of object O1 in the image defined on R) and one usually expects the image g(x,y) to be discontinuous along this boundary. (JHD)",
"This is a review paper on the Mumford-Shah functional, with particular interest in blow-up limits, low dimensions (2 and 3), and connection with fracture theory. It includes in particular a sketch proof of the 3D regularity result near minimal cones from my thesis, and a presentation of a few more recent results that appeared in the last two years."
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | Starting with Witkin @cite_20 and Koenderink @cite_5 , scale space has provided a parameterized family of smoothed images. For an overview, see Lindeberg @cite_32 . | {
"cite_N": [
"@cite_5",
"@cite_32",
"@cite_20"
],
"mid": [
"2022735534",
"28876726",
""
],
"abstract": [
"In practice the relevant details of images exist only over a restricted range of scale. Hence it is important to study the dependence of image structure on the level of resolution. It seems clear enough that visual perception treats images on several levels of resolution simultaneously and that this fact must be important for the study of perception. However, no applicable mathematically formulated theory to deal with such problems appears to exist. In this paper it is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way if the constraint that no spurious detail should be generated when the resolution is diminished, is applied. The structure of this family is governed by the well known diffusion equation (a parabolic, linear, partial differential equation of the second order). As such the structure fits into existing theories that treat the front end of the visual system as a continuous stack of homogeneous layers, characterized by iterated local processing schemes. When resolution is decreased the image becomes less articulated because the extrema (“light and dark blobs”) disappear one after the other. This erosion of structure is a simple process that is similar in every case. As a result any image can be described as a juxtaposed and nested set of light and dark blobs, wherein each blob has a limited range of resolution in which it manifests itself. The structure of the family of derived images permits a derivation of the sampling density required to sample the image at multiple scales of resolution. The natural scale along the resolution axis (leading to an informationally uniform sampling density) is logarithmic, thus the structure is apt for the description of size invariances.",
"A fundamental problem in vision is what types of image operations should be used at the first stages of visual processing. I discuss a principled approach to this problem by describing a generalize ...",
""
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | Varilet scale space is different from these references, but shares their basic intent: a sequential removal of detail, where at each stage the removed detail has smaller scale than what remains. Varilet scale space fully embraces the semantics of "causality" @cite_5 , using simplification's monotone quotient map to track identity (section ). | {
"cite_N": [
"@cite_5"
],
"mid": [
"2022735534"
],
"abstract": [
"In practice the relevant details of images exist only over a restricted range of scale. Hence it is important to study the dependence of image structure on the level of resolution. It seems clear enough that visual perception treats images on several levels of resolution simultaneously and that this fact must be important for the study of perception. However, no applicable mathematically formulated theory to deal with such problems appears to exist. In this paper it is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way if the constraint that no spurious detail should be generated when the resolution is diminished, is applied. The structure of this family is governed by the well known diffusion equation (a parabolic, linear, partial differential equation of the second order). As such the structure fits into existing theories that treat the front end of the visual system as a continuous stack of homogeneous layers, characterized by iterated local processing schemes. When resolution is decreased the image becomes less articulated because the extrema (“light and dark blobs”) disappear one after the other. This erosion of structure is a simple process that is similar in every case. As a result any image can be described as a juxtaposed and nested set of light and dark blobs, wherein each blob has a limited range of resolution in which it manifests itself. The structure of the family of derived images permits a derivation of the sampling density required to sample the image at multiple scales of resolution. The natural scale along the resolution axis (leading to an informationally uniform sampling density) is logarithmic, thus the structure is apt for the description of size invariances."
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | Varilet analysis is complementary to these methods, utilizing a different measurement and counting domain for power law estimation (section ): For any choice of persistence lens, together with any choice of scale measure, the count of varilets by scale is input to Clauset's maximum likelihood estimator for power law exponent @cite_48 (figure ). Again, this approach is simpler and more direct than many, and may therefore be useful. | {
"cite_N": [
"@cite_48"
],
"mid": [
"2000042664"
],
"abstract": [
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out."
]
} |
1604.07361 | 2342848697 | A persistence lens is a hierarchy of disjoint upper and lower level sets of a continuous luminance image's Reeb graph. The boundary components of a persistence lens's interior components are Jordan curves that serve as a hierarchical segmentation of the image, and may be rendered as vector graphics. A persistence lens determines a varilet basis for the luminance image, in which image simplification is realized by subspace projection. Image scale space, and image fractal analysis, result from applying a scale measure to each basis function. | Varilet image analysis provides vector representation of hierarchical image segmentation, by vectorizing the Jordan lens facets' boundary components. The luminance value is identical for all Jordan boundaries of the same lens region. Image vectorization methods typically use image segmentation and/or edge detectors, e.g. Selinger @cite_57 . @cite_1 merge regions of similar color. @cite_15 use heuristics to group pixels into cells. | {
"cite_N": [
"@cite_57",
"@cite_15",
"@cite_1"
],
"mid": [
"",
"2060716992",
"1998683784"
],
"abstract": [
"",
"We describe a novel algorithm for extracting a resolution-independent vector representation from pixel art images, which enables magnifying the results by an arbitrary amount without image degradation. Our algorithm resolves pixel-scale features in the input and converts them into regions with smoothly varying shading that are crisply separated by piecewise-smooth contour curves. In the original image, pixels are represented on a square pixel lattice, where diagonal neighbors are only connected through a single point. This causes thin features to become visually disconnected under magnification by conventional means, and creates ambiguities in the connectedness and separation of diagonal neighbors. The key to our algorithm is in resolving these ambiguities. This enables us to reshape the pixel cells so that neighboring pixels belonging to the same feature are connected through edges, thereby preserving the feature connectivity under magnification. We reduce pixel aliasing artifacts and improve smoothness by fitting spline curves to contours in the image and optimizing their control points.",
"Most segmentation algorithms are composed of several procedures: split and merge, small region elimination, boundary smoothing,..., each depending on several parameters. The introduction of an energy to minimize leads to a drastic reduction of these parameters. The authors prove that the most simple segmentation tool, the “region merging” algorithm, made according to the simplest energy, is enough to compute a local energy minimum belonging to a compact class and to achieve the job of most of the tools mentioned above. The authors explain why “merging” in a variational framework leads to a fast multiscale, multichannel algorithm, with a pyramidal structure. The obtained algorithm is @math , where n is the number of pixels of the picture. This fast algorithm is applied to make grey level and texture segmentation and experimental results are shown."
]
} |
1604.07407 | 2343796309 | Group discussions are essential for organizing every aspect of modern life, from faculty meetings to senate debates, from grant review panels to papal conclaves. While costly in terms of time and organization effort, group discussions are commonly seen as a way of reaching better decisions compared to solutions that do not require coordination between the individuals (e.g. voting)---through discussion, the sum becomes greater than the parts. However, this assumption is not irrefutable: anecdotal evidence of wasteful discussions abounds, and in our own experiments we find that over 30% of discussions are unproductive. We propose a framework for analyzing conversational dynamics in order to determine whether a given task-oriented discussion is worth having or not. We exploit conversational patterns reflecting the flow of ideas and the balance between the participants, as well as their linguistic choices. We apply this framework to conversations naturally occurring in an online collaborative world exploration game developed and deployed to support this research. Using this setting, we show that linguistic cues and conversational patterns extracted from the first 20 seconds of a team discussion are predictive of whether it will be a wasteful or a productive one. | Research on success in groups with more than two members is less common. model the grades of 27 group assignments from a class using measures of average entrainment, finding task-specific words to be a strong cue. shows how the affective balance expressed in correlates with performance on engineering tasks, in 30 teams of up to 4 students. In a related study the balance in the first 5 minutes of an interaction is found predictive of performance @cite_21 . None of the research we are aware of controls for initial skill or potential of the members. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1995708924"
],
"abstract": [
"Inspired by research on the role of affect in marital interactions, the authors examined whether affective interaction dynamics occurring within a 5-minute slice can predict pair programming performance. In a laboratory experiment with professional programmers, Group Hedonic Balance, a measure of the balance between positive and negative expressed affect, accounted for up to 35% of the variance in not only subjective but also objective pair programming performance. Implications include a new set of methods to study pair programming interactions and recommendations to improve pair programming performance."
]
} |
1604.07407 | 2343796309 | Group discussions are essential for organizing every aspect of modern life, from faculty meetings to senate debates, from grant review panels to papal conclaves. While costly in terms of time and organization effort, group discussions are commonly seen as a way of reaching better decisions compared to solutions that do not require coordination between the individuals (e.g. voting)---through discussion, the sum becomes greater than the parts. However, this assumption is not irrefutable: anecdotal evidence of wasteful discussions abounds, and in our own experiments we find that over 30% of discussions are unproductive. We propose a framework for analyzing conversational dynamics in order to determine whether a given task-oriented discussion is worth having or not. We exploit conversational patterns reflecting the flow of ideas and the balance between the participants, as well as their linguistic choices. We apply this framework to conversations naturally occurring in an online collaborative world exploration game developed and deployed to support this research. Using this setting, we show that linguistic cues and conversational patterns extracted from the first 20 seconds of a team discussion are predictive of whether it will be a wasteful or a productive one. | In management science, network analysis reveals that certain subgraphs found in long-term, structured indicate better performance, as rated by senior managers @cite_3 ; controlled experiments show that optimal structures depend on the complexity of the task @cite_8 @cite_0 . These studies, as well as much of the research on effective crowdsourcing [lasecki2012real, wang2011diversity, inter alia], do not focus on linguistic and conversational factors. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_8"
],
"mid": [
"2408377713",
"1994558754",
"2159907417"
],
"abstract": [
"We analyze the linguistic behaviour of participants in bilateral electronic negotiations, and discover that particular language characteristics are in contrast with face-to-face negotiations. Language patterns in the later part of electronic negotiation are highly indicative of the successful or unsuccessful outcome of the process, whereas in face-toface negotiations, the first part of the negotiation is more useful for predicting the outcome. We formulate our problem in terms of text classification on negotiation segments of different sizes. The data are represented by a variety of linguistic features that capture the gist of the discussion: negotiationor strategy-related words. We show that, as we consider ever smaller final segments of a negotiation transcript, the negotiationrelated words become more indicative of the negotiation outcome, and give predictions with higher Accuracy than larger segments from the beginning of the process.",
"Over the past several decades, social network research has favored either ego-centric (e.g. employee) or bounded networks (e.g. organization) as the primary unit of analysis. This paper revitalizes a focus on the work group, which includes structural properties of both its individual members and the collection as a whole. In a study of 182 work groups in a global organization, we found that structural holes of leaders within groups as well as core-periphery and hierarchical group structures were negatively associated with performance. We show that these effects hold even after controlling for mean levels of group communication, and discuss implications for the future of network analysis in work groups and informal organizations.",
"Bavelas, Smith and Leavitt (Bavelas, A. 1950. Communication patterns in task-oriented groups. J. Acoustical Soc. Amer. 22, 725--730) have posed the problem: what effect do communication patterns have upon the operation of groups? To study this problem they designed a laboratory situation that is a prototype of those occurring in “natural” organizations existing in government and business. Each member of the group is given certain information. Their task is to assemble this information, use it to make a decision, and then issue orders based on the decision. This design provides a situation stripped of the complexities of large-scale social groups but retaining some essential characteristics of the organizational communication problem. In it we can examine how the communication net affects simultaneously (a) the development of the organization's internal structure, and (b) the group's performance of its operating task."
]
} |
1604.07236 | 2342973890 | In contrast to much previous work that has focused on location classification of tweets restricted to a specific country, here we undertake the task in a broader context by classifying global tweets at the country level, which is so far unexplored in a real-time scenario. We analyse the extent to which a tweet's country of origin can be determined by making use of eight tweet-inherent features for classification. Furthermore, we use two datasets, collected a year apart from each other, to analyse the extent to which a model trained from historical tweets can still be leveraged for classification of new tweets. With classification experiments on all 217 countries in our datasets, as well as on the top 25 countries, we offer some insights into the best use of tweet-inherent features for an accurate country-level classification of tweets. We find that the use of a single feature, such as the use of tweet content alone -- the most widely used feature in previous work -- leaves much to be desired. Choosing an appropriate combination of both tweet content and metadata can actually lead to substantial improvements of between 20% and 50%. We observe that tweet content, the user's self-reported location and the user's real name, all of which are inherent in a tweet and available in a real-time scenario, are particularly useful to determine the country of origin. We also experiment on the applicability of a model trained on historical tweets to classify new tweets, finding that the choice of a particular combination of features whose utility does not fade over time can actually lead to comparable performance, avoiding the need to retrain. However, the difficulty of achieving accurate classification increases slightly for countries with multiple commonalities, especially for English and Spanish speaking countries. | A growing body of research deals with the automated inference of demographic details of Twitter users @cite_29 .
Researchers have attempted to infer attributes of Twitter users such as age @cite_32 @cite_50 , gender @cite_0 @cite_36 @cite_51 @cite_50 , political orientation @cite_35 @cite_45 @cite_16 @cite_31 or a range of social identities @cite_42 . Digging more deeply into the demographics of Twitter users, other researchers have attempted to infer socioeconomic demographics such as occupational class @cite_19 , income @cite_2 and socioeconomic status @cite_23 . Work by @cite_17 has also tried to infer the nationality of users; this work is different from that which we report here in that the country where the tweets were posted from, was already known. | {
"cite_N": [
"@cite_35",
"@cite_36",
"@cite_29",
"@cite_42",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_45",
"@cite_50",
"@cite_2",
"@cite_23",
"@cite_31",
"@cite_16",
"@cite_51",
"@cite_17"
],
"mid": [
"91442942",
"2284070151",
"2167102709",
"2564851358",
"2105543408",
"9292421",
"2166434810",
"2538660910",
"2017729405",
"1948823840",
"2336223491",
"184758014",
"",
"2025153046",
"2000275004"
],
"abstract": [
"In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually-annotated data we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis.",
"Despite significant work on the problem of inferring a Twitter user’s gender from her online content, no systematic investigation has been made into leveraging the most obvious signal of a user’s gender: first name. In this paper, we perform a thorough investigation of the link between gender and first name in English tweets. Our work makes several important contributions. The first and most central contribution is two different strategies for incorporating the user’s self-reported name into a gender classifier. We find that this yields a 20% increase in accuracy over a standard baseline classifier. These classifiers are the most accurate gender inference methods for Twitter data developed to date. In order to evaluate our classifiers, we developed a novel way of obtaining gender-labels for Twitter users that does not require analysis of the user’s profile or textual content. This is our second contribution. Our approach eliminates the troubling issue of a label being somehow derived from the same text that a classifier will use to infer the label. Finally, we built a large dataset of gender-labeled Twitter users and, crucially, have published this dataset for community use. To our knowledge, this is the first gender-labeled Twitter dataset available for researchers. Our hope is that this will provide a basis for comparison of gender inference methods.",
"Every second, the thoughts and feelings of millions of people across the world are recorded in the form of 140-character tweets using Twitter. However, despite the enormous potential presented by this remarkable data source, we still do not have an understanding of the Twitter population itself: Who are the Twitter users? How representative of the overall population are they? In this paper, we take the first steps towards answering these questions by analyzing data on a set of Twitter users representing over 1% of the U.S. population. We develop techniques that allow us to compare the Twitter population to the U.S. population along three axes (geography, gender, and race/ethnicity), and find that the Twitter population is a highly non-uniform sample of the population.",
"We combine social theory and NLP methods to classify English-speaking Twitter users’ online social identity in profile descriptions. We conduct two text classification experiments. In Experiment 1 we use a 5-category online social identity classification based on identity and self-categorization theories. While we are able to automatically classify two identity categories (Relational and Occupational), automatic classification of the other three identities (Political, Ethnic religious and Stigmatized) is challenging. In Experiment 2 we test a merger of such identities based on theoretical arguments. We find that by combining these identities we can improve the predictive performance of the classifiers in the experiment. Our study shows how social theory can be used to guide NLP methods, and how such methods provide input to revisit traditional social theory that is strongly consolidated in offline settings.",
"Social networking sites (SNS) are quickly becoming one of the most popular tools for social interaction and information exchange. Previous research has shown a relationship between users' personality and SNS use. Using a general population sample (N=300), this study furthers such investigations by examining the personality correlates (Neuroticism, Extraversion, Openness-to-Experience, Agreeableness, Conscientiousness, Sociability and Need-for-Cognition) of social and informational use of the two largest SNS: Facebook and Twitter. Age and Gender were also examined. Results showed that personality was related to online socialising and information seeking exchange, though not as influential as some previous research has suggested. In addition, a preference for Facebook or Twitter was associated with differences in personality. The results reveal differential relationships between personality and Facebook and Twitter usage.",
"Accurate prediction of demographic attributes from social media and other informal online content is valuable for marketing, personalization, and legal investigation. This paper describes the construction of a large, multilingual dataset labeled with gender, and investigates statistical models for determining the gender of uncharacterized Twitter users. We explore several different classifier types on this dataset. We show the degree to which classifier accuracy varies based on tweet volumes as well as when various kinds of profile metadata are included in the models. We also perform a large-scale human assessment using Amazon Mechanical Turk. Our methods significantly out-perform both baseline models and almost all humans on the same task.",
"Social media content can be used as a complementary source to the traditional methods for extracting and studying collective social attributes. This study focuses on the prediction of the occupational class for a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user’s occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications.",
"The widespread adoption of social media for political communication creates unprecedented opportunities to monitor the opinions of large numbers of politically active individuals in real time. However, without a way to distinguish between users of opposing political alignments, conflicting signals at the individual level may, in the aggregate, obscure partisan differences in opinion that are important to political strategy. In this article we describe several methods for predicting the political alignment of Twitter users based on the content and structure of their political communication in the run-up to the 2010 U.S. midterm elections. Using a data set of 1,000 manually-annotated individuals, we find that a support vector machine (SVM) trained on hash tag metadata outperforms an SVM trained on the full text of users' tweets, yielding predictions of political affiliations with 91% accuracy. Applying latent semantic analysis to the content of users' tweets we identify hidden structure in the data strongly associated with political affiliation, but do not find that topic detection improves prediction performance. All of these content-based methods are outperformed by a classifier based on the segregated community structure of political information diffusion networks (95% accuracy). We conclude with a practical application of this machinery to web-based political advertising, and outline several approaches to public opinion monitoring based on the techniques developed herein.",
"Social media outlets such as Twitter have become an important forum for peer interaction. Thus the ability to classify latent user attributes, including gender, age, regional origin, and political orientation solely from Twitter user language or similar highly informal content has important applications in advertising, personalization, and recommendation. This paper includes a novel investigation of stacked-SVM-based classification algorithms over a rich set of original features, applied to classifying these four user attributes. It also includes extensive analysis of features and approaches that are effective and not effective in classifying user attributes in Twitter-style informal written genres as distinct from the other primarily spoken genres previously studied in the user-property classification literature. Our models, singly and in ensemble, significantly outperform baseline models in all cases. A detailed analysis of model components and features provides an often entertaining insight into distinctive language-usage variation across gender, age, regional origin and political orientation in modern informal communication.",
"Automatically inferring user demographics from social media posts is useful for both social science research and a range of downstream applications in marketing and politics. We present the first extensive study where user behaviour on Twitter is used to build a predictive model of income. We apply non-linear methods for regression, i.e. Gaussian Processes, achieving strong correlation between predicted and actual user income. This allows us to shed light on the factors that characterise income on Twitter and analyse their interplay with user emotions and sentiment, perceived psycho-demographics and language use expressed through the topics of their posts. Our analysis uncovers correlations between different feature categories and income, some of which reflect common belief e.g. higher perceived education and intelligence indicates higher earnings, known differences e.g. gender and age differences, however, others show novel findings e.g. higher income users express more fear and anger, whereas lower income users express more of the time emotion and opinions.",
"This paper presents a method to classify social media users based on their socioeconomic status. Our experiments are conducted on a curated set of Twitter profiles, where each user is represented by the posted text, topics of discussion, interactive behaviour and estimated impact on the microblogging platform. Initially, we formulate a 3-way classification task, where users are classified as having an upper, middle or lower socioeconomic status. A nonlinear, generative learning approach using a composite Gaussian Process kernel provides significantly better classification accuracy (75%) than a competitive linear alternative. By turning this task into a binary classification – upper vs. medium and lower class – the proposed classifier reaches an accuracy of 82%.",
"This paper addresses the task of user classification in social media, with an application to Twitter. We automatically infer the values of user attributes such as political orientation or ethnicity by leveraging observable information such as the user behavior, network structure and the linguistic content of the user’s Twitter feed. We employ a machine learning approach which relies on a comprehensive set of features derived from such user information. We report encouraging experimental results on 3 tasks with different characteristics: political affiliation detection, ethnicity identification and detecting affinity for a particular business. Finally, our analysis shows that rich linguistic features prove consistently valuable across the 3 tasks and show great promise for additional user classification needs.",
"",
"The rapid growth of social networks has produced an unprecedented amount of user-generated data, which provides an excellent opportunity for text mining. Authorship analysis, an important part of text mining, attempts to learn about the author of the text through subtle variations in the writing styles that occur between gender, age and social groups. Such information has a variety of applications including advertising and law enforcement. One of the most accessible sources of user-generated data is Twitter, which makes the majority of its user data freely available through its data access API. In this study we seek to identify the gender of users on Twitter using Perceptron and Naive Bayes with selected 1 through 5-gram features from tweet text. Stream applications of these algorithms were employed for gender prediction to handle the speed and volume of tweet traffic. Because informal text, such as tweets, cannot be easily evaluated using traditional dictionary methods, n-gram features were implemented in this study to represent streaming tweets. The large number of 1 through 5-grams requires that only a subset of them be used in gender classification, for this reason informative n-gram features were chosen using multiple selection algorithms. In the best case the Naive Bayes and Perceptron algorithms produced accuracy, balanced accuracy, and F-measure above 99%.",
"Twitter user profiles contain rich information that allows researchers to infer particular attributes of users' identities. Knowing identity attributes such as gender, age, and or nationality are a first step in many studies which seek to describe various phenomena related to computational social science. Often, it is through such attributes that studies of social media that focus on, for example, the isolation of foreigners, become possible. However, such characteristics are not often clearly stated by Twitter users, so researchers must turn to other means to ascertain various categories of identity. In this paper, we discuss the challenge of detecting the nationality of Twitter users using rich features from their profiles. In addition, we look at the effectiveness of different features as we go about this task. For the case of a highly diverse country---Qatar---we provide a detailed network analysis with insights into user behaviors and linking preference (or the lack thereof) to other nationalities."
]
} |
1604.07236 | 2342973890 | In contrast to much previous work that has focused on location classification of tweets restricted to a specific country, here we undertake the task in a broader context by classifying global tweets at the country level, which is so far unexplored in a real-time scenario. We analyse the extent to which a tweet's country of origin can be determined by making use of eight tweet-inherent features for classification. Furthermore, we use two datasets, collected a year apart from each other, to analyse the extent to which a model trained from historical tweets can still be leveraged for classification of new tweets. With classification experiments on all 217 countries in our datasets, as well as on the top 25 countries, we offer some insights into the best use of tweet-inherent features for an accurate country-level classification of tweets. We find that the use of a single feature, such as the use of tweet content alone -- the most widely used feature in previous work -- leaves much to be desired. Choosing an appropriate combination of both tweet content and metadata can actually lead to substantial improvements of between 20% and 50%. We observe that tweet content, the user's self-reported location and the user's real name, all of which are inherent in a tweet and available in a real-time scenario, are particularly useful to determine the country of origin. We also experiment on the applicability of a model trained on historical tweets to classify new tweets, finding that the choice of a particular combination of features whose utility does not fade over time can actually lead to comparable performance, avoiding the need to retrain. However, the difficulty of achieving accurate classification increases slightly for countries with multiple commonalities, especially for English and Spanish speaking countries.
| What motivates the present study is the increasing interest in inferring the geographical location of either tweets or Twitter users @cite_18 . The automated inference of tweet location has been studied for different purposes, ranging from data journalism @cite_49 @cite_7 to public health @cite_21 . As well as numerous different techniques, researchers have relied on different settings and pursued different objectives when conducting experiments. Table shows a summary of previous work reported in the scientific literature, outlining the features that each study used to classify tweets by location, the geographic scope of the study, the languages they dealt with, the classification granularity they tried to achieve and used for evaluation, and whether single tweets, aggregated multiple tweets and or user history were used to train the classifier. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_7",
"@cite_49"
],
"mid": [
"2282112529",
"797920393",
"2316346277",
""
],
"abstract": [
"The increasing popularity of the social networking service, Twitter, has made it more involved in day-to-day communications, strengthening social relationships and information dissemination. Conversations on Twitter are now being explored as indicators within early warning systems to alert of imminent natural disasters such as earthquakes and aid prompt emergency responses to crime. Producers are privileged to have limitless access to market perception from consumer comments on social media and microblogs. Targeted advertising can be made more effective based on user profile information such as demography, interests and location. While these applications have proven beneficial, the ability to effectively infer the location of Twitter users has even more immense value. However, accurately identifying where a message originated from or an author's location remains a challenge, thus essentially driving research in that regard. In this paper, we survey a range of techniques applied to infer the location of Twitter users from inception to state of the art. We find significant improvements over time in the granularity levels and better accuracy with results driven by refinements to algorithms and inclusion of more spatial features.",
"Public health applications using social media often require accurate, broad-coverage location information. However, the standard information provided by social media APIs, such as Twitter, cover a limited number of messages. This paper presents Carmen, a geolocation system that can determine structured location information for messages provided by the Twitter API. Our system utilizes geocoding tools and a combination of automatic and manual alias resolution methods to infer location structures from GPS positions and user-provided profile data. We show that our system is accurate and covers many locations, and we demonstrate its utility for improving influenza surveillance.",
"In recent years, there has been a growing trend to use publicly available social media sources within the field of journalism. Breaking news has tight reporting deadlines, measured in minutes not days, but content must still be checked and rumors verified. As such, journalists are looking at automated content analysis to prefilter large volumes of social media content prior to manual verification. This article describes a real-time social media analytics framework for journalists. We extend our previously published geoparsing approach to improve its scalability and efficiency. We develop and evaluate a novel approach to geosemantic feature extraction, classifying evidence in terms of situatedness, timeliness, confirmation, and validity. Our approach works for new unseen news topics. We report results from four experiments using five Twitter datasets crawled during different English-language news events. One of our datasets is the standard TREC 2012 microblog corpus. Our classification results are promising, with F1 scores varying by class from 0.64 to 0.92 for unseen event types. We lastly report results from two case studies during real-world news stories, showcasing different ways our system can assist journalists filter and cross-check content as they examine the trust and veracity of content and sources.",
""
]
} |
1604.07236 | 2342973890 | In contrast to much previous work that has focused on location classification of tweets restricted to a specific country, here we undertake the task in a broader context by classifying global tweets at the country level, which is so far unexplored in a real-time scenario. We analyse the extent to which a tweet's country of origin can be determined by making use of eight tweet-inherent features for classification. Furthermore, we use two datasets, collected a year apart from each other, to analyse the extent to which a model trained from historical tweets can still be leveraged for classification of new tweets. With classification experiments on all 217 countries in our datasets, as well as on the top 25 countries, we offer some insights into the best use of tweet-inherent features for an accurate country-level classification of tweets. We find that the use of a single feature, such as the use of tweet content alone -- the most widely used feature in previous work -- leaves much to be desired. Choosing an appropriate combination of both tweet content and metadata can actually lead to substantial improvements of between 20% and 50%. We observe that tweet content, the user's self-reported location and the user's real name, all of which are inherent in a tweet and available in a real-time scenario, are particularly useful to determine the country of origin. We also experiment on the applicability of a model trained on historical tweets to classify new tweets, finding that the choice of a particular combination of features whose utility does not fade over time can actually lead to comparable performance, avoiding the need to retrain. However, the difficulty of achieving accurate classification increases slightly for countries with multiple commonalities, especially for English and Spanish speaking countries.
| Only a handful of studies have relied solely on the content of a single tweet to infer its location @cite_33 @cite_44 @cite_46 @cite_30 @cite_52 @cite_39 @cite_41 . Again, most of these have actually worked on very restricted geographical areas, with tweets being limited to different regions, such as the United States @cite_46 @cite_41 , four different cities @cite_30 , and New York only @cite_44 . @cite_33 did focus on a broader geographical area, including 3.7k cities all over the world. Nevertheless, their study focused on a limited number of cities, disregarding other locations, and only classified tweets written in English. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_41",
"@cite_52",
"@cite_39",
"@cite_44",
"@cite_46"
],
"mid": [
"",
"2250457198",
"2165442870",
"2288692534",
"2137435333",
"2950066308",
"2142889507"
],
"abstract": [
"",
"Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratio-based approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.",
"We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document's raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.",
"The rise in the use of social networks in the recent years has resulted in an abundance of information on different aspects of everyday social activities that is available online, with the most prominent and timely source of such information being Twitter. This has resulted in a proliferation of tools and applications that can help end-users and large-scale event organizers to better plan and manage their activities. In this process of analysis of the information originating from social networks, an important aspect is that of the geographic coordinates, i.e., geolocalisation, of the relevant information, which is necessary for several applications (e.g., on trending venues, traffic jams, etc.). Unfortunately, only a very small percentage of the twitter posts are geotagged, which significantly restricts the applicability and utility of such applications. In this work, we address this problem by proposing a framework for geolocating tweets that are not geotagged. Our solution is general, and estimates the location from which a post was generated by exploiting the similarities in the content between this post and a set of geotagged tweets, as well as their time-evolution characteristics. Contrary to previous approaches, our framework aims at providing accurate geolocation estimates at fine grain (i.e., within a city). The experimental evaluation with real data demonstrates the efficiency and effectiveness of our approach.",
"The geographical properties of words have recently begun to be exploited for geolocating documents based solely on their text, often in the context of social media and online content. One common approach for geolocating texts is rooted in information retrieval. Given training documents labeled with latitude/longitude coordinates, a grid is overlaid on the Earth and pseudo-documents constructed by concatenating the documents within a given grid cell; then a location for a test document is chosen based on the most similar pseudo-document. Uniform grids are normally used, but they are sensitive to the dispersion of documents over the earth. We define an alternative grid construction using k-d trees that more robustly adapts to data, especially with larger training sets. We also provide a better way of choosing the locations for pseudo-documents. We evaluate these strategies on existing Wikipedia and Twitter corpora, as well as a new, larger Twitter corpus. The adaptive grid achieves competitive results with a uniform grid on small training sets and outperforms it on the large Twitter corpus. The two grid constructions can also be combined to produce consistently strong results across all training sets.",
"Associating geo-coordinates with the content of social media posts can enhance many existing applications and services and enable a host of new ones. Unfortunately, a majority of social media posts are not tagged with geo-coordinates. Even when location data is available, it may be inaccurate, very broad or sometimes fictitious. Contemporary location estimation approaches based on analyzing the content of these posts can identify only broad areas such as a city, which limits their usefulness. To address these shortcomings, this paper proposes a methodology to narrowly estimate the geo-coordinates of social media posts with high accuracy. The methodology relies solely on the content of these posts and prior knowledge of the wide geographical region from where the posts originate. An ensemble of language models, which are smoothed over non-overlapping sub-regions of a wider region, lie at the heart of the methodology. Experimental evaluation using a corpus of over half a million tweets from New York City shows that the approach, on an average, estimates locations of tweets to within just 2.15km of their actual positions.",
"The rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation. In this paper, we present a multi-level generative model that reasons jointly about latent topics and geographical regions. High-level topics such as \"sports\" or \"entertainment\" are rendered differently in each geographic region, revealing topic-specific regional distinctions. Applied to a new dataset of geotagged microblogs, our model recovers coherent topics and their regional variants, while identifying geographic areas of linguistic consistency. The model also enables prediction of an author's geographic location from raw text, outperforming both text regression and supervised topic models."
]
} |
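The ensemble-of-language-models geolocation idea described in the abstracts above (one smoothed model per region, pick the region that best explains the text) can be sketched as follows. This is an illustrative sketch only, not code from any of the cited papers; the function names and toy tweets are hypothetical.

```python
# Toy sketch of content-based tweet geolocation: one add-alpha-smoothed
# unigram language model per region, predict by maximum log-likelihood.
# All names and data here are illustrative assumptions.
import math
from collections import Counter

def train_region_models(labeled_tweets):
    """labeled_tweets: list of (region, text). Returns per-region word counts."""
    models = {}
    for region, text in labeled_tweets:
        models.setdefault(region, Counter()).update(text.lower().split())
    return models

def predict_region(models, text, alpha=1.0):
    """Pick the region whose smoothed model maximizes the text's log-likelihood."""
    vocab = set(w for counts in models.values() for w in counts)
    best, best_ll = None, -math.inf
    for region, counts in models.items():
        total = sum(counts.values())
        ll = sum(math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
                 for w in text.lower().split())
        if ll > best_ll:
            best, best_ll = region, ll
    return best

models = train_region_models([
    ("nyc", "subway brooklyn bagel"),
    ("nyc", "brooklyn bridge subway"),
    ("sf", "fog bart mission burrito"),
    ("sf", "golden gate fog"),
])
print(predict_region(models, "fog over the golden gate"))  # → sf
```

Selecting location-indicative words first (as in the information-gain-ratio abstract above) would simply shrink the vocabulary before training the same models.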
1604.07279 | 2410118306 | Actionness was introduced to quantify the likelihood of containing a generic action instance at a specific location. Accurate and efficient estimation of actionness is important in video analysis and may benefit other relevant tasks such as action recognition and action detection. This paper presents a new deep architecture for actionness estimation, called hybrid fully convolutional network (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN (M-FCN). These two FCNs leverage the strong capacity of deep models to estimate actionness maps from the perspectives of static appearance and dynamic motion, respectively. In addition, the fully convolutional nature of H-FCN allows it to efficiently process videos with arbitrary sizes. Experiments are conducted on the challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the effectiveness of H-FCN on actionness estimation, which demonstrate that our method achieves superior performance to previous ones. Moreover, we apply the estimated actionness maps on action proposal generation and action detection. Our actionness maps advance the current state-of-the-art performance of these tasks substantially. | Chen @cite_20 first studied the problem of actionness from the philosophical and visual perspective of action. They proposed Lattice Conditional Ordinal Random Fields to rank actionness. Our definition of actionness is consistent with theirs, but we introduce a new method, hybrid fully convolutional networks, to estimate it. Moreover, we further apply our actionness maps to the task of action detection. Motivated by object proposals in images @cite_22 @cite_19 , several methods have been developed to generate action proposals in the video domain @cite_7 @cite_41 @cite_5 @cite_46 . Most of these methods generate action proposals based on low-level segmentation, hierarchically merging super-voxels @cite_32 in the spatio-temporal domain.
However, video segmentation is itself a difficult problem and still under active research. Yu @cite_5 exploited human and motion detection algorithms to generate candidate bounding boxes as action proposals. Our method does not rely on any pre-processing technique and directly transforms raw images into actionness maps with fully convolutional networks. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_41",
"@cite_32",
"@cite_19",
"@cite_5",
"@cite_46",
"@cite_20"
],
"mid": [
"2066624635",
"2136539266",
"",
"2081432165",
"2088049833",
"1945129080",
"2018068650",
""
],
"abstract": [
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"Super pixel and objectness algorithms are broadly used as a pre-processing step to generate support regions and to speed-up further computations. Recently, many algorithms have been extended to video in order to exploit the temporal consistency between frames. However, most methods are computationally too expensive for real-time applications. We introduce an online, real-time video super pixel algorithm based on the recently proposed SEEDS super pixels. A new capability is incorporated which delivers multiple diverse samples (hypotheses) of super pixels in the same image or video sequence. The multiple samples are shown to provide a strong cue to efficiently measure the objectness of image windows, and we introduce the novel concept of objectness in temporal windows. Experiments show that the video super pixels achieve comparable performance to state-of-the-art offline methods while running at 30 fps on a single 2.8 GHz i7 CPU. State-of-the-art performance on objectness is also demonstrated, yet orders of magnitude faster and extended to temporal windows in video.",
"",
"Supervoxel segmentation has strong potential to be incorporated into early video analysis as superpixel segmentation has in image analysis. However, there are many plausible supervoxel methods and little understanding as to when and where each is most appropriate. Indeed, we are not aware of a single comparative study on supervoxel segmentation. To that end, we study five supervoxel algorithms in the context of what we consider to be a good supervoxel: namely, spatiotemporal uniformity, object region boundary detection, region compression and parsimony. For the evaluation we propose a comprehensive suite of 3D volumetric quality metrics to measure these desirable supervoxel characteristics. We use three benchmark video data sets with a variety of content-types and varying amounts of human annotations. Our findings have led us to conclusive evidence that the hierarchical graph-based and segmentation by weighted aggregation methods perform best and almost equally-well on nearly all the metrics and are the methods of choice given our proposed assumptions.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/uijlings/SelectiveSearch.html ).",
"In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the actionness of the video tubes. After picking those spatiotemporal paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search.",
"This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.",
""
]
} |
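The @cite_5 abstract above formulates proposal selection as a maximum set coverage problem solved greedily: each candidate tube covers a set of spatio-temporal paths with actionness scores, and proposals are picked to maximize the overall covered score. A minimal sketch of that greedy step, under illustrative names and toy data (not the paper's code):

```python
# Greedy maximum set coverage over actionness scores: repeatedly pick the
# proposal that adds the most not-yet-covered score. Data is hypothetical.
def greedy_max_coverage(proposals, k):
    """proposals: dict name -> {path_id: actionness_score}. Select up to k."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for name, paths in proposals.items():
            if name in chosen:
                continue
            # Marginal gain: score of paths this proposal covers for the first time.
            gain = sum(s for p, s in paths.items() if p not in covered)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:  # nothing left adds coverage
            break
        chosen.append(best)
        covered.update(proposals[best])
    return chosen

props = {
    "A": {1: 0.9, 2: 0.8},  # covers two high-actionness paths
    "B": {2: 0.8, 3: 0.1},  # mostly overlaps A
    "C": {4: 0.7},          # covers a new path
}
print(greedy_max_coverage(props, 2))  # → ['A', 'C']
```

Greedy selection is the standard (1 - 1/e)-approximation for max coverage, which is why it is a natural fit for picking a diverse, high-actionness proposal set.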
1604.07279 | 2410118306 | Actionness was introduced to quantify the likelihood of containing a generic action instance at a specific location. Accurate and efficient estimation of actionness is important in video analysis and may benefit other relevant tasks such as action recognition and action detection. This paper presents a new deep architecture for actionness estimation, called hybrid fully convolutional network (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN (M-FCN). These two FCNs leverage the strong capacity of deep models to estimate actionness maps from the perspectives of static appearance and dynamic motion, respectively. In addition, the fully convolutional nature of H-FCN allows it to efficiently process videos with arbitrary sizes. Experiments are conducted on the challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the effectiveness of H-FCN on actionness estimation, which demonstrate that our method achieves superior performance to previous ones. Moreover, we apply the estimated actionness maps on action proposal generation and action detection. Our actionness maps advance the current state-of-the-art performance of these tasks substantially. | Action detection has been comprehensively studied in previous works @cite_45 @cite_28 @cite_6 @cite_36 @cite_47 @cite_46 @cite_26 @cite_2 . Methods in @cite_6 @cite_45 used the Bag of Visual Words (BoVW) representation to describe actions and utilized a sliding-window scheme for detection. Ke @cite_28 utilized global template matching with volumetric features for event detection. Lan @cite_45 resorted to latent learning to locate actions automatically. Tian @cite_36 extended the 2D deformable part model to 3D cases for localizing actions, and Wang @cite_47 proposed a unified approach to perform action detection and pose estimation by using dynamic-poselets and modeling their relations.
Lu @cite_2 proposed an MRF framework for human action segmentation with hierarchical super-voxel consistency. Jain @cite_46 produced action proposals using super-voxels and utilized hand-crafted features. Gkioxari @cite_26 proposed a similar proposal-based action detection method, but replaced hand-crafted features with deep-learned representations. Our method focuses on actionness estimation and is complementary to these proposal-based action detection methods in the sense that our actionness map can be used to generate proposals. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_36",
"@cite_46",
"@cite_6",
"@cite_45",
"@cite_2",
"@cite_47"
],
"mid": [
"1923332106",
"2137981002",
"2095661305",
"2018068650",
"2162451865",
"2131311058",
"1912148408",
"410625161"
],
"abstract": [
"We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.",
"Real-world actions occur often in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives.",
"Deformable part models have achieved impressive performance for object detection, even on difficult image datasets. This paper explores the generalization of deformable part models from 2D images to 3D spatiotemporal volumes to better study their effectiveness for action detection in video. Actions are treated as spatiotemporal patterns and a deformable part model is generated for each action from a collection of examples. For each action model, the most discriminative 3D sub volumes are automatically selected as parts and the spatiotemporal relations between their locations are learned. By focusing on the most distinctive parts of each action, our models adapt to intra-class variation and show robustness to clutter. Extensive experiments on several video datasets demonstrate the strength of spatiotemporal DPMs for classifying and localizing actions.",
"This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.",
"Actions are spatio-temporal patterns which can be characterized by collections of spatio-temporal invariant features. Detection of actions is to find the re-occurrences (e.g. through pattern matching) of such spatio-temporal patterns. This paper addresses two critical issues in pattern matching-based action detection: (1) efficiency of pattern search in 3D videos and (2) tolerance of intra-pattern variations of actions. Our contributions are two-fold. First, we propose a discriminative pattern matching called naive-Bayes based mutual information maximization (NBMIM) for multi-class action categorization. It improves the state-of-the-art results on standard KTH dataset. Second, a novel search algorithm is proposed to locate the optimal subvolume in the 3D video space for efficient action detection. Our method is purely data-driven and does not rely on object detection, tracking or background subtraction. It can well handle the intra-pattern variations of actions such as scale and speed variations, and is insensitive to dynamic and clutter backgrounds and even partial occlusions. The experiments on versatile datasets including KTH and CMU action datasets demonstrate the effectiveness and efficiency of our method.",
"In this paper we develop an algorithm for action recognition and localization in videos. The algorithm uses a figure-centric visual word representation. Different from previous approaches it does not require reliable human detection and tracking as input. Instead, the person location is treated as a latent variable that is inferred simultaneously with action recognition. A spatial model for an action is learned in a discriminative fashion under a figure-centric representation. Temporal smoothness over video sequences is also enforced. We present results on the UCF-Sports dataset, verifying the effectiveness of our model in situations where detection and tracking of individuals is challenging.",
"Detailed analysis of human action, such as action classification, detection and localization has received increasing attention from the community; datasets like JHMDB have made it plausible to conduct studies analyzing the impact that such deeper information has on the greater action understanding problem. However, detailed automatic segmentation of human action has comparatively been unexplored. In this paper, we take a step in that direction and propose a hierarchical MRF model to bridge low-level video fragments with high-level human motion and appearance; novel higher-order potentials connect different levels of the supervoxel hierarchy to enforce the consistency of the human segmentation by pulling from different segment-scales. Our single layer model significantly outperforms the current state-of-the-art on actionness, and our full model improves upon the single layer baselines in action segmentation.",
"Action detection is of great importance in understanding human motion from video. Compared with action recognition, it not only recognizes action type, but also localizes its spatiotemporal extent. This paper presents a relational model for action detection, which first decomposes human action into temporal “key poses” and then further into spatial “action parts”. Specifically, we start by clustering cuboids around each human joint into dynamic-poselets using a new descriptor. The cuboids from the same cluster share consistent geometric and dynamic structure, and each cluster acts as a mixture of body parts. We then propose a sequential skeleton model to capture the relations among dynamic-poselets. This model unifies the tasks of learning the composites of mixture dynamic-poselets, the spatiotemporal structures of action parts, and the local model for each action part in a single framework. Our model not only allows to localize the action in a video stream, but also enables a detailed pose estimation of an actor. We formulate the model learning problem in a structured SVM framework and speed up model inference by dynamic programming. We conduct experiments on three challenging action detection datasets: the MSR-II dataset, the UCF Sports dataset, and the JHMDB dataset. The results show that our method achieves superior performance to the state-of-the-art methods on these datasets."
]
} |
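The @cite_26 abstract above links per-frame detections into temporally consistent "action tubes". One common way to do this is a Viterbi-style dynamic program that maximizes detection score plus overlap between consecutive boxes. The sketch below illustrates that linking idea under toy data; names and scoring weights are assumptions, not the paper's implementation.

```python
# Link per-frame detections into one action tube by dynamic programming,
# maximizing sum of detection scores + IoU between consecutive boxes.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def link_tube(frames):
    """frames: list of [(box, score), ...] per frame. Returns the best box path."""
    # prev holds (accumulated score, box path so far) for each detection.
    prev = [(s, [b]) for b, s in frames[0]]
    for dets in frames[1:]:
        cur = []
        for b, s in dets:
            # Extend the predecessor that maximizes score + temporal overlap.
            best = max(prev, key=lambda t: t[0] + iou(t[1][-1], b))
            cur.append((best[0] + s + iou(best[1][-1], b), best[1] + [b]))
        prev = cur
    return max(prev, key=lambda t: t[0])[1]

frames = [
    [((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.2)],
    [((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.3)],
]
print(link_tube(frames))  # → [(0, 0, 10, 10), (1, 1, 11, 11)]
```

An actionness map as in H-FCN would supply the per-frame box scores here, after which the same linking recovers spatio-temporal detections.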
1604.07279 | 2410118306 | Actionness was introduced to quantify the likelihood of containing a generic action instance at a specific location. Accurate and efficient estimation of actionness is important in video analysis and may benefit other relevant tasks such as action recognition and action detection. This paper presents a new deep architecture for actionness estimation, called hybrid fully convolutional network (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN (M-FCN). These two FCNs leverage the strong capacity of deep models to estimate actionness maps from the perspectives of static appearance and dynamic motion, respectively. In addition, the fully convolutional nature of H-FCN allows it to efficiently process videos with arbitrary sizes. Experiments are conducted on the challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the effectiveness of H-FCN on actionness estimation, which demonstrate that our method achieves superior performance to previous ones. Moreover, we apply the estimated actionness maps on action proposal generation and action detection. Our actionness maps advance the current state-of-the-art performance of these tasks substantially. | Convolutional neural networks (CNNs) have achieved remarkable success on various tasks, such as object recognition @cite_24 @cite_25 @cite_17 @cite_1 , event recognition @cite_30 @cite_8 , crowd analysis @cite_12 @cite_13 , and so on. Recently, several attempts have been made to apply CNNs to action recognition in videos @cite_4 @cite_15 @cite_29 @cite_0 . Fully convolutional networks (FCNs) were originally introduced for the task of semantic segmentation in images @cite_44 . One advantage of FCNs is that they can take input of arbitrary size and produce semantic maps of corresponding size with efficient inference and learning. To the best of our knowledge, we are the first to apply FCNs to the video domain for actionness estimation.
In particular, we propose an effective hybrid fully convolutional network that leverages both appearance and motion cues for detecting actionness. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_8",
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_0",
"@cite_44",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"1903612395",
"2952186347",
"1900913856",
"1944615693",
"2952186574",
"",
"",
"2952632681",
"2308045930",
"2436685139",
"1686810756",
"1962468782",
"2950179405"
],
"abstract": [
"A considerable portion of web images capture events that occur in our personal lives or social activities. In this paper, we aim to develop an effective method for recognizing events from such images. Despite the sheer amount of study on event recognition, most existing methods rely on videos and are not directly applicable to this task. Generally, events are complex phenomena that involve interactions among people and objects, and therefore analysis of event photos requires techniques that can go beyond recognizing individual objects and carry out joint reasoning based on evidences of multiple aspects. Inspired by the recent success of deep learning, we formulate a multi-layer framework to tackle this problem, which takes into account both visual appearance and the interactions among humans and objects, and combines them via semantic fusion. An important issue arising here is that humans and objects discovered by detectors are in the form of bounding boxes, and there is no straightforward way to represent their interactions and incorporate them with a deep network. We address this using a novel strategy that projects the detected instances onto multi-scale spatial maps. On a large dataset with 60,000 images, the proposed method achieved substantial improvement over the state-of-the-art, raising the accuracy of event recognition by over 10%.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"In this paper, we focus on complex event detection in internet videos while also providing the key evidences of the detection results. Convolutional Neural Networks (CNNs) have achieved promising performance in image classification and action recognition tasks. However, it remains an open problem how to use CNNs for video event detection and recounting, mainly due to the complexity and diversity of video events. In this work, we propose a flexible deep CNN infrastructure, namely Deep Event Network (DevNet), that simultaneously detects pre-defined events and provides key spatial-temporal evidences. Taking key frames of videos as input, we first detect the event of interest at the video level by aggregating the CNN features of the key frames. The pieces of evidences which recount the detection results, are also automatically localized, both temporally and spatially. The challenge is that we only have video level labels, while the key evidences usually take place at the frame levels. Based on the intrinsic property of CNNs, we first generate a spatial-temporal saliency map by back passing through DevNet, which then can be used to find the key frames which are most indicative to the event, as well as to localize the specific spatial position, usually an object, in the frame of the highly indicative area. Experiments on the large scale TRECVID 2014 MEDTest dataset demonstrate the promising performance of our method, both for event detection and evidence recounting.",
"Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB-51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"",
"Learning and capturing both appearance and dynamic representations are pivotal for crowd video understanding. Convolutional Neural Networks (CNNs) have shown its remarkable potential in learning appearance representations from images. However, the learning of dynamic representation, and how it can be effectively combined with appearance features for video analysis, remains an open problem. In this study, we propose a novel spatio-temporal CNN, named Slicing CNN (S-CNN), based on the decomposition of 3D feature maps into 2D spatio- and 2D temporal-slices representations. The decomposition brings unique advantages: (1) the model is capable of capturing dynamics of different semantic units such as groups and objects, (2) it learns separated appearance and dynamic representations while keeping proper interactions between them, and (3) it exploits the selectiveness of spatial filters to discard irrelevant background clutter for crowd understanding. We demonstrate the effectiveness of the proposed S-CNN model on the WWW crowd video dataset for attribute recognition and observe significant performance improvements to the state-of-the-art methods (62.55% from 51.84% [21]).",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Crowded scene understanding is a fundamental problem in computer vision. In this study, we develop a multi-task deep model to jointly learn and combine appearance and motion features for crowd understanding. We propose crowd motion channels as the input of the deep model and the channel design is inspired by generic properties of crowd systems. To well demonstrate our deep model, we construct a new large-scale WWW Crowd dataset with 10, 000 videos from 8, 257 crowded scenes, and build an attribute set with 94 attributes on WWW. We further measure user study performance on WWW and compare this with the proposed deep models. Extensive experiments show that our deep models display significant performance improvements in cross-scene attribute recognition compared to strong crowd-related feature-based baselines, and the deeply learned features behave a superior performance in multi-task learning.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection."
]
} |
1604.06965 | 2415169254 | In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-powered pulsed laser. Compared to other single-photon imaging techniques like epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly. | The depth limits of confocal and multi-photon microscopy are well studied @cite_36 @cite_11 @cite_37 @cite_23 @cite_30 @cite_28 . Light scattering by tissue is the primary factor that limits the imaging depth: the stronger the scattering, the shallower the depth limit. To meaningfully account for the effect of tissue scattering while comparing the imaging capabilities of various microscopy techniques, depth limits of imaging systems are typically expressed as a multiple of the number of mean free paths (MFPs), or scattering length, within the sample tissue @cite_36 @cite_37 . The mean free path is defined as the average distance a photon travels before a scattering event. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_36",
"@cite_28",
"@cite_23",
"@cite_11"
],
"mid": [
"2011487080",
"2113245224",
"",
"2013297745",
"",
"2141354231"
],
"abstract": [
"In this study we show that it is possible to successfully combine the benefits of light-sheet microscopy, self-reconstructing Bessel beams and two-photon fluorescence excitation to improve imaging in large, scattering media such as cancer cell clusters. We achieved a nearly two-fold increase in axial image resolution and 5–10 fold increase in contrast relative to linear excitation with Bessel beams. The light-sheet penetration depth could be increased by a factor of 3–5 relative to linear excitation with Gaussian beams. These findings arise from both experiments and computer simulations. In addition, we provide a theoretical description of how these results are composed. We investigated the change of image quality along the propagation direction of the illumination beams both for clusters of spheres and tumor multicellular spheroids. The results reveal that light-sheets generated by pulsed near-infrared Bessel beams and two photon excitation provide the best image resolution and contrast at a minimum amount of artifacts and signal degradation along the propagation of the beam into the sample.",
"Endogenous fluorescence provides morphological, spectral, and lifetime contrast that can indicate disease states in tissues. Previous studies have demonstrated that two-photon autofluorescence microscopy (2PAM) can be used for noninvasive, three-dimensional imaging of epithelial tissues down to approximately 150 μm beneath the skin surface. We report ex-vivo 2PAM images of epithelial tissue from a human tongue biopsy down to 370 μm below the surface. At greater than 320 μm deep, the fluorescence generated outside the focal volume degrades the image contrast to below one. We demonstrate that these imaging depths can be reached with 160 mW of laser power (2-nJ per pulse) from a conventional 80-MHz repetition rate ultrafast laser oscillator. To better understand the maximum imaging depths that we can achieve in epithelial tissues, we studied image contrast as a function of depth in tissue phantoms with a range of relevant optical properties. The phantom data agree well with the estimated contrast decays from time-resolved Monte Carlo simulations and show maximum imaging depths similar to that found in human biopsy results. This work demonstrates that the low staining inhomogeneity (∼20%) and large scattering coefficient (∼10 mm−1) associated with conventional 2PAM limit the maximum imaging depth to 3 to 5 mean free scattering lengths deep in epithelial tissue.",
"",
"It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing a spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focal signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the created disparity of bright-dark quantum states in space. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.",
"",
"We have analyzed how the maximal imaging depth of two-photon microscopy in scattering samples depends on properties of the sample and the imaging system. We find that the imaging depth increases with increasing numerical aperture and staining inhomogeneity and with decreasing excitation-pulse duration and scattering anisotropy factor, but is ultimately limited by near-surface fluorescence with slight improvements possible using special detection strategies."
]
} |
1604.06965 | 2415169254 | In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-powered pulsed laser. Compared to other single-photon imaging techniques like epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly. | @cite_4 experimentally demonstrated that the depth limit of confocal microscopy is greater than 3-4 MFPs using a 10 line/mm Ronchi ruling embedded in a suspension of polystyrene latex microspheres in water. Their results show that, independent of the ability of the microscope to reject out-of-focus light, there exists a depth limit for confocal microscopy due to the fall-off of signal intensity. The depth limit corresponds to the smallest signal that can be detected by the sensor above the sensor noise floor. They also showed that decreasing the diameter of the pinhole in the confocal microscope improves the rejection of background scattered light; however, the smaller pinhole comes at the cost of reduced light collection efficiency from the sample plane. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2014644083"
],
"abstract": [
"We examine the performance of confocal microscopes designed for probing structures embedded in turbid media. A heuristic scheme is described that combines a numerical Monte Carlo simulation of photon transport in a turbid medium with a geometrical ray trace through the confocal optics. To show the effects of multiple scattering on depth discrimination, we compare results from the Monte Carlo simulations and scalar diffraction theory. Experimental results showing the effects of the pinhole diameter and other variables on imaging performance at various optical depths in suspensions of polystyrene microspheres were found to correspond well with the Monte Carlo simulations. The major conclusion of the paper is that the trade-off between signal level and background scattered-light rejection places a fundamental limit on the sectioning capability of the microscope."
]
} |
1604.06965 | 2415169254 | In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-powered pulsed laser. Compared to other single-photon imaging techniques like epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly. | @cite_35 measured the depth limit of confocal microscopy experimentally using a reflecting grating structure suspended in a solution of latex spheres and water. To simulate the effect of imaging deeper, they increased the concentration of the latex spheres, thereby increasing the number of mean free paths in their sample. When the concentration of the latex spheres was increased to 2 | {
"cite_N": [
"@cite_35"
],
"mid": [
"2083575776"
],
"abstract": [
"We investigate limits of the confocal microscope when applied to imaging through scattering layers and compare its performance with that of a correlation (heterodyne) microscope. Confocal laser scanning microscopy is shown to make possible imaging through scattering media owing to the spatial filtering of the signal back-reflected from the sample. Its performance is limited by the noise of the detection system and or insufficient rejection of scattered light, depending on the sample under investigation. Correlation microscopy with narrow or broad bandwidth light can extend these limits by selective, optical amplification of the image information."
]
} |
1604.06965 | 2415169254 | In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-powered pulsed laser. Compared to other single-photon imaging techniques like epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly. | As an alternative approach to estimate the depth limit of 2PM, @cite_21 used a probabilistic model to estimate the contrast between in-focus and out-of-focus fluorescence. They reported a depth limit of 15-22 MFPs, at which the image contrast falls to one (the cited work states the limit as 10-15 MFPs, calculated at the excitation wavelength, which translates to 15-22 MFPs at the emission wavelength of 560 nm). | {
"cite_N": [
"@cite_21"
],
"mid": [
"2161844736"
],
"abstract": [
"A model for the propagation of a focused light beam in a strongly scattering medium is used to analyse various factors that limit the capability of two-photon fluorescence microscopy (TPFM) to image deep sections of optically thick biological specimens. The TPFM imaging depth is shown to be limited by three main factors: (1) beam broadening as a result of multiple small-angle scattering, leading to a loss of submicron lateral resolution; (2) strong near-surface fluorescence at large imaging depths as a result of the increase in average source power in order to compensate for scattering losses; (3) reduction in useful two-photon fluorescence signal level because of the exponential attenuation of the excitation power. The influence of these factors is examined in a small-angle diffusion approximation of radiative transfer theory. The first two of them are shown to set a fundamental TPFM limit, whereas the last is an instrumental limitation and appears to be the most critical to state-of-the-art commercial two-photon laser scanning microscopy systems for the vast majority of fluorophores in current use."
]
} |
1604.06965 | 2415169254 | In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-powered pulsed laser. Compared to other single-photon imaging techniques like epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly. | Despite the growing popularity of SPIM, the fundamental imaging depth limit of this microscopy technique has yet to be characterized either theoretically or experimentally. @cite_38 reported that in their experiments the signal strength and the noise strength are equal at 10 MFPs, but did not characterize the factors that affect this maximum imaging depth. Here, using the OpenSPIM system @cite_16, we also show a depth limit close to 10 MFPs, and take this result further to identify the major factors that influence this imaging depth. | {
"cite_N": [
"@cite_38",
"@cite_16"
],
"mid": [
"2073193146",
"2056942342"
],
"abstract": [
"Highly flexible fibers enable simultaneous electrical neural recording, optical stimulation and drug delivery in freely moving mice.",
"Light sheet microscopy promises to revolutionize developmental biology by enabling live in toto imaging of entire embryos with minimal phototoxicity. We present detailed instructions for building a compact and customizable Selective Plane Illumination Microscopy (SPIM) system. The integrated OpenSPIM hardware and software platform is shared with the scientific community through a public website, thereby making light sheet microscopy accessible for widespread use and optimization to various applications."
]
} |
1604.06853 | 2415944420 | Cyclic or general overlays may provide multiple paths between publishers and subscribers. However, an advertisement tree and a matching subscription activate only one path for notifications routing in publish subscribe systems. This poses serious challenges in handling network conditions like congestion, and link or broker failures. Further, content-based dynamic routing of notifications requires instantaneous updates in routing paths, which is not a scalable option. This paper introduces a clustering approach with a bit-vector technique for inter-cluster dynamic routing of notifications in a structured cyclic topology that provides multiple paths between publishers and interested subscribers. The advertisement forwarding process exploits the structured nature of the overlay topology to generate advertisement trees of length 1 without generating duplicate messages in the advertisement forwarding process. Issued subscriptions are divided into multiple disjoint subgroups, where each subscription is broadcast to a cluster, which is a limited part of the structured cyclic overlay network. We implemented novel static and intra-cluster dynamic routing algorithms in the proposed overlay topology for our advertisement-based publish subscribe system, called OctopiA. We also performed a pragmatic comparison of our two algorithms with the state-of-the-art. Experiments on a cluster testbed show that our approach generates fewer inter-broker messages, and is scalable. | In Fig. 1(a), publisher P is connected to Broker 1, while an interested subscriber S is connected to Broker 6. Broker 1, which is also the root of the tree of the advertisement issued by P, assigns a unique TID to the advertisement before forwarding it to other brokers. After receiving the advertisement from Broker 4, Broker 5 discards the duplicate of the same advertisement when it is received from Broker 2. Similarly, Broker 6 discards the duplicate when received from Broker 3 @cite_19 . 
This indicates that despite using a unique TID for each advertisement, extra inter-broker messages (IMs) are still generated to detect duplicates. The advertisement tree shown in Fig. 1(a) is of the shortest length, which may also reduce the lengths of the subscription trees. In this case, publications are processed by fewer brokers in the routing paths, which improves throughput and reduces latency. | {
"cite_N": [
"@cite_19"
],
"mid": [
"1608885791"
],
"abstract": [
"This paper develops content-based publish subscribe algorithms to support general overlay topologies, as opposed to traditional acyclic or tree-based topologies. Among other benefits, publication routes can adapt to dynamic conditions by choosing among alternate routing paths, and composite events can be detected at optimal points in the network. The algorithms are implemented in the PADRES publish subscribe system and evaluated in a controlled local environment and a wide-area PlanetLab deployment. Atomic subscription notification delivery time improves by 20 in a well connected network, and composite subscriptions can be processed with 80 less network traffic and notifications delivered with about half the end to end delay."
]
} |
1604.06970 | 2949396800 | We present a probabilistic generative model for inferring a description of coordinated, recursively structured group activities at multiple levels of temporal granularity based on observations of individuals' trajectories. The model accommodates: (1) hierarchically structured groups, (2) activities that are temporally and compositionally recursive, (3) component roles assigning different subactivity dynamics to subgroups of participants, and (4) a nonparametric Gaussian Process model of trajectories. We present an MCMC sampling framework for performing joint inference over recursive activity descriptions and assignment of trajectories to groups, integrating out continuous parameters. We demonstrate the model's expressive power in several simulated and complex real-world scenarios from the VIRAT and UCLA Aerial Event video data sets. | A number of researchers have proposed models that distinguish the different roles that individuals play in a coordinated activity @cite_4 @cite_17 @cite_27 @cite_2 . These models capture the semantics of activities with component structure. It can be difficult to scale role identification in scenes with an arbitrary number of individuals, in particular while properly handling identification of non-participants @cite_2 . A consequence of our joint inference over role assignments and groups is that our model naturally distinguishes and separates participants of different activities playing different roles. | {
"cite_N": [
"@cite_2",
"@cite_27",
"@cite_4",
"@cite_17"
],
"mid": [
"2083779228",
"2057067088",
"2003708924",
"2158785151"
],
"abstract": [
"We present a joint estimation technique of event localization and role assignment when the target video event is described by a scenario. Specifically, to detect multi-agent events from video, our algorithm identifies agents involved in an event and assigns roles to the participating agents. Instead of iterating through all possible agent-role combinations, we formulate the joint optimization problem as two efficient sub-problems: quadratic programming for role assignment followed by linear programming for event localization. Additionally, we reduce the computational complexity significantly by applying role-specific event detectors to each agent independently. We test the performance of our algorithm in natural videos, which contain multiple target events and nonparticipating agents.",
"We present a hierarchical model for human activity recognition in entire multi-person scenes. Our model describes human behaviour at multiple levels of detail, ranging from low-level actions through to high-level events. We also include a model of social roles, the expected behaviours of certain people, or groups of people, in a scene. The hierarchical model includes these varied representations, and various forms of interactions between people present in a scene. The model is trained in a discriminative max-margin framework. Experimental results demonstrate that this model can improve performance at all considered levels of detail, on two challenging datasets.",
"This paper describes a stochastic methodology for the recognition of various types of high-level group activities. Our system maintains a probabilistic representation of a group activity, describing how individual activities of its group members must be organized temporally, spatially, and logically. In order to recognize each of the represented group activities, our system searches for a set of group members that has the maximum posterior probability of satisfying its representation. A hierarchical recognition algorithm utilizing a Markov chain Monte Carlo (MCMC)-based probability distribution sampling has been designed, detecting group activities and finding the acting groups simultaneously. The system has been tested to recognize complex activities such as 'a group of thieves stealing an object from another group' and 'a group assaulting a person'. Videos downloaded from YouTube as well as videos that we have taken are tested. Experimental results show that our system recognizes a wide range of group activities more reliably and accurately, as compared to previous approaches.",
"We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the track-to-role assignments, and changing body posture."
]
} |
1604.06970 | 2949396800 | We present a probabilistic generative model for inferring a description of coordinated, recursively structured group activities at multiple levels of temporal granularity based on observations of individuals' trajectories. The model accommodates: (1) hierarchically structured groups, (2) activities that are temporally and compositionally recursive, (3) component roles assigning different subactivity dynamics to subgroups of participants, and (4) a nonparametric Gaussian Process model of trajectories. We present an MCMC sampling framework for performing joint inference over recursive activity descriptions and assignment of trajectories to groups, integrating out continuous parameters. We demonstrate the model's expressive power in several simulated and complex real-world scenarios from the VIRAT and UCLA Aerial Event video data sets. | Considerable work has been devoted to developing more expressive models in which activities are decomposed into hierarchical levels of description, across different spatial and temporal granularities. Some prior models account for a specific number of hierarchical levels of description, with up to 3 levels being a popular choice @cite_3 @cite_13 @cite_20 @cite_9 . Other models permit a potentially greater number of levels of activity description, but the activity hierarchy is fixed prior to inference @cite_2 @cite_8 @cite_6 . In only a few cases, including our model, are the levels of activity description assessed during inference @cite_15 @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2003708924",
"1925727155",
"",
"100367037",
"2016127016",
"2083779228",
"",
"2163415258",
"2023048829"
],
"abstract": [
"This paper describes a stochastic methodology for the recognition of various types of high-level group activities. Our system maintains a probabilistic representation of a group activity, describing how individual activities of its group members must be organized temporally, spatially, and logically. In order to recognize each of the represented group activities, our system searches for a set of group members that has the maximum posterior probability of satisfying its representation. A hierarchical recognition algorithm utilizing a Markov chain Monte Carlo (MCMC)-based probability distribution sampling has been designed, detecting group activities and finding the acting groups simultaneously. The system has been tested to recognize complex activities such as 'a group of thieves stealing an object from another group' and 'a group assaulting a person'. Videos downloaded from YouTube as well as videos that we have taken are tested. Experimental results show that our system recognizes a wide range of group activities more reliably and accurately, as compared to previous approaches.",
"This paper makes use of recent advances in group tracking and behavior recognition to process large amounts of video surveillance data from an underground railway station and perform a statistical analysis. The most important advantages of our approach are the robustness to process long videos and the capacity to recognize several and different events at once. This analysis automatically brings forward data about the usage of the station and the various behaviors of groups in different hours of the day. This data would be very hard to obtain without an automatic group tracking and behavior recognition method. We present the results and interpretation of one month of processed data from a video surveillance camera in the Torino subway.",
"",
"We present a coherent, discriminative framework for simultaneously tracking multiple people and estimating their collective activities. Instead of treating the two problems separately, our model is grounded in the intuition that a strong correlation exists between a person's motion, their activity, and the motion and activities of other nearby people. Instead of directly linking the solutions to these two problems, we introduce a hierarchy of activity types that creates a natural progression that leads from a specific person's motion to the activity of the group as a whole. Our model is capable of jointly tracking multiple people, recognizing individual activities (atomic activities), the interactions between pairs of people (interaction activities), and finally the behavior of groups of people (collective activities). We also propose an algorithm for solving this otherwise intractable joint inference problem by combining belief propagation with a version of the branch and bound algorithm equipped with integer programming. Experimental results on challenging video datasets demonstrate our theoretical claims and indicate that our model achieves the best collective activity classification results to date.",
"This paper presents an approach to detect and track groups of people in video-surveillance applications, and to automatically recognize their behavior. This method keeps track of individuals moving together by maintaining a spatial and temporal group coherence. First, people are individually detected and tracked. Second, their trajectories are analyzed over a temporal window and clustered using the Mean-Shift algorithm. A coherence value describes how well a set of people can be described as a group. Furthermore, we propose a formal event description language. The group events recognition approach is successfully validated on 4 camera views from 3 datasets: an airport, a subway, a shopping center corridor and an entrance hall.",
"We present a joint estimation technique of event localization and role assignment when the target video event is described by a scenario. Specifically, to detect multi-agent events from video, our algorithm identifies agents involved in an event and assigns roles to the participating agents. Instead of iterating through all possible agent-role combinations, we formulate the joint optimization problem as two efficient sub problems-quadratic programming for role assignment followed by linear programming for event localization. Additionally, we reduce the computational complexity significantly by applying role-specific event detectors to each agent independently. We test the performance of our algorithm in natural videos, which contain multiple target events and nonparticipating agents.",
"",
"We propose a discriminative model for recognizing group activities. Our model jointly captures the group activity, the individual person actions, and the interactions among them. Two new types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. Different from most of the previous latent structured models which assume a predefined structure for the hidden layer, e.g. a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. Our experimental results demonstrate that by inferring this contextual information together with adaptive structures, the proposed model can significantly improve activity recognition performance.",
"This paper addresses the challenge of recognizing behavior of groups of individuals in unconstraint surveillance environments. As opposed to approaches that rely on agglomerative or decisive hierarchical clustering techniques, we propose to recognize group interactions without making hard decisions about the underlying group structure. Instead we use a probabilistic grouping strategy evaluated from the pairwise spatial-temporal tracking information. A path-based grouping scheme determines a soft segmentation of groups and produces a weighted connection graph where its edges express the probability of individuals belonging to a group. Without further segmenting this graph, we show how a large number of low- and high-level behavior recognition tasks can be performed. Our work builds on a mature multi-camera multi-target person tracking system that operates in real-time. We derive probabilistic models to analyze individual track motion as well as group interactions. We show that the soft grouping can combine with motion analysis elegantly to robustly detect and predict group-level activities. Experimental results demonstrate the efficacy of our approach."
]
} |
1604.06970 | 2949396800 | We present a probabilistic generative model for inferring a description of coordinated, recursively structured group activities at multiple levels of temporal granularity based on observations of individuals' trajectories. The model accommodates: (1) hierarchically structured groups, (2) activities that are temporally and compositionally recursive, (3) component roles assigning different subactivity dynamics to subgroups of participants, and (4) a nonparametric Gaussian Process model of trajectories. We present an MCMC sampling framework for performing joint inference over recursive activity descriptions and assignment of trajectories to groups, integrating out continuous parameters. We demonstrate the model's expressive power in several simulated and complex real-world scenarios from the VIRAT and UCLA Aerial Event video data sets. | A third branch of work has been devoted to modeling activities not just among individuals, but involving groups of actors. In some models, activities include groups, but interactions are still considered between individuals within groups @cite_3 @cite_13 @cite_28 @cite_10 @cite_5 . Other models allow for activities to be performed between groups themselves @cite_20 @cite_21 . Still others, including our model, take group activity modeling a step further, allowing for arbitrary numbers of participants in groups, provided they satisfy group membership criteria while performing the activity @cite_15 @cite_2 @cite_7 @cite_1 . | {
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2163415258",
"2952390756",
"2001190170",
"1980216718",
"2054248557",
"100367037",
"2083779228",
"1538609046",
"",
"1884674945",
"2023048829"
],
"abstract": [
"We propose a discriminative model for recognizing group activities. Our model jointly captures the group activity, the individual person actions, and the interactions among them. Two new types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. Different from most of the previous latent structured models which assume a predefined structure for the hidden layer, e.g. a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. Our experimental results demonstrate that by inferring this contextual information together with adaptive structures, the proposed model can significantly improve activity recognition performance.",
"With the advent of drones, aerial video analysis becomes increasingly important; yet, it has received scant attention in the literature. This paper addresses a new problem of parsing low-resolution aerial videos of large spatial areas, in terms of 1) grouping, 2) recognizing events and 3) assigning roles to people engaged in events. We propose a novel framework aimed at conducting joint inference of the above tasks, as reasoning about each in isolation typically fails in our setting. Given noisy tracklets of people and detections of large objects and scene surfaces (e.g., building, grass), we use a spatiotemporal AND-OR graph to drive our joint inference, using Markov Chain Monte Carlo and dynamic programming. We also introduce a new formalism of spatiotemporal templates characterizing latent sub-events. For evaluation, we have collected and released a new aerial videos dataset using a hex-rotor flying over picnic areas rich with group events. Our results demonstrate that we successfully address above inference tasks under challenging conditions.",
"In this paper, a novel crowd behavior representation, Bag of Trajectory Graphs (BoTG), is presented for dense crowd event recognition. To overcome huge loss of crowd structure and variability of motion in previous particle flow based methods, we design group-level representation beyond particle flow. From the observation that crowd particles are composed of atomic subgroups corresponding to informative behavior patterns, particle trajectories which simulate motion of individuals will be clustered to form groups at the first step. Then we connect nodes in each group as a trajectory graph and discover informative features to depict the graphs. A clip of crowd event can be further described by Bag of Trajectory Graphs (BoTG)-occurrences of behavior patterns, which provides critical clues for categorizing specific crowd event and detecting abnormality. The experimental results of abnormality detection and event recognition on public datasets demonstrate the effectiveness of our proposed BoTG on characterizing the group behaviors in dense crowd.",
"Environments such as schools, public parks and prisons and others that contain a large number of people are typically characterized by frequent and complex social interactions. In order to identify activities and behaviors in such environments, it is necessary to understand the interactions that take place at a group level. To this end, this paper addresses the problem of detecting and predicting suspicious and in particular aggressive behaviors between groups of individuals such as gangs in prison yards. The work builds on a mature multi-camera multi-target person tracking system that operates in real-time and has the ability to handle crowded conditions. We consider two approaches for grouping individuals: (i) agglomerative clustering favored by the computer vision community, as well as (ii) decisive clustering based on the concept of modularity, which is favored by the social network analysis community. We show the utility of such grouping analysis towards the detection of group activities of interest. The presented algorithm is integrated with a system operating in real-time to successfully detect highly realistic aggressive behaviors enacted by correctional officers in a simulated prison environment. We present results from these enactments that demonstrate the efficacy of our approach.",
"Collective behaviors are usually composed of several groups. Considering the interactions among groups, this paper presents a novel framework to parse collective behaviors for video surveillance applications. We first propose a latent hierarchical model (LHM) with varying structure to represent the behavior with multiple groups. Furthermore, we also propose a multi-layer-based (MLB) inference method, where a sample-based heuristic search (SHS) is introduced to infer the group affiliation. And latent SVM is adopted to learn our model. With the proposed LHM, not only are the collective behaviors detected effectively, but also the group affiliation in the collective behaviors is figured out. Experiment results demonstrate the effectiveness of the proposed framework.",
"We present a coherent, discriminative framework for simultaneously tracking multiple people and estimating their collective activities. Instead of treating the two problems separately, our model is grounded in the intuition that a strong correlation exists between a person's motion, their activity, and the motion and activities of other nearby people. Instead of directly linking the solutions to these two problems, we introduce a hierarchy of activity types that creates a natural progression that leads from a specific person's motion to the activity of the group as a whole. Our model is capable of jointly tracking multiple people, recognizing individual activities (atomic activities), the interactions between pairs of people (interaction activities), and finally the behavior of groups of people (collective activities). We also propose an algorithm for solving this otherwise intractable joint inference problem by combining belief propagation with a version of the branch and bound algorithm equipped with integer programming. Experimental results on challenging video datasets demonstrate our theoretical claims and indicate that our model achieves the best collective activity classification results to date.",
"We present a joint estimation technique of event localization and role assignment when the target video event is described by a scenario. Specifically, to detect multi-agent events from video, our algorithm identifies agents involved in an event and assigns roles to the participating agents. Instead of iterating through all possible agent-role combinations, we formulate the joint optimization problem as two efficient sub problems-quadratic programming for role assignment followed by linear programming for event localization. Additionally, we reduce the computational complexity significantly by applying role-specific event detectors to each agent independently. We test the performance of our algorithm in natural videos, which contain multiple target events and nonparticipating agents.",
"Activity understanding plays an essential role in video content analysis and remains a challenging open problem. Most of previous research is limited due to the use of excessively localized features without sufficiently encapsulating the interaction context or focus on simply discriminative models but totally ignoring the interaction patterns. In this paper, a new approach is proposed to recognize human group activities. Firstly, we design a new quaternion descriptor to describe the interactive insight of activities regarding the appearance, dynamic, causality and feedback, respectively. The designed descriptor is capable of delineating the individual and pairwise interactions in the activities. Secondly, considering both activity category and interaction variety, we propose an extended pLSA (probabilistic Latent Semantic Analysis) model with two hidden variables. This extended probabilistic graphic paradigm constructed on the quaternion descriptors facilitates the effective inference of activity categories as well as the exploration of activity interaction patterns. The experiments on the realistic movie and human activity databases validate that the proposed approach outperforms the state-of-the-art results.",
"",
"In this paper, we propose an activity localization method with contextual information of person relationships. Activity localization is a task to determine \"who participates to an activity group\", such as detecting \"walking in a group\" or \"talking in a group\". Usage of contextual information has been providing promising results in the previous activity recognition methods, however, the contextual information has been limited to the local information extracted from one person or only two people relationship. We propose a new context descriptor named \"contextual spatial pyramid model (CSPM)\", which represents the global relationships extracted from the whole of activities in single images. CSPM encodes useful relationships for activity localization, such as \"facing each other\". The experimental result shows CSPM improve activity localization performance, therefore CSPM provides strong contextual cues for activity recognition in complex scenes.",
"This paper addresses the challenge of recognizing behavior of groups of individuals in unconstraint surveillance environments. As opposed to approaches that rely on agglomerative or decisive hierarchical clustering techniques, we propose to recognize group interactions without making hard decisions about the underlying group structure. Instead we use a probabilistic grouping strategy evaluated from the pairwise spatial-temporal tracking information. A path-based grouping scheme determines a soft segmentation of groups and produces a weighted connection graph where its edges express the probability of individuals belonging to a group. Without further segmenting this graph, we show how a large number of low- and high-level behavior recognition tasks can be performed. Our work builds on a mature multi-camera multi-target person tracking system that operates in real-time. We derive probabilistic models to analyze individual track motion as well as group interactions. We show that the soft grouping can combine with motion analysis elegantly to robustly detect and predict group-level activities. Experimental results demonstrate the efficacy of our approach."
]
} |
1604.07070 | 2950948235 | The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size @math . Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets. | Consider the regularized risk minimization problem: @math , where @math is the model parameter, @math is the number of training samples, @math is the loss due to sample @math , and @math is a regularizer. For many structured sparsity regularizers, @math is of the form @math , where @math is a matrix @cite_8 @cite_21 . By introducing an additional @math , the problem can be rewritten as where Problem ) can be conveniently solved by the alternating direction method of multipliers (ADMM) @cite_22 . In general, ADMM considers problems of the form where @math are convex functions, and @math (resp. @math ) are constant matrices (resp. vector). Let @math be a penalty parameter, and @math be the dual variable. At iteration @math , ADMM performs the updates: | {
"cite_N": [
"@cite_21",
"@cite_22",
"@cite_8"
],
"mid": [
"1970554427",
"2164278908",
"2102241087"
],
"abstract": [
"We propose a new penalty function which, when used as regularization for empirical risk minimization procedures, leads to sparse estimators. The support of the sparse vector is typically a union of potentially overlapping groups of co-variates defined a priori, or a set of covariates which tend to be connected to each other when a graph of covariates is given. We study theoretical properties of the estimator, and illustrate its behavior on simulated and breast cancer gene expression data.",
"Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.",
"Motivation: Many complex disease syndromes such as asthma consist of a large number of highly related, rather than independent, clinical phenotypes, raising a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. Although a causal genetic variation may influence a group of highly correlated traits jointly, most of the previous association analyses considered each phenotype separately, or combined results from a set of single-phenotype analyses. Results: We propose a new statistical framework called graph-guided fused lasso to address this issue in a principled way. Our approach represents the dependency structure among the quantitative traits explicitly as a network, and leverages this trait network to encode structured regularizations in a multivariate regression model over the genotypes and traits, so that the genetic markers that jointly influence subgroups of highly correlated traits can be detected with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently, our approach analyzes all of the traits jointly in a single statistical method to discover the genetic markers that perturb a subset of correlated traits jointly rather than a single trait. Using simulated datasets based on the HapMap consortium data and an asthma dataset, we compare the performance of our method with the single-marker analysis, and other sparse regression methods that do not use any structural information in the traits. Our results show that there is a significant advantage in detecting the true causal single nucleotide polymorphisms when we incorporate the correlation pattern in traits using our proposed methods. Availability: Software for GFlasso is available at http://www.sailing.cs.cmu.edu/gflasso.html Contact: sssykim@cs.cmu.edu; ksohn@cs.cmu.edu;"
]
} |
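The generic ADMM updates sketched in the related-work paragraph above (alternating primal minimizations under penalty parameter ρ, followed by a dual-variable step) can be made concrete on the lasso, a standard special case with f the least-squares loss, g the ℓ1 regularizer, A = I, B = −I, c = 0. The sketch below is illustrative only; the function names, ρ value, and iteration count are choices of this example, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, k):
    # proximal operator of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(X, y, lam, rho=1.0, iters=300):
    """Batch ADMM for min_w 0.5||Xw - y||^2 + lam*||z||_1, s.t. w - z = 0.
    Follows the three-step pattern: primal w-update, primal z-update,
    then dual ascent on the scaled multiplier u."""
    d = X.shape[1]
    z = np.zeros(d)
    u = np.zeros(d)                               # scaled dual variable
    M = np.linalg.inv(X.T @ X + rho * np.eye(d))  # cached ridge-like solve
    Xty = X.T @ y
    for _ in range(iters):
        w = M @ (Xty + rho * (z - u))             # w-minimization step
        z = soft_threshold(w + u, lam / rho)      # z-minimization (prox) step
        u = u + w - z                             # dual variable update
    return z

# Sanity check: with X = I, the lasso solution is soft-thresholding of y.
y = np.array([3.0, 0.5, -2.0])
w_hat = lasso_admm(np.eye(3), y, lam=1.0)
assert np.allclose(w_hat, soft_threshold(y, 1.0), atol=1e-4)
```

On this toy instance ADMM converges geometrically, so the iterate matches the closed-form soft-threshold solution to the stated tolerance.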
1604.07070 | 2950948235 | The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size @math . Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets. | With @math in ), solving ) can be computationally expensive when the data set is large. Recently, a number of stochastic and online variants of ADMM have been developed @cite_6 @cite_5 @cite_14 . However, they converge much slower than the batch ADMM, namely, @math vs @math for convex problems, and @math vs linear convergence for strongly convex problems. | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_6"
],
"mid": [
"2510516734",
"25321933",
"1563975843"
],
"abstract": [
"The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/√t) for convex functions and O(log t/t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named Graph-Guided SVM is proposed to demonstrate the usefulness of our algorithm.",
"We develop new stochastic optimization methods that are applicable to a wide range of structured regularizations. Basically our methods are combinations of basic stochastic optimization techniques and Alternating Direction Multiplier Method (ADMM). ADMM is a general framework for optimizing a composite function, and has a wide range of applications. We propose two types of online variants of ADMM, which correspond to online proximal gradient descent and regularized dual averaging respectively. The proposed algorithms are computationally efficient and easy to implement. Our methods yield O(1/√T) convergence of the expected risk. Moreover, the online proximal gradient descent type method yields O(log(T)/T) convergence for a strongly convex loss. Numerical experiments show effectiveness of our methods in learning tasks with structured sparsity such as overlapped group lasso.",
"Online optimization has emerged as a powerful tool in large scale optimization. In this paper, we introduce efficient online algorithms based on the alternating directions method (ADM). We introduce a new proof technique for ADM in the batch setting, which yields the O(1/T) convergence rate of ADM and forms the basis of regret analysis in the online setting. We consider two scenarios in the online setting, based on whether the solution needs to lie in the feasible set or not. In both settings, we establish regret bounds for both the objective function as well as constraint violation for general and strongly convex functions. Preliminary results are presented to illustrate the performance of the proposed algorithms."
]
} |
1604.07070 | 2950948235 | The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size @math . Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets. | For gradient descent, a similar gap in convergence rates between the stochastic and batch algorithms is well-known @cite_15 . As noted by @cite_24 , the underlying reason is that SGD has to control the gradient's variance by gradually reducing its stepsize @math . Recently, by observing that the training set is always finite in practice, a number of variance reduction techniques have been developed that allow the use of a constant stepsize, and consequently faster convergence. In this paper, we focus on the SVRG @cite_24 , which is advantageous in that no extra space for the intermediate gradients or dual variables is needed. The algorithm proceeds in stages. At the beginning of each stage, the gradient @math is computed using a past parameter estimate @math . For each subsequent iteration @math in this stage, the approximate gradient is used, where @math is a mini-batch of size @math from @math . Note that @math is unbiased (i.e., @math ), and its (expected) variance goes to zero asymptotically. | {
"cite_N": [
"@cite_24",
"@cite_15"
],
"mid": [
"2107438106",
"2105875671"
],
"abstract": [
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.",
"We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly."
]
} |
1604.07070 | 2950948235 | The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size @math . Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets. | Recently, variance reduction has also been incorporated into stochastic ADMM. For example, SAG-ADMM @cite_13 is based on SAG @cite_15 ; and SDCA-ADMM @cite_17 is based on SDCA @cite_2 . Both enjoy low iteration complexities and fast convergence. However, SAG-ADMM requires @math space for the old gradients and weights, where @math is the dimensionality of @math . As for SDCA-ADMM, even though its space requirement is lower, it is still proportional to @math , the number of labels in a multiclass multilabel multitask learning problem. As @math can easily be in the thousands or even millions (e.g., Flickr has more than 20 millions tags), SAG-ADMM and SDCA-ADMM can still be problematic. | {
"cite_N": [
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"1939652453",
"2105875671",
"2951481254",
"38875623"
],
"abstract": [
"Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.",
"We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.",
"In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a low per-iteration complexity as existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from @math to @math , where @math is the number of iterations. This matches the convergence rate of the batch ADMM algorithm, but without the need to visit all the samples in each iteration. Experiments on the graph-guided fused lasso demonstrate that the new algorithm is significantly faster than state-of-the-art stochastic and batch ADMM algorithms.",
"We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on alternating direction method of multipliers (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or few sample observations. Moreover, our method can naturally afford mini-batch update and it gives speed up of convergence. We show that, under mild assumptions, our method converges exponentially. The numerical experiments show that our method actually performs efficiently."
]
} |
1604.06877 | 2949728256 | The prevalent scene text detection approach follows four sequential steps comprising character candidate detection, false character candidate removal, text line extraction, and text line verification. However, errors occur and accumulate throughout each of these sequential steps which often lead to low detection performance. To address these issues, we propose a unified scene text detection system, namely Text Flow, by utilizing the minimum cost (min-cost) flow network model. With character candidates detected by cascade boosting, the min-cost flow network model integrates the last three sequential steps into a single process which solves the error accumulation problem at both character level and text line level effectively. The proposed technique has been tested on three public datasets, i.e, ICDAR2011 dataset, ICDAR2013 dataset and a multilingual dataset and it outperforms the state-of-the-art methods on all three datasets with much higher recall and F-score. The good performance on the multilingual dataset shows that the proposed technique can be used for the detection of texts in different languages. | The proposed min-cost flow model is structurally similar to the CRF-based model. However, the CRF model in @cite_35 is applied on cropped words where the layout is simple; the model is unable to determine the corresponding text lines of each character, which needs to be properly addressed in text detection. Besides, the number of detected noisy character candidates is much larger for the text detection task. Hence applying a lexicon scheme similar to that in text recognition is less feasible, especially for languages with thousands of character classes such as Chinese. This is due to the fact that a text/non-text classifier is more reliable and efficient than a character classifier. In addition, for images with multiple scripts, the script needs to be identified first before using the correct lexicon. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2049951199"
],
"abstract": [
"Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15 ) and ICDAR 2003 (nearly 10 )."
]
} |
1604.07124 | 2344981912 | In this paper, we present a multi-user resource allocation framework using fragmented-spectrum synchronous OFDM-CDMA modulation over a frequency-selective fading channel. In particular, given pre-existing communications in the spectrum where the system is operating, a channel sensing and estimation method is used to obtain information of subcarrier availability. Given this information, some real-valued multi-level orthogonal codes, which are orthogonal codes with values of @math , are provided for emerging new users, i.e., cognitive radio users. Additionally, we have obtained a closed form expression for bit error rate of cognitive radio receivers in terms of detection probability of primary users, CR users' sensing time and CR users' signal to noise ratio. Moreover, simulation results obtained in this paper indicate the precision with which the analytical results have been obtained in modeling the aforementioned system. | Various schemes have been proposed to facilitate dynamic spectrum access in such a cognitive radio based network. In @cite_10 , a spectrum-sharing problem in an unlicensed band is studied under game-theoretic approaches. In @cite_9 , the problem of joint multichannel cooperative sensing and resource allocation is investigated where an optimization problem is formulated, which maximizes the weighted sum of secondary users' throughputs while guaranteeing a certain level of protection for the activities of primary users. In @cite_5 , a military cognitive radio based multi-user resource allocation framework for mobile ad hoc networks using multi-carrier DS CDMA modulation has been proposed. In this scheme, in particular, the entire spectral range is first sensed, and then the unused subcarrier bands are assigned to each cognitive radio (CR) user who employs Direct Sequence Spread Spectrum (DS-SS) modulation. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_10"
],
"mid": [
"",
"2169879363",
"2002937495"
],
"abstract": [
"",
"In this paper, the problem of joint multichannel cooperative sensing and resource allocation is investigated. A cognitive radio network with multiple potential channels and multiple secondary users is considered. Each secondary user carries out wideband spectrum sensing to get a test statistic for each channel and transmits the test statistic to a coordinator. After collecting all the test statistics from secondary users, the coordinator makes the estimation as to whether primary users are idle or not in the channels. When a channel is estimated to be free, secondary users can get access to the channel with assigned bandwidth and power. An optimization problem is formulated, which maximizes the weighted sum of secondary users' throughputs while guaranteeing a certain level of protection for the activities of primary users. Although the problem is nonconvex, it is shown that the problem can be solved by bilevel optimization and monotonic programming. This paper is also extended to cases with consideration of proportional and max-min fairness.",
"We study a spectrum sharing problem in an unlicensed band where multiple systems coexist and interfere with each other. Due to asymmetries and selfish system behavior, unfair and inefficient situations may arise. We investigate whether efficiency and fairness can be obtained with self-enforcing spectrum sharing rules. These rules have the advantage of not requiring a central authority that verifies compliance to the protocol. Any self-enforcing protocol must correspond to an equilibrium of a game. We first analyze the possible outcomes of a one shot game, and observe that in many cases an inefficient solution results. However, systems often coexist for long periods and a repeated game is more appropriate to model their interaction. In this repeated game the possibility of building reputations and applying punishments allows for a larger set of self-enforcing outcomes. When this set includes the optimal operating point, efficient, fair, and incentive compatible spectrum sharing becomes possible. We present examples that illustrate that in many cases the performance loss due to selfish behavior is small. We also prove that our results are tight and quantify the best achievable performance in a non-cooperative scenario"
]
} |
1604.07124 | 2344981912 | In this paper, we present a multi-user resource allocation framework using fragmented-spectrum synchronous OFDM-CDMA modulation over a frequency-selective fading channel. In particular, given pre-existing communications in the spectrum where the system is operating, a channel sensing and estimation method is used to obtain information of subcarrier availability. Given this information, some real-valued multi-level orthogonal codes, which are orthogonal codes with values of @math , are provided for emerging new users, i.e., cognitive radio users. Additionally, we have obtained a closed form expression for bit error rate of cognitive radio receivers in terms of detection probability of primary users, CR users' sensing time and CR users' signal to noise ratio. Moreover, simulation results obtained in this paper indicate the precision with which the analytical results have been obtained in modeling the aforementioned system. | In @cite_0 , an interesting dynamic spectrum access method based on non-contiguous carrier interferometry MC-CDMA has been introduced and its performance evaluated. In particular, CR users need to deactivate some of their subcarriers to avoid interference to primary users, and orthogonal Carrier Interferometry codes are used to eliminate the loss of orthogonality among spreading codes caused by deactivating subcarriers. However, compared with simple binary Hadamard-Walsh signature codes with values @math , orthogonal Carrier Interferometry codes, with length @math , take complex values @math for the @math chip of the @math signature code, which increases the complexity of the system from an implementation point of view; also, this scheme does not consider the probabilities of false alarm and detection @cite_3 of primary users in computing the error probability. In a similar study, the authors in @cite_7 , @cite_11 introduced a partially matched filter to eliminate narrow band interference (NBI), which can be interpreted as interference from primary users. However, detection of NBI was assumed to be ideal, which is far from reality. | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_3",
"@cite_11"
],
"mid": [
"2102191992",
"1967582724",
"2071324566",
"2123528608"
],
"abstract": [
"In this paper, we present a quantitative performance evaluation of non-contiguous multi-carrier code division multiple access (NC-MC-CDMA) for cognitive radio in a dynamic spectrum access (DSA) network. In a DSA network, multi-carrier based cognitive radio transceivers need to deactivate some of their subcarriers to avoid interference to primary users. However, by deactivating subcarriers, orthogonality among different spreading codes is lost, leading to poor BER performance. The performance of the NC-MC-CDMA can be improved by adaptive spreading code adjustment to compensate for the loss of orthogonality. However, since Hadamard-Walsh codes only exist for certain code lengths, loss of orthogonality can only be minimized instead of eliminated in many cases. On the other hand, orthogonal Carrier Interferometry codes exist for code lengths of any integer. Hence, by applying non-contiguous CI MC-CDMA into DSA, the loss of orthogonality among spreading codes caused by deactivating subcarriers can be eliminated. As a direct result, adaptive NC-CI MC-CDMA significantly outperforms adaptive NC-MC-CDMA using Hadamard-Walsh codes.",
"",
"In this paper, we present and discuss the physical implementation and testing of a Spectrally-Encoded Spread-Time Code Division Multiple Access (SE ST-CDMA) transceiver. The proposed design is suitable for multiple-access and anti-jamming wireless communication systems. The design and implementation take into account the ability to automatically detect and mitigate tone jammers by altering the corresponding Gold sequence codes assigned uniquely to each user and setting the corresponding code's chip to zero. A simple real-time anti-jamming mechanism capable of dealing with frequency hopping jammers is also proposed. The simulation results are provided in terms of Bit Error Rate (BER) versus Signal to Noise Ratio (SNR) in one case and the number of multi-tone jammers in another, both conditioned on the number of users. Finally, the experimental and simulation results are compared and we observe that they are in very good agreement.",
"In this paper, we study and analyze the performance of matched and partially-matched receivers for a typical ultra wide-band Spectrally-Encoded Spread-Time (SE ST) CDMA in the presence of multiple tone jammers. We investigate a scenario where there is no information about the jammer frequencies in the transmitter side while it is desirable to cancel it in the receiver side. We will demonstrate the performance of the SE ST system would be improved through partially matched filter at the receiver. In addition, we approximate the two-dimensional RAKE receiver by a simple one-dimensional RAKE for the scenario where there are K asynchronous users. Furthermore, closed form formula for the characteristic function of the decision variable will be found and used to evaluate the probability of error. Our numerical results show that the partially-matched receiver outperforms the conventional matched filter in case of high jammer to signal power ratio."
]
} |
1604.07093 | 2342668609 | Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closest to their correct prototypes, in the embedding space, than to others. We show that resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets. | While most machine learning-based object recognition algorithms require a large amount of training data, one-shot learning @cite_11 aims to learn object classifiers from one, or only a few, examples. To compensate for the lack of training instances and enable one-shot learning, knowledge must be transferred from other sources, for example, by sharing features @cite_21 , semantic attributes @cite_46 @cite_14 @cite_45 @cite_9 , or contextual information @cite_28 . However, none of the previous works used the open set vocabulary to help learn the object classifiers. | {
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_45",
"@cite_46",
"@cite_11"
],
"mid": [
"",
"1999378860",
"1992454046",
"2155853132",
"2151575489",
"2003723718",
"2115733720"
],
"abstract": [
"",
"Recognizing objects in images is an active area of research in computer vision. In the last two decades, there has been much progress and there are already object recognition systems operating in commercial products. However, most of the algorithms for detecting objects perform an exhaustive search across all locations and scales in the image comparing local image regions with an object model. That approach ignores the semantic structure of scenes and tries to solve the recognition problem by brute force. In the real world, objects tend to covary with other objects, providing a rich collection of contextual associations. These contextual associations can be used to reduce the search space by looking only in places in which the object is expected to be; this also increases performance, by rejecting patterns that look like the target but appear in unlikely places. Most modeling attempts so far have defined the context of an object in terms of other previously recognized objects. The drawback of this approach is that inferring the context becomes as difficult as detecting each object. An alternative view of context relies on using the entire scene information holistically. This approach is algorithmically attractive since it dispenses with the need for a prior step of individual object recognition. In this paper, we use a probabilistic framework for encoding the relationships between context and object properties and we show how an integrated system provides improved performance. We view this as a significant step toward general purpose machine vision systems.",
"Remarkable performance has been reported to recognize single object classes. Scalability to large numbers of classes however remains an important challenge for today's recognition methods. Several authors have promoted knowledge transfer between classes as a key ingredient to address this challenge. However, in previous work the decision which knowledge to transfer has required either manual supervision or at least a few training examples limiting the scalability of these approaches. In this work we explicitly address the question of how to automatically decide which information to transfer between classes without the need of any human intervention. For this we tap into linguistic knowledge bases to provide the semantic link between sources (what) and targets (where) of knowledge transfer. We provide a rigorous experimental evaluation of different knowledge bases and state-of-the-art techniques from Natural Language Processing which goes far beyond the limited use of language in related work. We also give insights into the applicability (why) of different knowledge sources and similarity measures for knowledge transfer.",
"We develop an object classification method that can learn a novel class from a single training example. In this method, experience with already learned classes is used to facilitate the learning of novel classes. Our classification scheme employs features that discriminate between class and non-class images. For a novel class, new features are derived by selecting features that proved useful for already learned classification tasks, and adapting these features to the new classification task. This adaptation is performed by replacing the features from already learned classes with similar features taken from the novel class. A single example of a novel class is sufficient to perform feature adaptation and achieve useful classification performance. Experiments demonstrate that the proposed algorithm can learn a novel class from a single training example, using 10 additional familiar classes. The performance is significantly improved compared to using no feature adaptation. The robustness of the proposed feature adaptation concept is demonstrated by similar performance gains across 107 widely varying object categories.",
"Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach Propagated Semantic Transfer combines three techniques. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically we adapt a graph-based learning algorithm - so far only used for semi-supervised learning - to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets.",
"The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.",
"Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully."
]
} |
1604.07093 | 2342668609 | Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closest to their correct prototypes, in the embedding space, than to others. We show that resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets. | ZSL aims to recognize novel classes with no training instance by transferring knowledge from source classes. ZSL was first explored with the use of attribute-based semantic representations @cite_23 @cite_3 @cite_46 @cite_15 @cite_8 @cite_41 . This required pre-defined attribute vector prototypes for each class, which is costly for a large-scale dataset. Recently, semantic word vectors were proposed as a way to embed any class name without human annotation effort; they can therefore serve as an alternative semantic representation @cite_42 @cite_0 @cite_35 @cite_25 for ZSL. Semantic word vectors are learned from large-scale text corpora by language models, such as word2vec @cite_17 or GloVe @cite_16 . However, most previous work only uses word vectors as semantic representations in the ZSL setting, and has neither (1) utilized semantic word vectors explicitly for learning better classifiers, nor (2) extended the ZSL setting towards open set image recognition. A notable exception is @cite_25 , which aims to recognize 21K zero-shot classes given a modest vocabulary of 1K source classes; we explore vocabularies that are up to an order of magnitude larger -- 310K. | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_15",
"@cite_41",
"@cite_42",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_46",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"1960364170",
"",
"2141350700",
"",
"",
"2150674759",
"",
"2098411764",
"2003723718",
"2250539671",
"93016980",
"2950133940"
],
"abstract": [
"Object recognition by zero-shot learning (ZSL) aims to recognise objects without seeing any visual examples by learning knowledge transfer between seen and unseen object classes. This is typically achieved by exploring a semantic embedding space such as attribute space or semantic word vector space. In such a space, both seen and unseen class labels, as well as image features can be embedded (projected), and the similarity between them can thus be measured directly. Existing works differ in what embedding space is used and how to project the visual data into the semantic embedding space. Yet, they all measure the similarity in the space using a conventional distance metric (e.g. cosine) that does not consider the rich intrinsic structure, i.e. semantic manifold, of the semantic categories in the embedding space. In this paper we propose to model the semantic manifold in an embedding space using a semantic class label graph. The semantic manifold structure is used to redefine the distance metric in the semantic embedding space for more effective ZSL. The proposed semantic manifold distance is computed using a novel absorbing Markov chain process (AMP), which has a very efficient closed-form solution. The proposed new model improves upon and seamlessly unifies various existing ZSL algorithms. Extensive experiments on both the large scale ImageNet dataset and the widely used Animal with Attribute (AwA) dataset show that our model outperforms significantly the state-of-the-arts.",
"",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.",
"",
"",
"The rapid development of social video sharing platforms has created a huge demand for automatic video classification and annotation techniques, in particular for videos containing social activities of a group of people (e.g. YouTube video of a wedding reception). Recently, attribute learning has emerged as a promising paradigm for transferring learning to sparsely labelled classes in object or single-object short action classification. In contrast to existing work, this paper for the first time, tackles the problem of attribute learning for understanding group social activities with sparse labels. This problem is more challenging because of the complex multi-object nature of social activities, and the unstructured nature of the activity context. To solve this problem, we (1) contribute an unstructured social activity attribute (USAA) dataset with both visual and audio attributes, (2) introduce the concept of semi-latent attribute space and (3) propose a novel model for learning the latent attributes which alleviate the dependence of existing models on exact and exhaustive manual specification of the attribute-space. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multi-media sparse data learning tasks including: multi-task learning, N-shot transfer learning, learning with label noise and importantly zero-shot learning.",
"",
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the embedding space is trained jointly with the image transformation. In other cases the semantic embedding space is established by an independent natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional classification framing of image understanding, particularly in terms of the promise for zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. In this paper, we propose a simple method for constructing an image embedding system from any existing image classifier and a semantic word embedding model, which contains the @math class labels in its vocabulary. Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional training. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
} |
1604.07093 | 2342668609 | Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closest to their correct prototypes, in the embedding space, than to others. We show that the resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets. | Mapping between visual features and semantic entities has been explored in two ways: (1) directly learning the embedding by regressing from visual features to the semantic space using Support Vector Regressors (SVR) @cite_23 @cite_14 or neural networks @cite_19 ; (2) projecting visual features and semantic entities into a common new space, such as SJE @cite_42 , WSABIE @cite_29 , ALE @cite_1 , DeViSE @cite_0 , and CCA @cite_22 @cite_15 . In contrast, our model trains a better visual-semantic embedding from only a few training instances with the help of a large amount of open-set vocabulary items (using a maximum margin strategy). 
Our formulation is inspired by the unified semantic embedding model of @cite_38 ; however, unlike @cite_38 , our formulation is built on word vector representations, contains a data term, and incorporates constraints to unlabeled vocabulary prototypes. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_22",
"@cite_29",
"@cite_42",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_15"
],
"mid": [
"",
"",
"43954826",
"21006490",
"",
"",
"",
"2950276680",
"2098411764",
"2141350700"
],
"abstract": [
"",
"",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate-level semantic representation such as visual attributes or semantic word vectors. Such a semantic representation is shared between an annotated auxiliary dataset and a target dataset with no annotation. A projection from a low-level feature space to the semantic space is learned from the auxiliary dataset and is applied without adaptation to the target dataset. In this paper we identify an inherent limitation with this approach. That is, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. It is ‘transductive’ in that unlabelled target data points are explored for projection adaptation, and ‘multi-view’ in that both low-level feature (view) and multiple semantic representations (views) are embedded to rectify the projection shift. We demonstrate through extensive experiments that our framework (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) achieves state-of-the-art recognition results on image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.",
"Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory.",
"",
"",
"",
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks."
]
} |
1604.07095 | 2949248709 | Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD's performance. The new method improves UCT's performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before. | ALE as a challenging testbed for RL. The Arcade Learning Environment (ALE) includes an ATARI 2600 emulator and about 50 games @cite_4 . These games present a challenging combination of perception and policy selection problems. All of the games have the same high-dimensional observation screen, a 2D array of 7-bit pixels, 160 pixels wide by 210 pixels high. The action space depends on the game but consists of at most 18 discrete actions. Building agents to learn to play ATARI games is challenging due to the high-dimensionality and partial observability of the perceptions, as well as the sparse and often highly-delayed nature of the rewards. There are a number of approaches to building ATARI game players, including some that do not use DL (e.g., bellemare2012sketch , bellemare2012investigating , bellemare2013bayesian , lipovetzky2015classical ). 
Among these are players that use UCT to plan with the ALE emulator as a model @cite_4 , an approach that we build on here. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2150468603"
],
"abstract": [
"In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available."
]
} |
1604.06578 | 2344512656 | Motivated by the practical constraints arising in sensor networks and Internet-of-Things applications, the zero-delay transmission of a Gaussian measurement over a real single-input multiple-output (SIMO) additive white Gaussian noise (AWGN) channel is studied with a low-resolution analog-to-digital converter (ADC) front end. The optimization of the encoder and the decoder is tackled under both the mean squared error (MSE) distortion and the distortion outage probability (DOP) criteria, with an average power constraint on the channel input. Optimal encoder and decoder mappings are identified for the case of a one-bit ADC front end for both criteria. For the MSE distortion, the optimal encoder mapping is shown to be non-linear in general, and it tends to a linear encoder, in the low signal-to-noise ratio (SNR) regime, and to an antipodal digital encoder in the high SNR regime. This is in contrast to the optimality of linear encoding at all SNR values in the presence of a full precision front end. For the DOP criterion, it is shown that the optimal encoder mapping is piecewise constant, and can take only two opposite values when it is non-zero. For both the MSE distortion and the DOP criteria, necessary optimality conditions are derived for a @math -level ADC front end and for multiple one-bit ADC front ends. These conditions then are used to obtain numerically optimized solutions. Extensive numerical results are provided to gain insights into the optimal encoding and decoding functions. | Within the spectrum of information theoretic analysis of joint source-channel coding, while the classical Shannon theoretic infinite block-length regime occupies one end of the spectrum, the zero-delay transmission of a single source sample over a single channel use lies at the opposite end. Intermediate, but still asymptotic, analyses have also been put forth recently by investigating second-order approximations @cite_32 @cite_6 . 
For the problem of zero-delay transmission, there is no explicit method to obtain the optimal encoding and decoding mappings, except for the special case of statistically matched source-channel pairs @cite_29 , @cite_8 . This special case includes the setting studied in this paper when the receiver front end has infinite resolution. Various solutions have been proposed in the literature for specific unmatched source-channel pairs. Notable examples are the space-filling curves proposed by Shannon @cite_7 and Kotelnikov @cite_39 , and later extended in @cite_12 @cite_13 @cite_11 @cite_16 @cite_23 , for the delay-limited transmission of a Gaussian source over an AWGN channel with bandwidth mismatch between the source and the channel. Other solutions include @cite_12 @cite_3 @cite_27 @cite_17 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_3",
"@cite_17",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2135479785",
"2129977719",
"2148164553",
"2106864314",
"",
"2139002467",
"",
"2014414302",
"",
"2107167880",
"",
"2118420528",
"2127170173",
"2127413069"
],
"abstract": [
"A method is developed for representing any communication system geometrically. Messages and the corresponding signals are points in two \"function spaces,\" and the modulation process is a mapping of one space into the other. Using this representation, a number of results in communication theory are deduced concerning expansion and compression of bandwidth and the threshold effect. Formulas are found for the maximum rate of transmission of binary digits over a system when the signal is perturbed by various types of noise. Some of the properties of \"ideal\" systems which transmit at this maximum rate are discussed The equivalent number of binary digits per second for certain information sources is calculated.",
"Fundamental limitations on the performance of any type of communication system transmitting data from an analog source may be obtained by application of a theorem in Shannon's original paper on information theory. In order to transmit the output of an analog source to a sink with mean-squared error ε or less, it is necessary to transmit information from source to sink at rates greater than some minimal rate R(ε) bits per second. If a channel of capacity C bits per second is used, then in order for any communication system to be able to achieve mean-squared error ε or less with this channel it is necessary that R(ε) ≤ C. This inequality can be used to determine the minimum mean-squared error obtainable for a given source-channel pair without discussing any particular communication system. As an example, the minimum mean-squared error is calculated for cases in which the analog source and additive channel noise are stationary, Gaussian processes. The performance of amplitude and angle modulation systems is compared to the theoretically ideal performance obtainable in some of these cases.",
"What makes a source-channel communication system optimal? It is shown that in order to achieve an optimal cost-distortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel conditional distribution, and the channel input cost function. Closed-form necessary and sufficient expressions relating the above entities are given. This generalizes both the separation-based approach as well as the two well-known examples of optimal uncoded communication. The condition of probabilistic matching is extended to certain nonergodic and multiuser scenarios. This leads to a result on optimal single-source broadcast communication.",
"This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C - √(V/n) Q^{-1}(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.",
"",
"A frame for the body of a vehicle, specifically a passenger automobile, has a box-like framework which includes a roof frame, a base frame, corner columns connecting the roof frame and base frame, and at least one supporting structure. The supporting structure is essentially aligned in the longitudinal direction of the vehicle outside the box-like framework and comprises two symmetric fork-shaped girders with their outer prongs connected to corner columns, their inner prongs connected to the base frame and their struts connected to a bumper.",
"",
"This paper studies the zero-delay source-channel coding problem, and specifically the problem of obtaining the vector transformations that optimally map between the m-dimensional source space and k-dimensional channel space, under a given transmission power constraint and for the mean square error distortion. The functional properties of the cost are studied and the necessary conditions for the optimality of the encoder and decoder mappings are derived. An optimization algorithm that imposes these conditions iteratively, in conjunction with the noisy channel relaxation method to mitigate poor local minima, is proposed. The numerical results show strict improvement over prior methods. The numerical approach is extended to the scenario of source-channel coding with decoder side information. The resulting encoding mappings are shown to be continuous relatives of, and in fact subsume as special case, the Wyner-Ziv mappings encountered in digital distributed source coding systems. A well-known result in information theory pertains to the linearity of optimal encoding and decoding mappings in the scalar Gaussian source and channel setting, at all channel signal-to-noise ratios (CSNRs). In this paper, the linearity of optimal coding, beyond the Gaussian source and channel, is considered and the necessary and sufficient condition for linearity of optimal mappings, given a noise (or source) distribution and a specified total power constraint, are derived. It is shown that the Gaussian source-channel pair is unique in the sense that it is the only source-channel pair for which the optimal mappings are linear at more than one CSNR value. Moreover, the asymptotic linearity of optimal mappings is shown for low CSNR if the channel is Gaussian regardless of the source and, at the other extreme, for high CSNR if the source is Gaussian, regardless of the channel. 
The extension to the vector settings is also considered, where, besides the conditions inherited from the scalar case, additional constraints must be satisfied to ensure linearity of the optimal mappings.",
"",
"This paper deals with lossy joint source-channel coding for transmitting memoryless sources over AWGN channels. The scheme is based on the geometrical interpretation of communication by Kotel'nikov and Shannon where amplitude-continuous, time-discrete source samples are mapped directly onto the channel using curves or planes. The source and channel spaces can have different dimensions, thereby achieving either compression or error control, depending on whether the source bandwidth is smaller or larger than the channel bandwidth. We present a general theory for 1:N and M:1 dimension changing mappings, and provide two examples for a Gaussian source and channel where we optimize both a 2:1 bandwidth-reducing and a 1:2 bandwidth-expanding mapping. Both examples show high spectral efficiency and provide both graceful degradation and improvement for imperfect channel state information at the transmitter.",
"",
"This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions, one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon's rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution from this observation, namely erasure-channel, Gaussian-capacity, and reciprocal-channel approximations. The reciprocal-channel approximation, based on dualizing LDPC codes, provides a very accurate model of density evolution for the AWGN channel. We also develop another approximation method, Gaussian approximation, which enables us to visualize infinite-dimensional density evolution and optimization of LDPC codes. We also develop other tools to better understand density evolution. Using these tools, we design some LDPC codes that approach the Shannon limit extremely closely. For multilevel AWGN channels, we design a rate-1/2 code that has a threshold within 0.0063 dB of the Shannon limit of the noisiest level. For binary-input AWGN channels, our best rate-1/2 LDPC code has a threshold within 0.0045 dB of the Shannon limit. 
Simulation results show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7.",
"Two methods for transmission of a continuous amplitude source signal over a continuous amplitude channel with a power constraint are proposed. For both methods, bandwidth reduction is achieved by mapping from a higher dimensional source space to a lower dimensional channel space. In the first system, a source vector is quantized and mapped to a discrete set of points in a multidimensional PAM signal constellation. In the second system the source vector is approximated with a point in a continuous subset of the source space. This operation is followed by mapping the resulting vector to the channel space by a one-to-one continuous mapping resulting in continuous amplitude channel symbols. The proposed methods are evaluated for a memoryless Gaussian source with an additive white Gaussian noise channel, and offer significant gains over previously reported methods. Specifically, in the case of two-dimensional source vectors, and one-dimensional channel vectors, the gap to the optimum performance theoretically attainable is less than 1.0 dB for a wide range of channel signal-to-noise ratios.",
"We consider two codes based on dynamical systems, for transmitting information from a continuous alphabet, discrete-time source over a Gaussian channel. The first code, a homogeneous spherical code, is generated by the linear dynamical system ṡ = As, with A a square skew-symmetric matrix. The second code is generated by the shift map s_n = b_n s_{n-1} (mod 1). The performance of each of these codes is determined by the geometry of its locus or signal set, specifically, its arc length and minimum distance, suitably defined. We show that the performance analyses for these systems are closely related, and derive exact expressions and bounds for relevant geometric parameters. We also observe that the lattice Z^N underlies both modulation systems and we develop a fast decoding algorithm that relies on this observation. Analytic results show that for fixed bandwidth expansion, good scaling behavior of the mean squared error is obtained relative to the channel signal-to-noise ratio (SNR). Particularly interesting is the resulting observation that sampled, exponentially chirped modulation codes are good bandwidth expansion codes."
]
} |
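The shift-map code described in the second abstract admits a compact numerical sketch: expand one source sample into N channel symbols via s_n = b·s_{n−1} (mod 1), and decode by nearest-codeword search. The brute-force grid decoder below is a naive stand-in for the paper's fast lattice decoder, and all parameter choices (b, N, grid size) are our own illustration.

```python
import numpy as np

def shift_map_encode(s0, b=2, n_symbols=8):
    """Bandwidth-expanding code: emit s0 and its shift-map iterates
    s_n = b * s_{n-1} (mod 1)."""
    out, s = [], s0 % 1.0
    for _ in range(n_symbols):
        out.append(s)
        s = (b * s) % 1.0
    return np.array(out)

def shift_map_decode(y, b=2, n_symbols=8, grid=10_000):
    """Naive minimum-distance decoding over a uniform grid of
    candidate source values in [0, 1)."""
    best, best_d = 0.0, np.inf
    for s in np.arange(grid) / grid:
        d = np.sum((y - shift_map_encode(s, b, n_symbols)) ** 2)
        if d < best_d:
            best, best_d = s, d
    return best
```

With channel noise added to `y`, the estimation error of the recovered source value shrinks as the SNR grows; the exhaustive search here is exactly the cost that the paper's ℤ^N-lattice observation is designed to avoid.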
1604.06270 | 2952802255 | The relevance between a query and a document in search can be represented as matching degree between the two objects. Latent space models have been proven to be effective for the task, which are often trained with click-through data. One technical challenge with the approach is that it is hard to train a model for tail queries and tail documents for which there are not enough clicks. In this paper, we propose to address the challenge by learning a latent matching model, using not only click-through data but also semantic knowledge. The semantic knowledge can be categories of queries and documents as well as synonyms of words, manually or automatically created. Specifically, we incorporate semantic knowledge into the objective function by including regularization terms. We develop two methods to solve the learning task on the basis of coordinate descent and gradient descent respectively, which can be employed in different settings. Experimental results on two datasets from an app search engine demonstrate that our model can make effective use of semantic knowledge, and thus can significantly enhance the accuracies of latent matching models, particularly for tail queries. | Topic modeling techniques aim to discover the topics as well as the topic representations of documents in the document collection, and can be used to deal with query document mismatch. Latent semantic indexing (LSI) @cite_20 is one typical non-probabilistic topic model, which decomposes the term-document matrix by Singular Value Decomposition (SVD) under the assumption that topic vectors are orthogonal. Regularized Latent Semantic Indexing (RLSI) @cite_8 formalizes topic modeling as matrix factorization with regularization of @math @math -norm on topic vectors and document representation vectors. 
Probabilistic Latent Semantic Indexing (PLSI) @cite_10 and Latent Dirichlet Allocation (LDA) @cite_9 are two widely used probabilistic topic models, in which each topic is defined as a probability distribution over terms and each document is defined as a probability distribution over topics. By employing one of the topic models, one can project queries and documents into the topic space and calculate their similarities in the space. However, topic modeling does not directly learn the query document matching relation, and thus its ability of dealing with query document mismatch is limited. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_20",
"@cite_8"
],
"mid": [
"1880262756",
"2107743791",
"2147152072",
"2169681319"
],
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous.",
"A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.",
"Topic modeling provides a powerful way to analyze the content of a collection of documents. It has become a popular tool in many research areas, such as text mining, information retrieval, natural language processing, and other related fields. In real-world applications, however, the usefulness of topic modeling is limited due to scalability issues. Scaling to larger document collections via parallelization is an active area of research, but most solutions require drastic steps, such as vastly reducing input vocabulary. In this article we introduce Regularized Latent Semantic Indexing (RLSI)---including a batch version and an online version, referred to as batch RLSI and online RLSI, respectively---to scale up topic modeling. Batch RLSI and online RLSI are as effective as existing topic modeling techniques and can scale to larger datasets without reducing input vocabulary. Moreover, online RLSI can be applied to stream data and can capture the dynamic evolution of topics. Both versions of RLSI formalize topic modeling as a problem of minimizing a quadratic loss function regularized by ℓ1 and/or ℓ2 norm. This formulation allows the learning process to be decomposed into multiple suboptimization problems which can be optimized in parallel, for example, via MapReduce. We particularly propose adopting ℓ1 norm on topics and ℓ2 norm on document representations to create a model with compact and readable topics and which is useful for retrieval. In learning, batch RLSI processes all the documents in the collection as a whole, while online RLSI processes the documents in the collection one by one. We also prove the convergence of the learning of online RLSI. Relevance ranking experiments on three TREC datasets show that batch RLSI and online RLSI perform better than LSI, PLSI, LDA, and NMF, and the improvements are sometimes statistically significant. 
Experiments on a Web dataset containing about 1.6 million documents and 7 million terms, demonstrate a similar boost in performance."
]
} |
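The LSI approach cited as @cite_20 in the row above is easy to sketch: factor the term–document matrix with a truncated SVD, then compare queries and documents by cosine similarity in the latent space. The toy corpus, rank, and variable names below are our own illustration, not from the paper.

```python
import numpy as np

# Toy term-document matrix X (rows = terms, columns = documents d0, d1, d2).
X = np.array([
    [1.0, 1.0, 0.0],   # "car"   in d0, d1
    [1.0, 0.0, 0.0],   # "auto"  in d0
    [0.0, 1.0, 0.0],   # "truck" in d1
    [0.0, 0.0, 1.0],   # "topic" in d2
    [0.0, 0.0, 1.0],   # "model" in d2
])

# LSI: rank-k truncated SVD, X ~= U_k diag(s_k) V_k^T.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Uk, sk = U[:, :k], s[:k]

def fold_in(v):
    """Project a term vector (query or pseudo-document) into the latent space."""
    return v @ Uk / sk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_vecs = [fold_in(X[:, j]) for j in range(X.shape[1])]
query = fold_in(np.array([0.0, 1.0, 0.0, 0.0, 0.0]))  # query = {"auto"}
sims = [cosine(query, d) for d in doc_vecs]
```

Note that the query term "auto" never occurs in d1, yet d1 scores as highly as d0 because "auto" and "truck" share a latent dimension through "car" — exactly the term-mismatch bridging that motivates latent space models.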
1604.06270 | 2952802255 | The relevance between a query and a document in search can be represented as matching degree between the two objects. Latent space models have been proven to be effective for the task, which are often trained with click-through data. One technical challenge with the approach is that it is hard to train a model for tail queries and tail documents for which there are not enough clicks. In this paper, we propose to address the challenge by learning a latent matching model, using not only click-through data but also semantic knowledge. The semantic knowledge can be categories of queries and documents as well as synonyms of words, manually or automatically created. Specifically, we incorporate semantic knowledge into the objective function by including regularization terms. We develop two methods to solve the learning task on the basis of coordinate descent and gradient descent respectively, which can be employed in different settings. Experimental results on two datasets from an app search engine demonstrate that our model can make effective use of semantic knowledge, and thus can significantly enhance the accuracies of latent matching models, particularly for tail queries. | Recently, non-linear matching models have also been studied. For example, @cite_18 propose a model referred to as Deep Structured Semantic Model (DSSM), which performs semantic matching with deep learning techniques. Specifically, the model maps the input term vectors into output vectors of lower dimensions through a multi-layer neural network, and takes cosine similarities between the output vectors as the matching scores. Lu and Li @cite_2 propose a deep architecture for matching short texts, which can also be queries and documents. Their method learns matching relations between words in the two short texts as a hierarchy of topics and takes the topic hierarchy as a deep neural network. | {
"cite_N": [
"@cite_18",
"@cite_2"
],
"mid": [
"2136189984",
"2128892113"
],
"abstract": [
"Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.",
"Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models."
]
} |
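The DSSM scoring scheme in @cite_18 — map sparse term vectors through a nonlinear multi-layer network and score relevance by cosine similarity of the outputs — can be sketched in a few lines. The single shared tower, layer sizes, and random (untrained) weights here are illustrative assumptions only; the real model trains its weights on click-through data and uses word hashing to shrink the vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden, latent = 50, 32, 8

# A minimal two-layer projection tower with tanh nonlinearities.
W1 = rng.normal(scale=0.1, size=(vocab, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, latent))

def project(term_vector):
    """Map a bag-of-terms vector into the low-dimensional latent space."""
    return np.tanh(np.tanh(term_vector @ W1) @ W2)

def match_score(q, d):
    """DSSM-style relevance: cosine similarity of latent representations."""
    zq, zd = project(q), project(d)
    return float(zq @ zd / (np.linalg.norm(zq) * np.linalg.norm(zd)))
```

In the full model the tower parameters are learned discriminatively by maximizing the conditional likelihood of clicked documents given a query, so that the cosine score becomes a trained relevance measure rather than a random projection.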
1604.06270 | 2952802255 | The relevance between a query and a document in search can be represented as matching degree between the two objects. Latent space models have been proven to be effective for the task, which are often trained with click-through data. One technical challenge with the approach is that it is hard to train a model for tail queries and tail documents for which there are not enough clicks. In this paper, we propose to address the challenge by learning a latent matching model, using not only click-through data but also semantic knowledge. The semantic knowledge can be categories of queries and documents as well as synonyms of words, manually or automatically created. Specifically, we incorporate semantic knowledge into the objective function by including regularization terms. We develop two methods to solve the learning task on the basis of coordinate descent and gradient descent respectively, which can be employed in different settings. Experimental results on two datasets from an app search engine demonstrate that our model can make effective use of semantic knowledge, and thus can significantly enhance the accuracies of latent matching models, particularly for tail queries. | Matching between query and document can be generalized as a more general machine learning task, which is indicated in @cite_3 and referred to as learning to match. Learning to match aims to study and develop general machine learning techniques for various applications such as search, collaborative filtering, paraphrasing and textual entailment, question answering, short text conversation, online advertisement, and link prediction, and to improve the state-of-the-art for all the applications. The method proposed in this paper is potentially applicable to other applications as well. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2170245882"
],
"abstract": [
"Relevance is the most important factor to assure users' satisfaction in search and the success of a search engine heavily depends on its performance on relevance. It has been observed that most of the dissatisfaction cases in relevance are due to term mismatch between queries and documents (e.g., query \"NY times\" does not match well with a document only containing \"New York Times\"), because term matching, i.e., the bag-of-words approach, still functions as the main mechanism of modern search engines. It is not exaggerated to say, therefore, that mismatch between query and document poses the most critical challenge in search. Ideally, one would like to see query and document match with each other, if they are topically relevant. Recently, researchers have expended significant effort to address the problem. The major approach is to conduct semantic matching, i.e., to perform more query and document understanding to represent the meanings of them, and perform better matching between the enriched query and document representations. With the availability of large amounts of log data and advanced machine learning techniques, this becomes more feasible and significant progress has been made recently. This survey gives a systematic and detailed introduction to newly developed machine learning technologies for query document matching (semantic matching) in search, particularly web search. It focuses on the fundamental problems, as well as the state-of-the-art solutions of query document matching on form aspect, phrase aspect, word sense aspect, topic aspect, and structure aspect. The ideas and solutions explained may motivate industrial practitioners to turn the research results into products. The methods introduced and the discussions made may also stimulate academic researchers to find new research directions and approaches. 
Matching between query and document is not limited to search and similar problems can be found in question answering, online advertising, cross-language information retrieval, machine translation, recommender systems, link prediction, image annotation, drug design, and other applications, as the general task of matching between objects from two different spaces. The technologies introduced can be generalized into more general machine learning techniques, which is referred to as learning to match in this survey."
]
} |
1604.06431 | 2341997752 | The permanent versus determinant conjecture is a major problem in complexity theory that is equivalent to the separation of the complexity classes VP_ ws and VNP. Mulmuley and Sohoni (SIAM J. Comput., 2001) suggested to study a strengthened version of this conjecture over the complex numbers that amounts to separating the orbit closures of the determinant and padded permanent polynomials. In that paper it was also proposed to separate these orbit closures by exhibiting occurrence obstructions, which are irreducible representations of GL_ n^2 (C), which occur in one coordinate ring of the orbit closure, but not in the other. We prove that this approach is impossible. However, we do not rule out the general approach to the permanent versus determinant problem via multiplicity obstructions as proposed by Mulmuley and Sohoni. | In @cite_19 it was realized that GCT-coefficients can be upper bounded by rectangular Kronecker coefficients: we have @math for @math . In fact, the multiplicity of @math in the coordinate ring of the orbit @math equals the so-called symmetric rectangular Kronecker coefficient @cite_37 , which is upper bounded by @math . | {
"cite_N": [
"@cite_19",
"@cite_37"
],
"mid": [
"2074207216",
"2962747587"
],
"abstract": [
"We suggest an approach based on geometric invariant theory to the fundamental lower bound problems in complexity theory concerning formula and circuit size. Specifically, we introduce the notion of a partially stable point in a reductive-group representation, which generalizes the notion of stability in geometric invariant theory due to Mumford [Geometric Invariant Theory, Springer-Verlag, Berlin, 1965]. Then we reduce fundamental lower bound problems in complexity theory to problems concerning infinitesimal neighborhoods of the orbits of partially stable points. We also suggest an approach to tackle the latter class of problems via construction of explicit obstructions.",
"We discuss the geometry of orbit closures and the asymptotic behavior of Kronecker coefficients in the context of the Geometric Complexity Theory program to prove a variant of Valiant's algebraic analog of the P not equal to NP conjecture. We also describe the precise separation of complexity classes that their program proposes to demonstrate."
]
} |
1604.06431 | 2341997752 | The permanent versus determinant conjecture is a major problem in complexity theory that is equivalent to the separation of the complexity classes VP_ ws and VNP. Mulmuley and Sohoni (SIAM J. Comput., 2001) suggested to study a strengthened version of this conjecture over the complex numbers that amounts to separating the orbit closures of the determinant and padded permanent polynomials. In that paper it was also proposed to separate these orbit closures by exhibiting occurrence obstructions, which are irreducible representations of GL_ n^2 (C), which occur in one coordinate ring of the orbit closure, but not in the other. We prove that this approach is impossible. However, we do not rule out the general approach to the permanent versus determinant problem via multiplicity obstructions as proposed by Mulmuley and Sohoni. | A first attempt towards finding occurrence obstructions was by asymptotic considerations (moment polytopes): this was ruled out in @cite_4 , where it was proven that for all @math there exists a stretching factor @math such that @math . In @cite_5 it was shown that deciding positivity of Kronecker coefficients in general is NP-hard, but this proof fails for rectangular formats. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"1894723027",
"2148706005"
],
"abstract": [
"We show that the problem of deciding positivity of Kronecker coefficients is NP-hard. Previously, this problem was conjectured to be in P, just as for the Littlewood–Richardson coefficients. Our result establishes in a formal way that Kronecker coefficients are more difficult than Littlewood–Richardson coefficients, unless P = NP.",
"We prove that for any partition (λ_1, …, λ_{d^2}) of size ℓd there exists k ≥ 1 such that the tensor square of the irreducible representation of the symmetric group S_{kℓd} with respect to the rectangular partition (kℓ, …, kℓ) contains the irreducible representation corresponding to the stretched partition (kλ_1, …, kλ_{d^2}). We also prove a related approximate version of this statement in which the stretching factor k is effectively bounded in terms of d. We further discuss the consequences for geometric complexity theory which provided the motivation for this work.",
]
} |
1604.06431 | 2341997752 | The permanent versus determinant conjecture is a major problem in complexity theory that is equivalent to the separation of the complexity classes VP_ ws and VNP. Mulmuley and Sohoni (SIAM J. Comput., 2001) suggested to study a strengthened version of this conjecture over the complex numbers that amounts to separating the orbit closures of the determinant and padded permanent polynomials. In that paper it was also proposed to separate these orbit closures by exhibiting occurrence obstructions, which are irreducible representations of GL_ n^2 (C), which occur in one coordinate ring of the orbit closure, but not in the other. We prove that this approach is impossible. However, we do not rule out the general approach to the permanent versus determinant problem via multiplicity obstructions as proposed by Mulmuley and Sohoni. | Clearly, @math is bounded from above by the multiplicity @math of the partition @math in the plethysm @math . Kumar @cite_39 ruled out asymptotic considerations for the GCT-coefficients: assuming the Alon-Tarsi Conjecture @cite_35 , Kumar derived that @math for all @math and even @math . A similar conclusion, unconditional, although with less information on the stretching factor, was obtained in @cite_15 . | {
"cite_N": [
"@cite_35",
"@cite_15",
"@cite_39"
],
"mid": [
"2072414205",
"1921398896",
"2119973367"
],
"abstract": [
"Bounds for the chromatic number and for some related parameters of a graph are obtained by applying algebraic techniques. In particular, the following result is proved: If G is a directed graph with maximum outdegree d, and if the number of Eulerian subgraphs of G with an even number of edges differs from the number of Eulerian subgraphs with an odd number of edges then for any assignment of a set S(v) of d+1 colors for each vertex v of G there is a legal vertex-coloring of G assigning to each vertex v a color from S(v).",
"Let Det_n denote the closure of the GL_ n^2 (C)-orbit of the determinant polynomial det_n with respect to linear substitution. The highest weights (partitions) of irreducible GL_ n^2 (C)-representations occurring in the coordinate ring of Det_n form a finitely generated monoid S(Det_n). We prove that the saturation of S(Det_n) contains all partitions lambda with length at most n and size divisible by n. This implies that representation theoretic obstructions for the permanent versus determinant problem must be holes of the monoid S(Det_n).",
"We show the existence of a large family of representations supported by the orbit closure of the determinant. However, the validity of our result is based on the validity of the celebrated ‘Latin square conjecture’ due to Alon and Tarsi or, more precisely, on the validity of an equivalent ‘column Latin square conjecture’ due to Huang and Rota."
]
} |
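The Alon–Tarsi conjecture invoked in this row (@cite_35, @cite_39) asserts that for every even n the number of even Latin squares of order n differs from the number of odd ones, where the sign of a Latin square is the product of the signs of all its rows and columns viewed as permutations. The small cases can be checked by brute force; this enumeration is our own illustration and is only feasible for tiny n (order 4 already has 576 Latin squares).

```python
from itertools import permutations

def perm_sign(p):
    """Parity (+1/-1) of a permutation given as a tuple of images."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j], j, length = True, p[j], length + 1
            if length % 2 == 0:  # even-length cycle flips the parity
                sign = -sign
    return sign

def alon_tarsi_count(n):
    """Signed count: (# even Latin squares of order n) - (# odd ones)."""
    perms = list(permutations(range(n)))
    total = 0

    def extend(rows):
        nonlocal total
        if len(rows) == n:
            cols = [tuple(r[c] for r in rows) for c in range(n)]
            s = 1
            for p in rows + cols:
                s *= perm_sign(p)
            total += s
            return
        # Add any permutation row that keeps every column symbol-distinct.
        for p in perms:
            if all(p[c] != r[c] for r in rows for c in range(n)):
                extend(rows + [p])

    extend([])
    return total
```

For odd n the count is always 0 (swapping two rows flips the sign and pairs the squares off), which is why the conjecture is stated only for even n; the conjecture is known to hold for the small even orders checkable this way.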
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | Egocentric Vision. First-person vision methods have received renewed attention by the computer vision community @cite_53 @cite_28 @cite_51 . Current methods and datasets have focused on problems such as video summarization @cite_53 @cite_29 , activity recognition @cite_19 , and social interaction @cite_24 . 
In contrast, our work is focused on the problem of looking at people and modeling facial attributes from a first-person vision perspective. Compared to existing egocentric datasets @cite_35 , our Ego-Humans dataset is the first of its kind; it deals with a different task, it is larger in scale, and it also has associated geo-location and weather information, which could be relevant for many other tasks. | {
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_53",
"@cite_29",
"@cite_24",
"@cite_19",
"@cite_51"
],
"mid": [
"",
"1947050545",
"2071711566",
"2034562920",
"",
"2212494831",
"1906662973"
],
"abstract": [
"",
"We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.",
"We present a video summarization approach for egocentric or \"wearable\" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.",
"In this paper, we propose a new method to obtain customized video summarization according to specific user preferences. Our approach is tailored on Cultural Heritage scenario and is designed on identifying candidate shots, selecting from the original streams only the scenes with behavior patterns related to the presence of relevant experiences, and further filtering them in order to obtain a summary matching the requested user's preferences. Our preliminary results show that the proposed approach is able to leverage user's preferences in order to obtain a customized summary, so that different users may extract from the same stream different summaries.",
"",
"We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.",
"We tackle the problem of estimating the 3D pose of an individual's upper limbs (arms+hands) from a chest mounted depth-camera. Importantly, we consider pose estimation during everyday interactions with objects. Past work shows that strong pose+viewpoint priors and depth-based features are crucial for robust performance. In egocentric views, hands and arms are observable within a well defined volume in front of the camera. We call this volume an egocentric workspace. A notable property is that hand appearance correlates with workspace location. To exploit this correlation, we classify arm+hand configurations in a global egocentric coordinate frame, rather than a local scanning window. This greatly simplify the architecture and improves performance. We propose an efficient pipeline which 1) generates synthetic workspace exemplars for training using a virtual chest-mounted camera whose intrinsic parameters match our physical camera, 2) computes perspective-aware depth features on this entire volume and 3) recognizes discrete arm+hand pose classes through a sparse multi-class SVM. We achieve state-of-the-art hand pose recognition performance from egocentric RGB-D images in real-time."
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | Facial Attribute Modeling. @cite_11 proposed a method based on describable facial attributes to assist in face verification and attribute-based face search. @cite_46 and @cite_6 exploited the inter-dependencies of facial attributes to improve classification accuracy. 
@cite_7 built a feature representation that relies on discrimination of images based on first names, and showed improved results in age and gender classification. Berg and Belhumeur @cite_52 introduced part-based one-vs.-one features (POOFs) and showed that features constructed based on identity discrimination are helpful for facial feature classification. @cite_30 proposed a method that jointly learns discriminative binary codes and attribute prediction for face retrieval. | {
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_52",
"@cite_6",
"@cite_46",
"@cite_11"
],
"mid": [
"2201936386",
"2120110979",
"",
"2143352446",
"2085660690",
""
],
"abstract": [
"We address the challenging large-scale content-based face image retrieval problem, intended as searching images based on the presence of specific subject, given one face image of him/her. To this end, one natural demand is a supervised binary code learning method. While the learned codes might be discriminating, people often have a further expectation that whether some semantic message (e.g., visual attributes) can be read from the human-incomprehensible codes. For this purpose, we propose a novel binary code learning framework by jointly encoding identity discriminability and a number of facial attributes into unified binary code. In this way, the learned binary codes can be applied to not only fine-grained face image retrieval, but also facial attributes prediction, which is the very innovation of this work, just like killing two birds with one stone. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on a new purified large-scale web celebrity database, named CFW 60K, with abundant manual identity and attributes annotation, and experimental results exhibit the superiority of our method over state-of-the-art.",
"This paper introduces a new idea in describing people using their first names, i.e., the name assigned at birth. We show that describing people in terms of similarity to a vector of possible first names is a powerful description of facial appearance that can be used for face naming and building facial attribute classifiers. We build models for 100 common first names used in the United States and for each pair, construct a pairwise first-name classifier. These classifiers are built using training images downloaded from the Internet, with no additional user interaction. This gives our approach important advantages in building practical systems that do not require additional human intervention for labeling. We use the scores from each pairwise name classifier as a set of facial attributes. We show several surprising results. Our name attributes predict the correct first names of test faces at rates far greater than chance. The name attributes are applied to gender recognition and to age classification, outperforming state-of-the-art methods with all training images automatically gathered from the Internet.",
"",
"Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach.",
"We propose a novel approach for ranking and retrieval of images based on multi-attribute queries. Existing image retrieval methods train separate classifiers for each word and heuristically combine their outputs for retrieving multiword queries. Moreover, these approaches also ignore the interdependencies among the query terms. In contrast, we propose a principled approach for multi-attribute retrieval which explicitly models the correlations that are present between the attributes. Given a multi-attribute query, we also utilize other attributes in the vocabulary which are not present in the query, for ranking and retrieval. Furthermore, we integrate ranking and retrieval within the same formulation, by posing them as structured prediction problems. Extensive experimental evaluation on the Labeled Faces in the Wild (LFW), FaceTracer and PASCAL VOC datasets show that our approach significantly outperforms several state-of-the-art ranking and retrieval methods.",
""
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | More recently, deep convolutional neural networks have advanced the state of the art in facial attribute classification. @cite_37 proposed pose-aligned networks (PANDA) for deep attribute modeling. @cite_40 proposed a deep model based on facial attributes to perform pairwise face reasoning for social relation prediction. 
@cite_54 achieved state-of-the-art performance on the LFWA and CelebA datasets using a network pre-trained on massive identity labels. Our work, instead, achieves the same or superior performance without requiring manually annotated identity labels for the pre-training step. | {
"cite_N": [
"@cite_40",
"@cite_37",
"@cite_54"
],
"mid": [
"2206328504",
"2147414309",
"1834627138"
],
"abstract": [
"Social relation defines the association, e.g., warm, friendliness, and dominance, between two or more people. Motivated by psychological studies, we investigate if such fine-grained and high-level relation traits can be characterised and quantified from face images in the wild. To address this challenging problem, we propose a deep model that learns a rich face representation to capture gender, expression, head pose, and age-related attributes, and then performs pairwise-face reasoning for relation prediction. To learn from heterogeneous attribute sources, we formulate a new network architecture with a bridging layer to leverage the inherent correspondences among these datasets. It can also cope with missing target attribute labels. Extensive experiments show that our approach is effective for fine-grained social relation learning in images and videos.",
"We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts."
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | Geo-Tagged Image Analysis. Many methods have been proposed for geo-tagged image analysis. In particular, image geo-localization, i.e., the problem of predicting the location of a query image, has received increased attention in the past few years @cite_57 @cite_43 @cite_38 @cite_16 . 
Other related research includes discovering architectural elements and recognizing city attributes from large geo-tagged image repositories @cite_34 @cite_8 and using location context to improve image classification @cite_9 . More closely related to our work, @cite_48 @cite_10 investigated the geo-dependence of facial features and attributes; however, they used off-the-shelf facial attribute classifiers for this analysis, whereas the goal of our work is to build feature representations so as to improve the accuracy of facial attribute classifiers. | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_48",
"@cite_9",
"@cite_34",
"@cite_57",
"@cite_43",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"1525292367",
"1544515645",
"",
"2055132753",
"2103163130",
"1995288918",
"2087475273",
"2293068667"
],
"abstract": [
"",
"After hundreds of years of human settlement, each city has formed a distinct identity, distinguishing itself from other cities. In this work, we propose to characterize the identity of a city via an attribute analysis of 2 million geo-tagged images from 21 cities over 3 continents. First, we estimate the scene attributes of these images and use this representation to build a higher-level set of 7 city attributes, tailored to the form and function of cities. Then, we conduct the city identity recognition experiments on the geo-tagged images and identify images with salient city identity on each city attribute. Based on the misclassification rate of the city identity recognition, we analyze the visual similarity among different cities. Finally, we discuss the potential application of computer vision to urban planning.",
"While face analysis from images is a well-studied area, little work has explored the dependence of facial appearance on the geographic location from which the image was captured. To fill this gap, we constructed GeoFaces, a large dataset of geotagged face images, and used it to examine the geo-dependence of facial features and attributes, such as ethnicity, gender, or the presence of facial hair. Our analysis illuminates the relationship between raw facial appearance, facial attributes, and geographic location, both globally and in selected major urban areas. Some of our experiments, and the resulting visualizations, confirm prior expectations, such as the predominance of ethnically Asian faces in Asia, while others highlight novel information that can be obtained with this type of analysis, such as the major city with the highest percentage of people with a mustache.",
"",
"Given a large repository of geotagged imagery, we seek to automatically find visual elements, e. g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner. We demonstrate that these elements are visually interpretable and perceptually geo-informative. The discovered visual elements can also support a variety of computational geography tasks, such as mapping architectural correspondences and influences within and across cities, finding representative elements at different geo-spatial scales, and geographically-informed image retrieval.",
"Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.",
"The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database in a similar manner to per-exemplar SVMs in object recognition. Second, as only few positive training examples are available for each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The calibration we propose relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate improved place recognition accuracy of the proposed approach over the previous work.",
"Geographic location is a powerful property for organizing large-scale photo collections, but only a small fraction of online photos are geo-tagged. Most work in automatically estimating geo-tags from image content is based on comparison against models of buildings or landmarks, or on matching to large reference collections of geotagged images. These approaches work well for frequently photographed places like major cities and tourist destinations, but fail for photos taken in sparsely photographed places where few reference photos exist. Here we consider how to recognize general geo-informative attributes of a photo, e.g. the elevation gradient, population density, demographics, etc. of where it was taken, instead of trying to estimate a precise geo-tag. We learn models for these attributes using a large (noisy) set of geo-tagged images from Flickr by training deep convolutional neural networks (CNNs). We evaluate on over a dozen attributes, showing that while automatically recognizing some attributes is very difficult, others can be automatically estimated with about the same accuracy as a human.",
"The facial appearance of a person is a product of many factors, including their gender, age, and ethnicity. Methods for estimating these latent factors directly from an image of a face have been extensively studied for decades. We extend this line of work to include estimating the location where the image was taken. We propose a deep network architecture for making such predictions and demonstrate its superiority to other approaches in an extensive set of quantitative experiments on the GeoFaces dataset. Our experiments show that in 26% of the cases the ground truth location is the topmost prediction, and if we allow ourselves to consider the top five predictions, the accuracy increases to 47%. In both cases, the deep learning based approach significantly outperforms random chance as well as another baseline method."
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | Representation Learning. 
Most high-performance computer vision methods based on deep learning rely on visual representations that are learned based on supervised pre-training, for example, using networks trained on millions of annotated examples such as the ImageNet dataset for general object classification @cite_4 @cite_0 , or relying on massive amounts of identity labels for facial analysis tasks @cite_54 @cite_50 . Our work, instead, is focused on building rich visual representations for person attribute classification without using manual annotations in the pre-training step. | {
"cite_N": [
"@cite_0",
"@cite_54",
"@cite_4",
"@cite_50"
],
"mid": [
"",
"1834627138",
"2108598243",
"2952304308"
],
"abstract": [
"",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set."
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | There is a long history of methods for unsupervised learning of visual representations based on deep learning @cite_41 @cite_14 @cite_49 . 
When large collections of unlabeled still images are available, auto-encoders or methods that optimize for reconstruction of the data are popular solutions to learn features without manual labeling @cite_26 @cite_49 @cite_31 . @cite_27 proposed learning supervised "pretext" tasks between patches within an image as an embedding for unsupervised object discovery. These approaches, however, have not yet proven effective in matching the performance of supervised pre-training methods. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_41",
"@cite_27",
"@cite_49",
"@cite_31"
],
"mid": [
"",
"2950789693",
"2136922672",
"2950187998",
"630242894",
"2025768430"
],
"abstract": [
"",
"We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.",
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"We present a novel architecture, the \"stacked what-where auto-encoders\" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning without relying on sampling during training. An instantiation of SWWAE uses a convolutional net (Convnet) (1998) to encode the input, and employs a deconvolutional net (Deconvnet) (2010) to produce the reconstruction. The objective function includes reconstruction terms that induce the hidden states in the Deconvnet to be similar to those of the Convnet. Each pooling layer produces two sets of variables: the \"what\" which are fed to the next layer, and its complementary variable \"where\" that are fed to the corresponding layer in the generative decoder.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite."
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | When video data is available, additional regularization can be imposed by enforcing temporal coherence @cite_55 @cite_21 or through the so called slow feature analysis @cite_42 . 
More recently, @cite_22 used multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences, combining auto-encoders and prediction of future video frames. | {
"cite_N": [
"@cite_55",
"@cite_21",
"@cite_42",
"@cite_22"
],
"mid": [
"2145038566",
"219040644",
"2146444479",
"2952453038"
],
"abstract": [
"This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance."
]
} |
1604.06433 | 2337242548 | The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art. | Our work is related to other methods that learn visual representations from videos captured by wearable cameras or vehicle-mounted cameras @cite_1 @cite_18 , where awareness of egomotion can be used as a supervisory signal for feature learning. 
In contrast to those methods, however, we leverage the geo-location and weather data that are readily available in wearable sensors as a source of free supervisory signal to learn rich visual representations which are suitable to facial attribute classification. | {
"cite_N": [
"@cite_18",
"@cite_1"
],
"mid": [
"2951590555",
"1947727767"
],
"abstract": [
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain."
]
} |
1604.06558 | 2952514704 | In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly, that can be integrated in a higher level assembly strategy. The system composed by two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive. | @PARASPLIT Dual-arm robots can enhance human-robot interaction for augmenting human capabilities @cite_4 and the way a robot is programmed can be simplified via the introduction of high level semantics @cite_1 . In particular, in @cite_7 , a programming library was developed that allows for a user to program dual-arm robot tasks visually. This is done by using vision and voice recognition to interpret the desired functionality (move down, rotate, approach, etc.). | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_7"
],
"mid": [
"2163533523",
"2094537981",
"2062569962"
],
"abstract": [
"A widespread application of impedance control in dual-arm robotic systems is still a challenging problem. One of the limitations is the absence of a widely-accepted framework for the synthesis of the impedance control parameters that ensure stability of both contact transition and interaction processes and guarantee desired contact performance. The next critical problem relates to planning and programming of complex impedance controlled bimanual operations. The proposed new design, planning and programming framework provides efficient and flexible algorithms and tools which considerably facilitate future dual-arm robot applications in complex assembly tasks. The initial testing with the new Workerbot dual-arm system demonstrates applicability and feasibility of the proposed framework.",
"Flexibility and changeability of assembly processes require a close cooperation between the worker and the automated assembly system. The interaction between human and robots improves the efficiency of individual complex assembly processes, particularly when a robot serves as an intelligent assistant. The paper gives a survey about forms of human–machine cooperation in assembly and available technologies that support the cooperation. Organizational and economic aspects of cooperative assembly including efficient component supply and logistics are also discussed.",
"This study has to do with a method for the intuitive programming of dual arm robots. A task oriented programming procedure is described for this reason, including the proposed dual arm robotics library and the human language. The robotics library aspires after human like capabilities and implements bi-manual operations. This intuitive programming framework is based on a service oriented architecture and is developed in ROS. The user can easily interact with a dual arm robot platform through depth sensors, noise canceling microphones and GUIs. The methods have been implemented in a dual arm robot platform for an assembly case from the automotive industry."
]
} |
1604.06558 | 2952514704 | In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly, that can be integrated in a higher level assembly strategy. The system composed by two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive. | An effective way to create modular and complex assembly systems is to define a set of assembly primitives, and adopt a higher level logic that switches between them in order to obtain a desired assembly behaviour. For instance, in @cite_0 , petri nets are used to model transitions between different discrete steps of an assembly problem while an automatic assembly planner was proposed in @cite_22 , where each assembly task is executed by a suitable controller; switching between states is made when a set of limit conditions is reached. More recently, @cite_26 employs the concept of function blocks for robotic assembly. Each primitive assembly task is encapsulated in a block, every one of which with its own set of techniques for the successful assembly execution. A complete assembly task is divided in these primitives, and petri nets are used to model the transitions between different assembly operations. | {
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_22"
],
"mid": [
"2116113712",
"1977599843",
""
],
"abstract": [
"In this paper a real-time collision avoidance approach is presented for safe human-robot coexistence. The main contribution is a fast method to evaluate distances between the robot and possibly moving obstacles (including humans), based on the concept of depth space. The distances are used to generate repulsive vectors that are used to control the robot while executing a generic motion task. The repulsive vectors can also take advantage of an estimation of the obstacle velocity. In order to preserve the execution of a Cartesian task with a redundant manipulator, a simple collision avoidance algorithm has been implemented where different reaction behaviors are set up for the end-effector and for other control points along the robot structure. The complete collision avoidance framework, from perception of the environment to joint-level robot control, is presented for a 7-dof KUKA Light-Weight-Robot IV using the Microsoft Kinect sensor. Experimental results are reported for dynamic environments with obstacles and a human.",
"The dynamic market today requires manufacturing companies to possess high degree of adaptability and flexibility in order to deal with shop-floor uncertainties. Such uncertainties as missing tools, part shortage, job delay, rush-order and unavailability of resources, etc. happen more often in assembly operations. Targeting this problem, this research proposes a function block enabled approach to achieving adaptability and flexibility in assembly planning and control. In particular, this paper presents our latest development using a robotic mini assembly cell for testing and validation of a function block enabled system capable of assembly and robot trajectory planning and control. It is expected that a better adaptability can be achieved by this approach.",
""
]
} |
1604.06558 | 2952514704 | In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly, that can be integrated in a higher level assembly strategy. The system composed by two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive. | Robotic assembly with dual-arm manipulators has advantages and poses specific challenges; an in-depth review of the state-of-the-art for dual arm manipulation can be found in @cite_23 . The increased flexibility and larger independence from a structured environment comes at the cost of an increase in system complexity. In particular, interaction forces between the manipulators and grasped objects require careful consideration, due to the closed kinematic chain @cite_14 . While both single and dual arm robots can be used for applying planning strategies @cite_0 @cite_11 @cite_15 , a dual arm robot can take advantage from the cooperation between manipulators that allows for a reduction in task complexity @cite_24 . Several works detail master-slave implementations. When carrying a load, it is common to have one arm perform the motion control of the overall system, while its slave uses force feedback to ensure tracking of the object changing position @cite_10 @cite_9 . Master-slave relationships are also relevant in scenarios where the need to overcome a physical distance between an operator and its work subject is present @cite_19 @cite_8 . | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2167643747",
"",
"",
"2116113712",
"2072884097",
"2104370419",
"",
"",
"2147634730",
""
],
"abstract": [
"An approach to the control of constrained dynamic systems such as multiple arm systems, multifingered grippers, and walking vehicles is described. The basic philosophy is to utilize a minimal set of inputs to control the trajectory and the surplus input to control the constraint or interaction forces and moments in the closed chain. A dynamic control model for the closed chain is derived that is suitable for designing a controller in which the trajectory and the interaction forces and moments are explicitly controlled. Nonlinear feedback techniques derived from differential geometry are then applied to linearize and decouple the nonlinear model. These ideas are illustrated through a planar example in which two arms are used for cooperative manipulation. Results from a simulation are used to illustrate the efficacy of the method.",
"",
"",
"In this paper a real-time collision avoidance approach is presented for safe human-robot coexistence. The main contribution is a fast method to evaluate distances between the robot and possibly moving obstacles (including humans), based on the concept of depth space. The distances are used to generate repulsive vectors that are used to control the robot while executing a generic motion task. The repulsive vectors can also take advantage of an estimation of the obstacle velocity. In order to preserve the execution of a Cartesian task with a redundant manipulator, a simple collision avoidance algorithm has been implemented where different reaction behaviors are set up for the end-effector and for other control points along the robot structure. The complete collision avoidance framework, from perception of the environment to joint-level robot control, is presented for a 7-dof KUKA Light-Weight-Robot IV using the Microsoft Kinect sensor. Experimental results are reported for dynamic environments with obstacles and a human.",
"In this paper the problem of achieving a cooperative behavior in a dual-arm robot system is addressed. A control strategy is conceived in which one robot is position controlled and devoted to task execution, whereas a suitable compliance is conferred to the end effector of the other robot to cope with unavoidable misalignment between the two arms and task planning inaccuracies. A modular control structure is adopted that allows the selection of the proper operating mode for each robot, depending on the task requirements. The proposed approach is experimentally tested in two different tasks involving the two robots in the laboratory setup. First, a parts-mating task of peg-in-hole type is executed; the robot carrying the peg is position controlled, whereas the robot holding the hollow part is controlled to behave as a mechanical impedance. Then, a pressure-forming task is executed, in which a disk-shaped tool is required to align with a flat surface while exerting a desired pressure; in this case, the robot carrying the disk is position controlled, whereas the robot holding the surface is force controlled.",
"Minimally invasive surgery involves inserting special instruments into the body cavity through tiny incisions in order to perform surgical procedures. In this paper, the design of a robotic master-slave system for use in minimally invasive surgery is discussed. This system is capable of providing haptic feedback to the surgeon in all available degrees of freedom. System design as well as master and slave bilateral control and communication issues are discussed.",
"",
"",
"This paper proposes a control scheme which realizes cooperative motion to handle a target object by using two robot manipulators. The first robot, which is used as the leader, has only the robust position controller. The second robot, which is used as the follower, has the position-based force controller. Based on this very simple strategy, we can easily realize intelligent cooperative manipulation by completely decentralized control. We demonstrate its control performance by experiments of moving an object back and forth, and discuss future problems.",
""
]
} |
1604.06274 | 2335943576 | Learning and generating Chinese poems is a charming yet challenging task. Traditional approaches involve various language modeling and machine translation techniques, however, they perform not as well when generating poems with complex pattern constraints, for example Song iambics, a famous type of poems that involve variable-length sentences and strict rhythmic patterns. This paper applies the attention-based sequence-to-sequence model to generate Chinese Song iambics. Specifically, we encode the cue sentences by a bi-directional Long-Short Term Memory (LSTM) model and then predict the entire iambic with the information provided by the encoder, in the form of an attention-based LSTM that can regularize the generation process by the fine structure of the input cues. Several techniques are investigated to improve the model, including global context integration, hybrid style training, character vector initialization and adaptation. Both the automatic and subjective evaluation results show that our model indeed can learn the complex structural and rhythmic patterns of Song iambics, and the generation is rather successful. | The second approach is based on various genetic algorithms @cite_12 @cite_18 @cite_10 . For example, @cite_10 proposed to use a stochastic search approach to obtain the best matched sentences. | {
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_12"
],
"mid": [
"2126410802",
"2385617402",
"2146053064"
],
"abstract": [
"This article presents a series of experiments in automatically generating poetic texts. We confined our attention to the generation of texts which are syntactically well-formed, meet certain pre-specified patterns of metre and broadly convey some given meaning. Such aspects can be formally defined, thus avoiding the complications of imagery and interpretation that are central to assessing more free forms of verse. Our implemented system, McGONAGALL, applies the genetic algorithm to construct such texts. It uses a sophisticated linguistic formalism to represent its genomic information, from which can be computed the phenotypic information of both semantic representations and patterns of stress. The conducted experiments broadly indicated that relatively meaningful text could be produced if the constraints on metre were relaxed, and precise metric text was possible with loose semantic constraints, but it was difficult to produce text which was both semantically coherent and of high quality metrically.",
"Automatic generation of poetry has always been considered a hard nut in natural language generation. This paper reports some pioneering research on a possible genetic algorithm and its automatic generation of SONGCI. In light of the characteristics of Chinese ancient poetry, this paper designed the level and oblique tones-based coding method, the syntactic and semantic weighted function of fitness, the elitism and roulette-combined selection operator, and the partially mapped crossover operator and the heuristic mutation operator. As shown by tests, the system constructed on the basis of the computing model designed in this paper is basically capable of generating Chinese SONGCI with some aesthetic merit. This work represents progress in the field of Chinese poetry automatic generation.",
""
]
} |
1604.06274 | 2335943576 | Learning and generating Chinese poems is a charming yet challenging task. Traditional approaches involve various language modeling and machine translation techniques, however, they perform not as well when generating poems with complex pattern constraints, for example Song iambics, a famous type of poems that involve variable-length sentences and strict rhythmic patterns. This paper applies the attention-based sequence-to-sequence model to generate Chinese Song iambics. Specifically, we encode the cue sentences by a bi-directional Long-Short Term Memory (LSTM) model and then predict the entire iambic with the information provided by the encoder, in the form of an attention-based LSTM that can regularize the generation process by the fine structure of the input cues. Several techniques are investigated to improve the model, including global context integration, hybrid style training, character vector initialization and adaptation. Both the automatic and subjective evaluation results show that our model indeed can learn the complex structural and rhythmic patterns of Song iambics, and the generation is rather successful. | The third approach to poetry generation is by various statistical machine translation (SMT) methods. This approach was used by @cite_0 to generate Chinese couplets, a type of simple regulated verses with only two lines. @cite_11 extended this approach to generate Chinese quatrains (four-line Tang poems), where each line of the poem is generated by translation from the previous line. | {
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2059488546",
"12732426"
],
"abstract": [
"Part of the unique cultural heritage of China is the game of Chinese couplets (duilian). One person challenges the other person with a sentence (first sentence). The other person then replies with a sentence (second sentence) equal in length and word segmentation, in a way that corresponding words in the two sentences match each other by obeying certain constraints on semantic, syntactic, and lexical relatedness. This task is viewed as a difficult problem in AI and has not been explored in the research community. In this paper, we regard this task as a kind of machine translation process. We present a phrase-based SMT approach to generate the second sentence. First, the system takes as input the first sentence, and generates as output an N-best list of proposed second sentences, using a phrase-based SMT decoder. Then, a set of filters is used to remove candidates violating linguistic constraints. Finally, a Ranking SVM is applied to rerank the candidates. A comprehensive evaluation, using both human judgments and BLEU scores, has been conducted, and the results demonstrate that this approach is very successful.",
"This paper describes a statistical approach to generation of Chinese classical poetry and proposes a novel method to automatically evaluate poems. The system accepts a set of keywords representing the writing intents from a writer and generates sentences one by one to form a completed poem. A statistical machine translation (SMT) system is applied to generate new sentences, given the sentences generated previously. For each line of sentence a specific model specially trained for that line is used, as opposed to using a single model for all sentences. To enhance the coherence of sentences on every line, a coherence model using mutual information is applied to select candidates with better consistency with previous sentences. In addition, we demonstrate the effectiveness of the BLEU metric for evaluation with a novel method of generating diverse references."
]
} |
1604.06274 | 2335943576 | Learning and generating Chinese poems is a charming yet challenging task. Traditional approaches involve various language modeling and machine translation techniques; however, they do not perform as well when generating poems with complex pattern constraints, for example Song iambics, a famous type of poem that involves variable-length sentences and strict rhythmic patterns. This paper applies the attention-based sequence-to-sequence model to generate Chinese Song iambics. Specifically, we encode the cue sentences by a bi-directional Long Short-Term Memory (LSTM) model and then predict the entire iambic with the information provided by the encoder, in the form of an attention-based LSTM that can regularize the generation process by the fine structure of the input cues. Several techniques are investigated to improve the model, including global context integration, hybrid style training, character vector initialization and adaptation. Both the automatic and subjective evaluation results show that our model can indeed learn the complex structural and rhythmic patterns of Song iambics, and the generation is rather successful. | Our approach follows the RNN-based approach and is thus closely related to the work of @cite_1 . However, several important differences make our proposal novel. Firstly, we use the LSTM rather than the conventional RNN to obtain long-distance memory; secondly, we use the attention-based framework to enforce theme consistency; thirdly, our model is a simple sequence-to-sequence structure, which is much simpler than the model proposed by @cite_1 and can be easily extended to generate poems of various genres. In particular, we employ this model to generate Song iambics, which involve much more complex structures than Tang poems and have never been successfully generated by machines. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2115221470"
],
"abstract": [
"We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection (“what to say”) and surface realization (“how to say”) by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods."
]
} |
1604.05813 | 2339562681 | Building successful recommender systems requires uncovering the underlying dimensions that describe the properties of items as well as users' preferences toward them. In domains like clothing recommendation, explaining users' preferences requires modeling the visual appearance of the items in question. This makes recommendation especially challenging, due to both the complexity and subtlety of people's 'visual preferences,' as well as the scale and dimensionality of the data and features involved. Ultimately, a successful model should be capable of capturing considerable variance across different categories and styles, while still modeling the commonalities explained by 'global' structures in order to combat the sparsity (e.g. cold-start), variability, and scale of real-world datasets. Here, we address these challenges by building such structures to model the visual dimensions across different product categories. With a novel hierarchical embedding architecture, our method accounts for both high-level (colorfulness, darkness, etc.) and subtle (e.g. casualness) visual characteristics simultaneously. | Incorporating Side-Signals. Despite their success, MF-based methods suffer from issues due to the lack of observations to estimate the latent factors of new users and items. Making use of 'side-signals' on top of MF approaches can provide auxiliary information in such settings while still maintaining MF's strengths. Such signals include the content of the items themselves, ranging from the timbre and rhythm for music @cite_9 @cite_16 , textual reviews that encode dimensions of opinions @cite_10 @cite_11 @cite_15 , or social relations @cite_1 @cite_26 @cite_8 @cite_7 . Our work follows this line, though we focus on a specific type of signal---visual features---which brings unique challenges when modeling the complex dimensions that determine users' and items' interactions. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2135598826",
"2064754513",
"",
"2071133755",
"",
"2050096199",
"2041022395",
"2061873838",
""
],
"abstract": [
"Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users that have direct or indirect social relations with the given user. As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. Advancing previous work, we incorporate the mechanism of trust propagation into the model. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset that we have recently crawled from Flixster.com. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users.",
"A key element of the social networks on the internet such as Facebook and Flickr is that they encourage users to create connections between themselves, other users and objects. One important task that has been approached in the literature that deals with such data is to use social graphs to predict user behavior (e.g. joining a group of interest). More specifically, we study the cold-start problem, where users only participate in some relations, which we will call social relations, but not in the relation on which the predictions are made, which we will refer to as target relations. We propose a formalization of the problem and a principled approach to it based on multi-relational factorization techniques. Furthermore, we derive a principled feature extraction scheme from the social data to extract predictors for a classifier on the target relation. Experiments conducted on real world datasets show that our approach outperforms current methods.",
"",
"Current music recommender systems typically act in a greedy manner by recommending songs with the highest user ratings. Greedy recommendation, however, is suboptimal over the long term: it does not actively gather information on user preferences and fails to recommend novel songs that are potentially interesting. A successful recommender system must balance the needs to explore user preferences and to exploit this information for recommendation. This article presents a new approach to music recommendation by formulating this exploration-exploitation trade-off as a reinforcement learning task. To learn user preferences, it uses a Bayesian model that accounts for both audio content and the novelty of recommendations. A piecewise-linear approximation to the model and a variational inference algorithm help to speed up Bayesian inference. One additional benefit of our approach is a single unified model for both music recommendation and playlist generation. We demonstrate the strong potential of the proposed approach with simulation results and a user study.",
"",
"Most existing recommender systems focus on modeling the ratings while ignoring the abundant information embedded in the review text. In this paper, we propose a unified model that combines content-based filtering with collaborative filtering, harnessing the information of both ratings and reviews. We apply topic modeling techniques on the review text and align the topics with rating dimensions to improve prediction accuracy. With the information embedded in the review text, we can alleviate the cold-start problem. Furthermore, our model is able to learn latent topics that are interpretable. With these interpretable topics, we can explore the prior knowledge on items or users and recommend completely \"cold\"' items. Empirical study on 27 classes of real-life datasets show that our proposed model lead to significant improvement compared with strong baseline methods, especially for datasets which are extremely sparse where rating-only methods cannot make accurate predictions.",
"In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. To gain additional insights of its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.",
"In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.",
""
]
} |
1604.05813 | 2339562681 | Building successful recommender systems requires uncovering the underlying dimensions that describe the properties of items as well as users' preferences toward them. In domains like clothing recommendation, explaining users' preferences requires modeling the visual appearance of the items in question. This makes recommendation especially challenging, due to both the complexity and subtlety of people's 'visual preferences,' as well as the scale and dimensionality of the data and features involved. Ultimately, a successful model should be capable of capturing considerable variance across different categories and styles, while still modeling the commonalities explained by 'global' structures in order to combat the sparsity (e.g. cold-start), variability, and scale of real-world datasets. Here, we address these challenges by building such structures to model the visual dimensions across different product categories. With a novel hierarchical embedding architecture, our method accounts for both high-level (colorfulness, darkness, etc.) and subtle (e.g. casualness) visual characteristics simultaneously. | Visually-aware Recommender Systems. The importance of using visual signals (i.e., product images) in recommendation scenarios has been stressed by previous works, e.g. @cite_4 @cite_14 . State-of-the-art visually-aware recommender systems make use of high-level visual features of product images extracted from (pre-trained) Deep CNNs, on top of which a layer of parameterized transformation is learned to uncover those 'visual dimensions' that predict users' actions (e.g. purchases) most accurately. Modeling this transformation layer is non-trivial, as it is crucial to maximally capture the wide variance across different categories of items without introducing prohibitively many parameters into the model. | {
"cite_N": [
"@cite_14",
"@cite_4"
],
"mid": [
"2139095291",
"2171677167"
],
"abstract": [
"In this paper we study the importance of image based features on the click-through rate (CTR) in the context of a large scale product search engine. Typically product search engines use text based features in their ranking function. We present a novel idea of using image based features, common in the photography literature, in addition to text based features. We used a stochastic gradient boosting based regression model to learn relationships between features and CTR. Our results indicate statistically significant correlations between the image features and CTR. We also see improvements to NDCG and mean standard regression.",
"In online peer-to-peer commerce places where physical examination of the goods is infeasible, textual descriptions, images of the products, reputation of the participants, play key roles. Visual image is a powerful channel to convey crucial information towards e-shoppers and influence their choice. In this paper, we investigate a well-known online marketplace where over millions of products change hands and most are described with the help of one or more images. We present a systematic data mining and knowledge discovery approach that aims to quantitatively dissect the role of images in e-commerce in great detail. Our goal is two-fold. First, we aim to get a thorough understanding of impact of images across various dimensions: product categories, user segments, conversion rate. We present quantitative evaluation of the influence of images and show how to leverage different image aspects, such as quantity and quality, to effectively raise sale. Second, we study interaction of image data with other selling dimensions by jointly modeling them with user behavior data. Results suggest that \"watch\" behavior encodes complex signals combining both attention and hesitation from buyer, in which image still holds an important role when compared to other selling variables, especially for products for which appearance is important. We conclude on how these findings can benefit sellers in a high competitive online e-commerce market."
]
} |
1604.05813 | 2339562681 | Building successful recommender systems requires uncovering the underlying dimensions that describe the properties of items as well as users' preferences toward them. In domains like clothing recommendation, explaining users' preferences requires modeling the visual appearance of the items in question. This makes recommendation especially challenging, due to both the complexity and subtlety of people's 'visual preferences,' as well as the scale and dimensionality of the data and features involved. Ultimately, a successful model should be capable of capturing considerable variance across different categories and styles, while still modeling the commonalities explained by 'global' structures in order to combat the sparsity (e.g. cold-start), variability, and scale of real-world datasets. Here, we address these challenges by building such structures to model the visual dimensions across different product categories. With a novel hierarchical embedding architecture, our method accounts for both high-level (colorfulness, darkness, etc.) and subtle (e.g. casualness) visual characteristics simultaneously. | A recent line of work @cite_21 @cite_6 @cite_17 makes use of a parameterized embedding matrix to transform items from the CNN feature space into a low-dimensional 'visual space' whose dimensions (called 'visual dimensions') are those that maximally explain the variance in users' decisions. While a simple parameterization, this technique successfully captures the common features among different categories, e.g. what characteristics make people consider a t-shirt or a shoe to be 'colorful.' But it is limited in its ability to model subtler variations, since each item is modeled in terms of a low-dimensional, global embedding. Here, we focus on building flexible hierarchical embedding structures that are capable of efficiently capturing both the common features and subtle variations across and within categories. | {
"cite_N": [
"@cite_21",
"@cite_6",
"@cite_17"
],
"mid": [
"2963655167",
"2294501066",
"2963696605"
],
"abstract": [
"Modern recommender systems model people and items by discovering or 'teasing apart' the underlying dimensions that encode the properties of items and users' preferences toward them. Critically, such dimensions are uncovered based on user feedback, often in implicit form (such as purchase histories, browsing logs, etc.); in addition, some recommender systems make use of side information, such as product attributes, temporal information, or review text. However one important feature that is typically ignored by existing personalized recommendation and ranking methods is the visual appearance of the items being considered. In this paper we propose a scalable factorization model to incorporate visual signals into predictors of people's opinions, which we apply to a selection of large, real-world datasets. We make use of visual features extracted from product images using (pre-trained) deep networks, on top of which we learn an additional layer that uncovers the visual dimensions that best explain the variation in people's feedback. This not only leads to significantly more accurate personalized ranking methods, but also helps to alleviate cold start issues, and qualitatively to analyze the visual dimensions that influence people's opinions.",
"Building a successful recommender system depends on understanding both the dimensions of people's preferences as well as their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underlying datasets. In this paper we build novel models for the One-Class Collaborative Filtering setting, where our goal is to estimate users' fashion-aware personalized ranking functions based on their past feedback. To uncover the complex and evolving visual factors that people consider when evaluating products, our method combines high-level visual features extracted from a deep convolutional neural network, users' past feedback, as well as evolving trends within the community. Experimentally we evaluate our method on two large real-world datasets from Amazon.com, where we show it to outperform state-of-the-art personalized ranking measures, and also use it to visualize the high-level fashion trends across the 11-year span of our dataset.",
"To build a fashion recommendation system, we need to help users retrieve fashionable items that are visually similar to a particular query, for reasons ranging from searching alternatives (i.e., substitutes), to generating stylish outfits that are visually consistent, among other applications. In domains like clothing and accessories, such considerations are particularly paramount as the visual appearance of items is a critical feature that guides users' decisions. However, existing systems like Amazon and eBay still rely mainly on keyword search and recommending loosely consistent items (e.g. based on co-purchasing or browsing data), without an interface that makes use of visual information to serve the above needs. In this paper, we attempt to fill this gap by designing and implementing an image-based query system, called Fashionista, which provides a graphical interface to help users efficiently explore those items that are not only visually similar to a given query, but which are also fashionable, as determined by visually-aware recommendation approaches. Methodologically, Fashionista learns a low-dimensional visual space as well as the evolution of fashion trends from large corpora of binary feedback data such as purchase histories of Women's Clothing & Accessories from Amazon, which we use for this demonstration."
]
} |
1604.05813 | 2339562681 | Building successful recommender systems requires uncovering the underlying dimensions that describe the properties of items as well as users' preferences toward them. In domains like clothing recommendation, explaining users' preferences requires modeling the visual appearance of the items in question. This makes recommendation especially challenging, due to both the complexity and subtlety of people's 'visual preferences,' as well as the scale and dimensionality of the data and features involved. Ultimately, a successful model should be capable of capturing considerable variance across different categories and styles, while still modeling the commonalities explained by 'global' structures in order to combat the sparsity (e.g. cold-start), variability, and scale of real-world datasets. Here, we address these challenges by building such structures to model the visual dimensions across different product categories. With a novel hierarchical embedding architecture, our method accounts for both high-level (colorfulness, darkness, etc.) and subtle (e.g. casualness) visual characteristics simultaneously. | Taxonomy-aware Recommender Systems. There has been some effort to investigate 'taxonomy-aware' recommendation models, including earlier works extending neighborhood-based methods (e.g. @cite_5 @cite_22 ) and more recent endeavors to extend MF using either explicit (e.g. @cite_13 ) or implicit (e.g. @cite_12 @cite_23 @cite_24 ) taxonomies. This work is related to ours, though it does not consider the visual appearance of products as we do here. | {
"cite_N": [
"@cite_22",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2247675004",
"",
"2017894137",
"2155736969",
"1970972959"
],
"abstract": [
"",
"Items in real-world recommender systems exhibit certain hierarchical structures. Similarly, user preferences also present hierarchical structures. Recent studies show that incorporating the explicit hierarchical structures of items or user preferences can improve the performance of recommender systems. However, explicit hierarchical structures are usually unavailable, especially those of user preferences. Thus, there's a gap between the importance of hierarchical structures and their availability. In this paper, we investigate the problem of exploring the implicit hierarchical structures for recommender systems when they are not explicitly available. We propose a novel recommendation framework HSR to bridge the gap, which enables us to capture the implicit hierarchical structures of users and items simultaneously. Experimental results on two real world datasets demonstrate the effectiveness of the proposed framework.",
"",
"Recommender systems have been subject to an enormous rise in popularity and research interest over the last ten years. At the same time, very large taxonomies for product classification are becoming increasingly prominent among e-commerce systems for diverse domains, rendering detailed machine-readable content descriptions feasible. Amazon.com makes use of an entire plethora of hand-crafted taxonomies classifying books, movies, apparel, and various other goods. We exploit such taxonomic background knowledge for the computation of personalized recommendations. Hereby, relationships between super-concepts and sub-concepts constitute an important cornerstone of our novel approach, providing powerful inference opportunities for profile generation based upon the classification of products that customers have chosen. Ample empirical analysis, both offline and online, demonstrates our proposal's superiority over common existing approaches when user information is sparse and implicit ratings prevail.",
"We describe a latent-factor-model-based approach to the Track 2 task of KDD Cup 2011, which required learning to discriminate between highly rated and unrated items from a large dataset of music ratings. We take the pairwise ranking route, training our models to rank the highly rated items above the unrated items that are sampled from the same distribution. Using the item relationship information from the provided taxonomy to constrain item representations results in improved predictive performance. Providing the model with features summarizing the user's rating history as it relates to the item being ranked leads to further gains, producing the best single model result on Track 2.",
"Personalized recommender systems based on latent factor models are widely used to increase sales in e-commerce. Such systems use the past behavior of users to recommend new items that are likely to be of interest to them. However, latent factor model suffer from sparse user-item interaction in online shopping data: for a large portion of items that do not have sufficient purchase records, their latent factors cannot be estimated accurately. In this paper, we propose a novel approach that automatically discovers the taxonomies from online shopping data and jointly learns a taxonomy-based recommendation system. Out model is non-parametric and can learn the taxonomy structure automatically from the data. Since the taxonomy allows purchase data to be shared between items, it effectively improves the accuracy of recommending tail items by sharing strength with the more frequent items. Experiments on a large-scale online shopping dataset confirm that our proposed model improves significantly over state-of-the-art latent factor models. Moreover, our model generates high-quality and human readable taxonomies. Finally, using the algorithm-generated taxonomy, our model even outperforms latent factor models based on the human-induced taxonomy, thus alleviating the need for costly manual taxonomy generation."
]
} |
1604.05803 | 2337045339 | Network Function Virtualization (NFV) enables mobile operators to virtualize their network entities as Virtualized Network Functions (VNFs), offering fine-grained on-demand network capabilities. VNFs can be dynamically scaled in/out to meet performance requirements and other dynamic behaviors. However, designing the auto-scaling algorithm for desired characteristics with low operation cost and low latency, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a VNF Dynamic Auto Scaling Algorithm (DASA) considering the tradeoff between performance and operation cost. We develop an analytical model to quantify the tradeoff and validate the analysis through extensive simulations. The NFV-enabled Evolved Packet Core (EPC) is modeled as a queueing model, while legacy network equipment is treated as a reserved block of servers; VNF instances are powered on and off according to the number of job requests present. The results show that the DASA can significantly reduce operation cost given the latency upper-bound. Moreover, the models provide a quick way to evaluate the cost-performance tradeoff and system design without wide deployment, which can save cost and time. Index Terms: Auto Scaling Algorithm, Modeling and Analysis, Network Function Virtualization, 5G, Cloud Networks, Virtualized EPC | The basic idea behind EMA, ARMA, and ARIMA is the moving average, where the most recent input data within a moving window are used to predict the next input data. Specifically, in @cite_21 , the authors proposed an EMA-based scheme to predict the CPU load. The scheme was implemented in a Domain Name System (DNS) server, and evaluation results showed that the capacities of the servers are well utilized. The authors of @cite_22 introduced a novel prediction-based dynamic resource allocation algorithm to scale video transcoding service in the cloud. They used a two-step prediction to predict the load, resulting in a reduced number of required VMs. | {
"cite_N": [
"@cite_21",
"@cite_22"
],
"mid": [
"2144628693",
"2052294061"
],
"abstract": [
"Cloud computing allows business customers to scale up and down their resource usage based on needs. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and support green computing by optimizing the number of servers in use. We introduce the concept of \"skewness” to measure the unevenness in the multidimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy used. Trace driven simulation and experiment results demonstrate that our algorithm achieves good performance.",
"This paper presents prediction-based dynamic resource allocation algorithms to scale video transcoding service on a given Infrastructure as a Service cloud. The proposed algorithms provide mechanisms for allocation and deallocation of virtual machines (VMs) to a cluster of video transcoding servers in a horizontal fashion. We use a two-step load prediction method, which allows proactive resource allocation with high prediction accuracy under real-time constraints. For cost-efficiency, our work supports transcoding of multiple on-demand video streams concurrently on a single VM, resulting in a reduced number of required VMs. We use video segmentation at group of pictures level, which splits video streams into smaller segments that can be transcoded independently of one another. The approach is demonstrated in a discrete-event simulation and an experimental evaluation involving two different load patterns."
]
} |
1604.05803 | 2337045339 | Network Function Virtualization (NFV) enables mobile operators to virtualize their network entities as Virtualized Network Functions (VNFs), offering fine-grained on-demand network capabilities. VNFs can be dynamically scale-in out to meet the performance desire and other dynamic behaviors. However, designing the auto-scaling algorithm for desired characteristics with low operation cost and low latency, while considering the existing capacity of legacy network equipment, is not a trivial task. In this paper, we propose a VNF Dynamic Auto Scaling Algorithm (DASA) considering the tradeoff between performance and operation cost. We develop an analytical model to quantify the tradeoff and validate the analysis through extensive simulations. The NFV enabled Evolved Packet Core (EPC) is modeled as queueing model while legacy network equipment are considered as reserved a block of servers; and VNF instances are powered on and off according to the number of job requests present. The results show that the DASA can significantly reduce operation cost given the latency upper-bound. Moreover, the models provide a quick way to evaluate the cost-performance tradeoff and system design without wide deployment, which can save cost and time. Index Terms Auto Scaling Algorithm, Modeling and Analysis, Network Function Virtualization, 5G, Cloud Networks, Virtualized EPC | ARMA augments the moving average model with an autoregressive (AR) component. A resource allocation algorithm based on the ARMA model was reported in @cite_5 , where empirical results showed significant benefits to both cloud users and cloud service providers. In @cite_0 , the authors presented a load forecasting model based on ARMA, which achieved around 91% prediction accuracy. Machine learning approaches are also used for the design of cloud auto-scaling algorithms @cite_31 @cite_37 @cite_20 . The authors of @cite_31 proposed a neural network and linear regression based auto-scaling algorithm.
The author of @cite_37 implemented a Bayesian Network based cloud auto-scaling algorithm. In @cite_20 , the authors evaluated three machine learning approaches: linear regression, neural network, and Support Vector Machine (SVM). Their results showed that the SVM-based scheme outperforms the other two. | {
"cite_N": [
"@cite_37",
"@cite_0",
"@cite_5",
"@cite_31",
"@cite_20"
],
"mid": [
"1988620903",
"2132319142",
"2115743854",
"2119438912",
"2060192389"
],
"abstract": [
"The recent surge in the popularity and usage of Cloud Computing services by both the enterprise and individual consumers has necessitated efficient and proactive management of data center resources which host services having varied characteristics. One of the major issues concerning both the cloud service providers and consumers is the automatic scalability of resources (i.e., compute, storage and bandwidth) in response to the highly unpredictable demands. To this end, an opportunity exists to harness the predictive and diagnostic capabilities of machine learning approaches to incorporate dynamic scaling up and scaling down of resources without violating the Service Level Agreements (SLA) and simultaneously ensuring adequate revenue to the providers. This paper proposes, implements and evaluates a Bayesian Networks based predictive modeling framework to provide for an autonomic scaling of utility computing resources in the Cloud Computing scenario. In essence, the BN-based model captures the historical behavior of the system involving various performance metrics (indicators) and predicts the desired unknown metric (e.g. SLA parameter). Initial simulated experiments involving random demand scenarios provide insights into the feasibility and applicability of the proposed approach for improving the management of present data center facilities.",
"Workload variations on Internet platforms such as YouTube, Flickr, LastFM require novel approaches to dynamic resource provisioning in order to meet QoS requirements, while reducing the Total Cost of Ownership (TCO) of the infrastructures. The economy of scale promise of cloud computing is a great opportunity to approach this problem, by developing elastic large scale server infrastructures. However, a proactive approach to dynamic resource provisioning requires prediction models forecasting future load patterns. On the other hand, unexpected volume and data spikes require reactive provisioning for serving unexpected surges in workloads. When workload can not be predicted, adequate data grouping and placement algorithms may facilitate agile scaling up and down of an infrastructure. In this paper, we analyze a dynamic workload of an on-line music portal and present an elastic Web infrastructure that adapts to workload variations by dynamically scaling up and down servers. The workload is predicted by an autoregressive model capturing trends and seasonal patterns. Further, for enhancing data locality, we propose a predictive data grouping based on the history of content access of a user community. Finally, in order to facilitate agile elasticity, we present a data placement based on workload and access pattern prediction. The experimental results demonstrate that our forecasting model predicts workload with a high precision. Further, the predictive data grouping and placement methods provide high locality, load balance and high utilization of resources, allowing a server infrastructure to scale up and down depending on workload.",
"Large-scale component-based enterprise applications that leverage Cloud resources expect Quality of Service (QoS) guarantees in accordance with service level agreements between the customer and service providers. In the context of Cloud computing, auto scaling mechanisms hold the promise of assuring QoS properties to the applications while simultaneously making efficient use of resources and keeping operational costs low for the service providers. Despite the perceived advantages of auto scaling, realizing the full potential of auto scaling is hard due to multiple challenges stemming from the need to precisely estimate resource usage in the face of significant variability in client workload patterns. This paper makes three contributions to overcome the general lack of effective techniques for workload forecasting and optimal resource allocation. First, it discusses the challenges involved in auto scaling in the cloud. Second, it develops a model-predictive algorithm for workload forecasting that is used for resource auto scaling. Finally, empirical results are provided that demonstrate that resources can be allocated and deallocated by our algorithm in a way that satisfies both the application QoS while keeping operational costs low.",
"Cloud computing allows dynamic resource scaling for enterprise online transaction systems, one of the key characteristics that differentiates the cloud from the traditional computing paradigm. However, initializing a new virtual instance in a cloud is not instantaneous; cloud hosting platforms introduce several minutes delay in the hardware resource allocation. In this paper, we develop prediction-based resource measurement and provisioning strategies using Neural Network and Linear Regression to satisfy upcoming resource demands. Experimental results demonstrate that the proposed technique offers more adaptive resource management for applications hosted in the cloud environment, an important mechanism to achieve on-demand resource allocation in the cloud.",
"In order to meet Service Level Agreement (SLA) requirements, efficient scaling of Virtual Machine (VM) resources must be provisioned few minutes ahead due to the VM boot-up time. One way to proactively provision resources is by predicting future resource demands. In this research, we have developed and evaluated cloud client prediction models for TPC-W benchmark web application using three machine learning techniques: Support Vector Machine (SVM), Neural Networks (NN) and Linear Regression (LR). We included the SLA metrics for Response Time and Throughput to the prediction model with the aim of providing the client with a more robust scaling decision choice. Our results show that Support Vector Machine provides the best prediction model."
]
} |
1604.05848 | 2333052987 | We adopt convolutional neural networks (CNNs) to be our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) are learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large margin-based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve the state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks. | The first direction focuses on extracting better features for classifying pixels/superpixels. Previously, low-level and mid-level hand-crafted features were designed to capture different image statistics. These features usually lack discriminative power and suffer from high dimensionality, which limits the complexity of the full labeling system. Recently, machine learning techniques have been commonly used to learn discriminative features for various computer vision tasks @cite_49 @cite_1 @cite_52 @cite_25 . Accordingly, @cite_49 fed a convolutional neural network with multi-scale raw image patches and presented very promising results on real-world scene labeling benchmarks. Furthermore, @cite_1 adopted a recurrent CNN to process large-size image patches.
@cite_12 learned a more compact random forest by substituting the random split function with a stronger neural network. @cite_26 @cite_39 extracted features from different scopes of image regions and then concatenated them to yield a context-aware representation; regional and global context is thereby encoded in the local representation. In their works, local disambiguation is achieved by augmenting the input context. In contrast, we leverage the global scene constraint to mitigate the local ambiguity. | {
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_52",
"@cite_39",
"@cite_49",
"@cite_25",
"@cite_12"
],
"mid": [
"2090997445",
"",
"2066757459",
"1938976761",
"2022508996",
"1968752118",
"2059424674"
],
"abstract": [
"We present an attempt to improve the performance of multi-class image segmentation systems based on a multilevel description of segments. The multi-class image segmentation system used in this paper marks the segments in an image, describes the segments via multilevel feature vectors and passes the vectors to a multi-class object classifier. The focus of this paper is on the segment description section. We first propose a robust, scale-invariant texture feature set, named directional differences (DDs). This feature is designed by investigating the flaws of conventional texture features. The advantages of DDs are justified both analytically and experimentally. We have conducted several experiments on the performance of our multi-class image segmentation system to compare DDs with some well-known texture features. Experimental results show that DDs present about 8% higher classification accuracy. Feature reduction experiments also show that in a combined feature space, DDs remain in the list of most effective features even for small feature vector sizes. To describe a segment fully, we introduce a multilevel strategy called different levels of feature extraction (DLFE) that enables the system to include the semantic relations and contextual information in the features. This information is very effective especially for highly occluded objects. DLFE concatenates the features related to different views of every segment. Experimental results show that more than 4% improvement in multi-class image segmentation accuracy is achieved. Using the semantic information in the classifier section adds another 2% improvement to the accuracy of the system.",
"",
"In this paper, we propose an approach to learn hierarchical features for visual object tracking. First, we offline learn features robust to diverse motion patterns from auxiliary video sequences. The hierarchical features are learned via a two-layer convolutional neural network. Embedding the temporal slowness constraint in the stacked architecture makes the learned features robust to complicated motion transformations, which is important for visual object tracking. Then, given a target video sequence, we propose a domain adaptation module to online adapt the pre-learned features according to the specific target object. The adaptation is conducted in both layers of the deep feature learning module so as to include appearance information of the specific target object. As a result, the learned hierarchical features can be robust to both complicated motion transformations and appearance changes of target objects. We integrate our feature learning algorithm into three tracking methods. Experimental results demonstrate that significant improvement can be achieved using our learned hierarchical features, especially on video sequences with complicated motion transformations.",
"We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6 average accuracy on the PASCAL VOC 2012 test set.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"In order to encode the class correlation and class specific information in image representation, we propose a new local feature learning approach named Deep Discriminative and Shareable Feature Learning (DDSFL). DDSFL aims to hierarchically learn feature transformation filter banks to transform raw pixel image patches to features. The learned filter banks are expected to (1) encode common visual patterns of a flexible number of categories; (2) encode discriminative information; and (3) hierarchically extract patterns at different visual levels. Particularly, in each single layer of DDSFL, shareable filters are jointly learned for classes which share the similar patterns. Discriminative power of the filters is achieved by enforcing the features from the same category to be close, while features from different categories to be far away from each other. Furthermore, we also propose two exemplar selection methods to iteratively select training data for more efficient and effective learning. Based on the experimental results, DDSFL can achieve very promising performance, and it also shows great complementary effect to the state-of-the-art Caffe features. HighlightsWe propose to encode shareable and discriminative information in feature learning.Two exemplar selection methods are proposed to select effective training data.We build a hierarchical learning scheme to capture multiple visual level information.Our DDSFL outperforms most of the existing features.DDSFL features show great complementary effect to Caffe features.",
"In this work we present Neural Decision Forests, a novel approach to jointly tackle data representation- and discriminative learning within randomized decision trees. Recent advances of deep learning architectures demonstrate the power of embedding representation learning within the classifier--An idea that is intuitively supported by the hierarchical nature of the decision forest model where the input space is typically left unchanged during training and testing. We bridge this gap by introducing randomized Multi- Layer Perceptrons (rMLP) as new split nodes which are capable of learning non-linear, data-specific representations and taking advantage of them by finding optimal predictions for the emerging child nodes. To prevent overfitting, we i) randomly select the image data fed to the input layer, ii) automatically adapt the rMLP topology to meet the complexity of the data arriving at the node and iii) introduce an l1-norm based regularization that additionally sparsifies the network. The key findings in our experiments on three different semantic image labelling datasets are consistently improved results and significantly compressed trees compared to conventional classification trees."
]
} |
1604.05848 | 2333052987 | We adopt convolutional neural networks (CNNs) to be our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) are learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large margin-based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve the state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks. | Another line of work focuses on explicit dependency modeling among labels, which is usually formulated as a structure learning problem. @cite_29 formulated the unary and pairwise features in a 2nd-order sparse Conditional Random Field (CRF) graphical model. @cite_24 , @cite_17 and @cite_2 built fully connected graphs to enforce higher-order labeling coherence. @cite_33 , @cite_34 and @cite_40 modeled higher-order relations by treating a patch/superpixel as a clique. @cite_7 defined a multi-scale CRF that captures contextual relationships ranging from local to global granularity. @cite_35 formulates the CRF as a neural network, so its inference is equivalent to applying the same neural network recurrently until a fixed point (convergence) is reached.
Recently, @cite_13 @cite_28 adopt recurrent neural networks (RNNs) to propagate local contextual information, and they show superiority over PGMs in their applicability to large-scale scene parsing tasks. Our work is related to this branch of works, but approaches the problem from a different angle. The potentials in these works are usually modeled parametrically, so extensive effort is needed to learn their parameters, whereas our global energy term can be estimated very efficiently in a non-parametric framework. | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_24",
"@cite_40",
"@cite_2",
"@cite_34",
"@cite_13",
"@cite_17"
],
"mid": [
"2124592697",
"2116877738",
"2095844239",
"2204696980",
"1528789833",
"2053973158",
"2252984343",
"1923697677",
"2027327099",
"1512976292",
"2050159143"
],
"abstract": [
"Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.",
"This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation algorithms. These potentials enforce label consistency in image regions and can be seen as a strict generalization of the commonly used pairwise contrast sensitive smoothness potentials. The higher order potential functions used in our framework take the form of the robust Pn model. This enables the use of powerful graph cut based move making algorithms for performing inference in the framework [14 ]. We test our method on the problem of multi-class object segmentation by augmenting the conventional CRF used for object segmentation with higher order potentials defined on image regions. Experiments on challenging data sets show that integration of higher order potentials quantitatively and qualitatively improves results leading to much better definition of object boundaries. We believe that this method can be used to yield similar improvements for many other labelling problems.",
"We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.",
"In image labeling, local representations for image units are usually generated from their surrounding image patches, thus long-range contextual information is not effectively encoded. In this paper, we introduce recurrent neural networks (RNNs) to address this issue. Specifically, directed acyclic graph RNNs (DAG-RNNs) are proposed to process DAG-structured images, which enables the network to model long-range semantic dependencies among image units. Our DAG-RNNs are capable of tremendously enhancing the discriminative power of local representations, which significantly benefits the local classification. Meanwhile, we propose a novel class weighting function that attends to rare classes, which phenomenally boosts the recognition accuracy for non-frequent classes. Integrating with convolution and deconvolution layers, our DAG-RNNs achieve new state-of-the-art results on the challenging SiftFlow, CamVid and Barcelona benchmarks.",
"This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow).",
"This paper addresses the problem of assigning object class labels to image pixels. Following recent holistic formulations, we cast scene labeling as inference of a conditional random field (CRF) grounded onto superpixels. The CRF inference is specified as quadratic program (QP) with mutual exclusion (mutex) constraints on class label assignments. The QP is solved using a beam search (BS), which is well-suited for scene labeling, because it explicitly accounts for spatial extents of objects, conforms to inconsistency constraints from domain knowledge, and has low computational costs. BS gradually builds a search tree whose nodes correspond to candidate scene labelings. Successor nodes are repeatedly generated from a select set of their parent nodes until convergence. We prove that our BS efficiently maximizes the QP objective of CRF inference. Effectiveness of our BS for scene labeling is evaluated on the benchmark MSRC, Stanford Backgroud, PASCAL VOC 2009 and 2010 datasets.",
"Models defined using higher-order potentials are becoming increasingly popular in computer vision. However, the exact representation of a general higher-order potential defined over many variables is computationally unfeasible. This has led prior works to adopt parametric potentials that can be compactly represented. This paper proposes a non-parametric higher-order model for image labeling problems that uses a patch-based representation of its potentials. We use the transformation scheme of [11, 25] to convert the higher-order potentials to a pair-wise form that can be handled using traditional inference algorithms. This representation is able to capture structure, geometrical and topological information of labels from training data and to provide more precise segmentations. Other tasks such as image denoising and reconstruction are also possible. We evaluate our method on denoising and segmentation problems with synthetic and real images.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"In this paper we propose a simple and effective way to integrate structural information in random forests for semantic image labelling. By structural information we refer to the inherently available, topological distribution of object classes in a given image. Different object class labels will not be randomly distributed over an image but usually form coherently labelled regions. In this work we provide a way to incorporate this topological information in the popular random forest framework for performing low-level, unary classification. Our paper has several contributions: First, we show how random forests can be augmented with structured label information. In the second part, we introduce a novel data splitting function that exploits the joint distributions observed in the structured label space for learning typical label transitions between object classes. Finally, we provide two possibilities for integrating the structured output predictions into concise, semantic labellings. In our experiments on the challenging MSRC and CamVid databases, we compare our method to standard random forest and conditional random field classification results.",
"We adopt Convolutional Neural Networks (CNN) to learn discriminative features for local patch classification. We further introduce quaddirectional 2D Recurrent Neural Networks to model the long range dependencies among pixels. Our quaddirectional 2D-RNN is able to embed the global image context into the compact local representation, which significantly enhance their discriminative power. Our experiments demonstrate that the integration of CNN and quaddirectional 2D-RNN achieves very promising results which are comparable to state-of-the-art on real-world image labeling benchmarks.",
"The Conditional Random Field (CRF) is a popular tool for object-based image segmentation. CRFs used in practice typically have edges only between adjacent image pixels. To represent object relationship statistics beyond adjacent pixels, prior work either represents only weak spatial information using the segmented regions, or encodes only global object co-occurrences. In this paper, we propose a unified model that augments the pixel-wise CRFs to capture object spatial relationships. To this end, we use a fully connected CRF, which has an edge for each pair of pixels. The edge potentials are defined to capture the spatial information and preserve the object boundaries at the same time. Traditional inference methods, such as belief propagation and graph cuts, are impractical in such a case where billions of edges are defined. Under only one assumption that the spatial relationships among different objects only depend on their relative positions (spatially stationary), we develop an efficient inference algorithm that converges in a few seconds on a standard resolution image, where belief propagation takes more than one hour for a single iteration."
]
} |
1604.05848 | 2333052987 | We adopt convolutional neural networks (CNNs) to be our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) are learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large margin-based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve the state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks. | Recently, non-parametric label transfer methods @cite_9 @cite_51 @cite_41 @cite_18 @cite_42 @cite_5 have gained popularity due to their outstanding performance and scalability to large-scale data. They usually estimate the class likelihood of an image unit from globally similar images. In a nutshell, global scene features are first utilized to retrieve relevant images, whose label maps are then leveraged to estimate the class likelihood of image units. The pioneering label transfer work @cite_9 transformed the RGB image into a SIFT @cite_8 image, which was used to seek correspondences over pixels. Then, an energy function was defined over the pixel correspondences, and the label prediction maps were obtained by minimizing the energy.
The Superparsing system @cite_51 performed label transfer over image superpixels. @cite_41 learned adaptive weights for each low-level feature, resulting in a better nearest-neighbor search. @cite_22 @cite_11 built a graph over dense image patches and superpixels to achieve label transfer. We adopt this framework to evaluate our global energy term. In comparison with these works, which are based on hand-crafted features, we use learned CNN features, which are more compact and discriminative. We expect our features to benefit their systems in terms of accuracy and efficiency as well. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_41",
"@cite_9",
"@cite_42",
"@cite_5",
"@cite_51",
"@cite_11"
],
"mid": [
"2154083146",
"190007845",
"2151103935",
"2051179318",
"2125849446",
"38955421",
"2051458493",
"1542723449",
"1597779259"
],
"abstract": [
"This paper presents a nonparametric approach to semantic parsing using small patches and simple gradient, color and location features. We learn the relevance of individual feature channels at test time using a locally adaptive distance metric. To further improve the accuracy of the nonparametric approach, we examine the importance of the retrieval set used to compute the nearest neighbours using a novel semantic descriptor to retrieve better candidates. The approach is validated by experiments on several datasets used for semantic parsing demonstrating the superiority of the method compared to the state of art approaches.",
"We address the problem of semantic segmentation, or multi-class pixel labeling, by constructing a graph of dense overlapping patch correspondences across large image sets. We then transfer annotations from labeled images to unlabeled images using the established patch correspondences. Unlike previous approaches to non-parametric label transfer our approach does not require an initial image retrieval step. Moreover, we operate on a graph for computing mappings between images, which avoids the need for exhaustive pairwise comparisons. Consequently, we can leverage offline computation to enhance performance at test time. We conduct extensive experiments to analyze different variants of our graph construction algorithm and evaluate multi-class pixel labeling performance on several challenging datasets.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"This paper proposes a non-parametric approach to scene parsing inspired by the work of Tighe and Lazebnik [22]. In their approach, a simple kNN scheme with multiple descriptor types is used to classify super-pixels. We add two novel mechanisms: (i) a principled and efficient method for learning per-descriptor weights that minimizes classification error, and (ii) a context-driven adaptation of the training set used for each query, which conditions on common classes (which are relatively easy to classify) to improve performance on rare ones. The first technique helps to remove extraneous descriptors that result from the imperfect distance metrics representations of each super-pixel. The second contribution re-balances the class frequencies, away from the highly-skewed distribution found in real-world scenes. Both methods give a significant performance boost over [22] and the overall system achieves state-of-the-art performance on the SIFT-Flow dataset.",
"In this paper we propose a novel nonparametric approach for object recognition and scene parsing using dense scene alignment. Given an input image, we retrieve its best matches from a large database with annotated images using our modified, coarse-to-fine SIFT flow algorithm that aligns the structures within two images. Based on the dense scene correspondence obtained from the SIFT flow, our system warps the existing annotations, and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on a challenging database. Compared to existing object recognition approaches that require training for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval alignment procedure.",
"Scene parsing is the problem of assigning a semantic label to every pixel in an image. Though an ambitious task, impressive advances have been made in recent years, in particular in scalable nonparametric techniques suitable for open-universe databases. This paper presents the CollageParsing algorithm for scalable nonparametric scene parsing. In contrast to common practice in recent nonparametric approaches, CollageParsing reasons about mid-level windows that are designed to capture entire objects, instead of low-level superpixels that tend to fragment objects. On a standard benchmark consisting of outdoor scenes from the LabelMe database, CollageParsing achieves state-of-the-art nonparametric scene parsing results with 7 to 11% higher average per-class accuracy than recent nonparametric approaches.",
"This paper presents a scalable scene parsing algorithm based on image retrieval and superpixel matching. We focus on rare object classes, which play an important role in achieving richer semantic understanding of visual scenes, compared to common background classes. Towards this end, we make two novel contributions: rare class expansion and semantic context description. First, considering the long-tailed nature of the label distribution, we expand the retrieval set by rare class exemplars and thus achieve more balanced superpixel classification results. Second, we incorporate both global and local semantic context information through a feedback based mechanism to refine image retrieval and superpixel matching. Results on the SIFTflow and LMSun datasets show the superior performance of our algorithm, especially on the rare classes, without sacrificing overall labeling accuracy.",
"This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art non-parametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem.",
"We present a fast approximate nearest neighbor algorithm for semantic segmentation. Our algorithm builds a graph over superpixels from an annotated set of training images. Edges in the graph represent approximate nearest neighbors in feature space. At test time we match superpixels from a novel image to the training images by adding the novel image to the graph. A move-making search algorithm allows us to leverage the graph and image structure for finding matches. We then transfer labels from the training images to the image under test. To promote good matches between superpixels we propose to learn a distance metric that weights the edges in our graph. Our approach is evaluated on four standard semantic segmentation datasets and achieves results comparable with the state-of-the-art."
]
} |
1604.05848 | 2333052987 | We adopt convolutional neural networks (CNNs) to be our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) are learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large margin-based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve the state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks. | The ensemble methods @cite_27 @cite_19 have achieved great success in machine learning. The idea is to build a strong predictor by assembling many weak predictors. The assumption is that these weak predictors are complementary to each other when they are trained with different subsets of features and training data. Some examples are random forests @cite_47 @cite_48, bagging @cite_30, and boosting @cite_3. Random forests have been successfully used in solving image labeling problems. For example, @cite_19 learned an ensemble of decision jungles to output the semantic class labels of pixels. @cite_34 constructed a random forest that directly maps image patches to their corresponding structured output maps. Our CNN-Ensemble is different from these works.
First, the individual models in their works are very weak, whereas each of our single CNNs has very strong capability. Second, the data used to train each individual model in their works are sampled without discrimination. In contrast, the data fed to each single CNN are sampled differently, so their statistics are different. | {
"cite_N": [
"@cite_30",
"@cite_48",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_47",
"@cite_34"
],
"mid": [
"100189773",
"137456267",
"2145073242",
"2098458263",
"605727707",
"",
"2027327099"
],
"abstract": [
"",
"This practical and easy-to-follow text explores the theoretical underpinnings of decision forests, organizing the vast existing literature on the field within a new, general-purpose forest model. Topics and features: with a foreword by Prof. Y. Amit and Prof. D. Geman, recounting their participation in the development of decision forests; introduces a flexible decision forest model, capable of addressing a large and diverse set of image and video analysis tasks; investigates both the theoretical foundations and the practical implementation of decision forests; discusses the use of decision forests for such tasks as classification, regression, density estimation, manifold learning, active learning and semi-supervised classification; includes exercises and experiments throughout the text, with solutions, slides, demo videos and other supplementary material provided at an associated website; provides a free, user-friendly software library, enabling the reader to experiment with forests in a hands-on manner.",
"Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.",
"Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization.",
"An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity and bias-variance decompositions, and recent progress in information theoretic diversity. Moving on to more advanced topics, the author explains how to achieve better performance through ensemble pruning and how to generate better clustering results by combining multiple clusterings. In addition, he describes developments of ensemble methods in semi-supervised learning, active learning, cost-sensitive learning, class-imbalance learning, and comprehensibility enhancement.",
"",
"In this paper we propose a simple and effective way to integrate structural information in random forests for semantic image labelling. By structural information we refer to the inherently available, topological distribution of object classes in a given image. Different object class labels will not be randomly distributed over an image but usually form coherently labelled regions. In this work we provide a way to incorporate this topological information in the popular random forest framework for performing low-level, unary classification. Our paper has several contributions: First, we show how random forests can be augmented with structured label information. In the second part, we introduce a novel data splitting function that exploits the joint distributions observed in the structured label space for learning typical label transitions between object classes. Finally, we provide two possibilities for integrating the structured output predictions into concise, semantic labellings. In our experiments on the challenging MSRC and CamVid databases, we compare our method to standard random forest and conditional random field classification results."
]
} |
1604.05848 | 2333052987 | We adopt convolutional neural networks (CNNs) to be our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) are learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large margin-based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve the state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks. | Recently, ensemble models @cite_0 @cite_10 @cite_53 have been pervasively adopted in the large-scale ImageNet classification competition @cite_20. Specifically, many deep neural networks (with identical network architecture) are trained, and the fusion of their decisions gives rise to the output. By doing this, the ensemble model is able to enhance its classification accuracy slightly. Our CNN-Ensemble also differs from these works. In their works, each network component is trained with exactly the same data, and the difference among network components mainly originates from nonidentical network initializations.
In contrast, every CNN component is trained on entirely different data, which guides each component to focus on learning different but complementary visual patterns. | {
"cite_N": [
"@cite_0",
"@cite_53",
"@cite_10",
"@cite_20"
],
"mid": [
"2618530766",
"2950179405",
"1686810756",
"2117539524"
],
"abstract": [
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements."
]
} |
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | Due to the large latency and jitter between the USRP and GNU Radio, few high-performance real-time systems have been reported on the USRP GNU Radio platform. Fuxjäger @cite_24 presented a real-time 802.11p OFDM transmitter on USRP GNU Radio. @cite_13 @cite_12 presented a real-time 802.11a/g/p OFDM receiver on USRP GNU Radio.
Neither is a complete system that supports real TCP/IP applications. To support real applications, our system is designed to ensure link-layer reliability and prevent computation overloading. In addition, we design a time-slotted architecture to address the challenges of implementing RPNC. | {
"cite_N": [
"@cite_24",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"1967754176",
"2038894557"
],
"abstract": [
"",
"Experimental research on wireless communication protocols frequently requires full access to all protocol layers, down to and including the physical layer. Software Defined Radio (SDR) hardware platforms, together with real-time signal processing frameworks, offer a basis to implement transceivers that can allow such experimentation and sophisticated measurements. We present a complete Orthogonal Frequency Division Multiplexing (OFDM) receiver implemented in GNU Radio and fitted for operation with an Ettus USRP N210. To the best of our knowledge, this is the first prototype of a GNU Radio based OFDM receiver for this technology. Our receiver comprises all layers up to parsing the MAC header and extracting the payload of IEEE 802.11a/g/p networks. It supports both WiFi with a bandwidth of 20 MHz and IEEE 802.11p DSRC with a bandwidth of 10 MHz. We validated and verified our implementation by means of interoperability tests, and present representative performance measurements. By making the code available as Open Source we provide an easy-to-access system that can be readily used for experimenting with novel signal processing algorithms.",
"We present a solution for enabling standard compliant channel access for a fully software-based Software Defined Radio (SDR) architecture. With the availability of a GNU Radio implementation of an Orthogonal Frequency Division Multiplexing (OFDM) transceiver, there is substantial demand for standard compliant channel access. It has been shown that implementation of CSMA on a host PC is infeasible due to system-inherent delays. The common approach is to fully implement the protocol stack on the FPGA, which makes further updates or modifications to the protocols a complex and time consuming task. We take another approach and investigate the feasibility of a fully software-based solution and show that standard compliant broadcast transmissions are possible with marginal modifications of the FPGA. We envision the use of our system for example in the vehicular networking domain, where broadcast is the main communication paradigm. We show that our SDR solution exactly complies with the IEEE 802.11 Distributed Coordination Function (DCF) as well as Enhanced Distributed Channel Access (EDCA) timings. We were even able to identify shortcomings of commercial systems and prototypes."
]
} |
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | @cite_34 reported an 802.11a/b/g-compliant implementation that supports real TCP/IP applications on the SORA GPP-based SDR platform. Our paper here, on the other hand, focuses on PNC rather than 802.11.
Specifically, we design protocols for PNC and achieve real-time operation on the compute-bound USRP GNU Radio platform. We believe the protocols demonstrated in RPNC could also benefit the implementation of future compute-bound systems on SORA. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2072892827"
],
"abstract": [
"This paper presents Sora, a fully programmable software radio platform on commodity PC architectures. Sora combines the performance and fidelity of hardware software-defined radio (SDR) platforms with the programmability and flexibility of general-purpose processor (GPP) SDR platforms. Sora uses both hardware and software techniques to address the challenges of using PC architectures for high-speed SDR. The Sora hardware components consist of a radio front-end for reception and transmission, and a radio control board for high-throughput, low-latency data transfer between radio and host memories. Sora makes extensive use of features of contemporary processor architectures to accelerate wireless protocol processing and satisfy protocol timing requirements, including using dedicated CPU cores, large low-latency caches to store lookup tables, and SIMD processor extensions for highly efficient physical layer processing on GPPs. Using the Sora platform, we have developed a few demonstration wireless systems, including SoftWiFi, an 802.11a/b/g implementation that seamlessly interoperates with commercial 802.11 NICs at all modulation rates, and SoftLTE, a 3GPP LTE uplink PHY implementation that supports up to 43.8Mbps data rate."
]
} |
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | A simplified version of PNC, called analog network coding (ANC) @cite_2, was implemented in TWRN. Recently, Wang and Mao @cite_4 extended ANC to support QAM modulation, and designed channel estimation methods for ANC.
Although ANC is simple to implement, its disadvantage is that the relay amplifies the noise along with the signal before forwarding it, causing error propagation. In this paper, we consider the original PNC based on XOR mapping @cite_36. | {
"cite_N": [
"@cite_36",
"@cite_4",
"@cite_2"
],
"mid": [
"1975099099",
"2334333116",
"2152496949"
],
"abstract": [
"A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.",
"The applicability of analog network coding (ANC) to a wireless network is constrained by several limitations: 1) some ANC schemes demand fine-grained frame-level synchronization, which cannot be practically achieved in a wireless network; 2) others support only a specific type of modulation or require equal frame size in concurrent transmissions. In this paper, a new ANC scheme, called restriction-free analog network coding (RANC), is developed to eliminate the above limitations. It incorporates several function blocks, including frame boundary detection, joint channel estimation, waveform recovery, circular channel estimation, and frequency offset estimation, to support random concurrent transmissions with arbitrary frame sizes in a wireless network with various linear modulation schemes. To demonstrate the distinguished features of RANC, two network applications are studied. In the first application, RANC is applied to support a new relaying scheme called multi-way relaying, which significantly improves the spectrum efficiency as compared to two-way relaying. In the second application, RANC enables random-access-based ANC in an ad hoc network where flow compensation can be gracefully exploited to further improve the throughput performance. RANC and its network applications are implemented and evaluated on universal software radio peripheral (USRP) software radio platforms. Extensive experiments confirm that all function blocks of RANC work effectively without being constrained by the above limitations. The overall performance of RANC is shown to approach the ideal case of interference-free communications. The results of experiments in a real network setup demonstrate that RANC significantly outperforms existing ANC schemes and achieves constraint-free ANC in wireless networks.",
"Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical 2-way relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding."
]
} |
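The XOR mapping contrasted with ANC in the row above can be illustrated with a minimal sketch (my own illustration, not code from any cited paper): for BPSK, the relay can map the superposed signal of the two end nodes directly to the XOR of their bits, without decoding either bit individually.

```python
# Sketch: XOR mapping at a PNC relay for BPSK (bit 0 -> -1, bit 1 -> +1).
# The superposed sample a + b is +/-2 when the bits agree (XOR = 0)
# and 0 when they differ (XOR = 1), so a threshold at |y| = 1 suffices.

def bpsk(bit: int) -> int:
    return 1 if bit else -1

def pnc_xor_map(y: float) -> int:
    """Map a (possibly noisy) superposed BPSK sample to the XOR bit."""
    return 0 if abs(y) > 1.0 else 1

for a in (0, 1):
    for b in (0, 1):
        y = bpsk(a) + bpsk(b)           # noiseless superposition at the relay
        assert pnc_xor_map(y) == a ^ b  # relay obtains XOR without decoding a, b
```

With moderate noise, the same threshold decision still works as long as the sample is not pushed across |y| = 1; an ANC relay would instead forward an amplified copy of the noisy sum, which is exactly how the noise propagates to the end nodes.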
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | Lu, Wang, Liew and Zhang @cite_9 presented the first PNC implementation based on OFDM, and evaluated the PHY-layer SNR/BER performance through experiments. @cite_23 also considered an OFDM-based PNC system, and focused on the problem of carrier frequency offset (CFO) mismatch between the end nodes and the relay.
The proposed iterative algorithm was evaluated through an FPGA implementation and experiments. The works by Chen, Haley and Nguyen @cite_20 and @cite_25 both implemented a single-carrier asynchronous PNC system on USRP, and studied signal processing issues (e.g., CFO estimation and compensation) at the relay. In contrast, our paper here focuses on system-level implementation issues. Specifically, we focus on the design of the MAC layer and the ARQ protocol for TWRN, and we address reliability and real-time challenges for PNC systems on a compute-bound GPP-based SDR platform. | {
"cite_N": [
"@cite_9",
"@cite_20",
"@cite_25",
"@cite_23"
],
"mid": [
"",
"2057233343",
"2194070780",
"2203527323"
],
"abstract": [
"",
"Physical layer network coding (PNC) has the potential to improve the spectral efficiency of wireless relay communications by utilising the superposition of signal propagation. In the ideal case, PNC can double the throughput of a symmetric relay network when compared to conventional time division multiplexing. However, in practice imperfect channel conditions and hardware can lead to time and frequency offsets at the relay receiver that degrade PNC performance.",
"Physical layer network coding has attracted extensive theoretical interest, although relatively little research has been done in support of deployment to wireless networks where internode synchronization is difficult to achieve. In particular, wireless networks constructed with inexpensive and commercially available software defined radio technology, or more generally, radio front-end samplers connected to internet-based remote processors (i.e., the Internet of Things network) may exhibit large time, frequency, and phase offsets that are difficult to control. In this paper, we define an asynchronous discrete-time model that accounts for these impairments as part of the information transfer between network users and a relay. Derived from this model are maximum likelihood algorithms for relay parameter estimation and a symbol decoder inspired from asynchronous multi-user detection. Additionally, null space-based frequency offset estimation that reduces computational complexity is proposed. Simulation results and the design and performance of a two-user system implemented with the Universal Software Radio Peripheral (USRP) platform and GNU radio are included to demonstrate the proof of concept. Our results indicate that the physical layer network coding technique can be successfully deployed and yields significant benefits even in the presence of impairments found in practical settings.",
"In this paper we consider physical-layer network coding (PLNC) in OFDM-based two-way relaying systems. Practically, a key impairment for the application of PLNC is the carrier frequency offset (CFO) mismatch between the sources and the relay, which can not be compensated completely in the multiple-access (MA) phase. As this CFO mismatch results in inter-carrier interference (ICI) for OFDM transmissions, practical CFO compensation and ICI cancelation strategies are investigated to mitigate the impairment for a-posterior probability (APP) based PLNC decoders at the relay. Furthermore, we perform hardware implementation of the two-way relaying network employing an long term evolution (LTE) near parametrization adapted to PLNC. The APP-based decoding schemes with CFO compensation and ICI cancelation are applied on this real-time transmission platform to verify the analytical results."
]
} |
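The carrier frequency offset (CFO) problem in the row above can be made concrete with a small sketch (an illustration under textbook assumptions, not the cited compensation algorithms): a residual CFO rotates each received sample by a phase that grows linearly with the sample index, which is what degrades PNC decoding if left uncompensated.

```python
# Sketch: effect of a residual carrier frequency offset (CFO) on received
# samples. A CFO of delta_f Hz rotates sample n by 2*pi*delta_f*n*T, so the
# constellation spins over time unless the receiver compensates for it.
import cmath
import math

def apply_cfo(samples, delta_f, sample_rate):
    t = 1.0 / sample_rate
    return [s * cmath.exp(2j * math.pi * delta_f * n * t)
            for n, s in enumerate(samples)]

tx = [1 + 0j] * 4                                    # four identical pilots
rx = apply_cfo(tx, delta_f=100.0, sample_rate=1000.0)

# Phase grows linearly with the sample index: 0.2*pi per sample here.
phases = [cmath.phase(s) for s in rx]
assert abs(phases[1] - 0.2 * math.pi) < 1e-9
assert abs(phases[3] - 0.6 * math.pi) < 1e-9
```

Estimating the per-sample phase slope from known pilots and de-rotating is the basic idea behind the CFO compensation schemes these works study; in PNC the added difficulty is that the relay sees two superposed signals with two different offsets.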
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | @cite_18 presented the first real-time implementation of a PNC system, and designed a burst-mode MAC protocol where a dedicated beacon frame triggers a burst of uplink transmissions.
The current paper extends @cite_18 to a more general time-slotted MAC protocol and demonstrates a complete system supporting real TCP/IP applications. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2066938847"
],
"abstract": [
"This paper presents the first real-time physical-layer network coding (PNC) prototype for the two-way relay wireless channel (TWRC). Theoretically, PNC could boost the throughput of TWRC by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been few experimental and implementation efforts. We built the first prototype of PNC about a year ago. It was, however, an offline system in which an offline PNC decoder was used at the relay. For a real-time PNC system, there are many additional challenges, including the needs for tighter coordination of the transmissions by the two end nodes, fast real-time PNC decoding at the relay, and a PNC-compatible retransmission scheme (i.e., an ARQ protocol) to ensure reliability of packet delivery. In this paper, we describe a real-time PNC prototype, referred to as RPNC, that provides practical solutions to these challenges. Indoor environment experimental results show that RPNC boosts the throughput of TWRC by a factor of 2 compared with TS, as predicted theoretically. RPNC prototype provides an interface to the application layer, with which we demonstrate the exchange of two image data files between the two end nodes."
]
} |
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | @cite_10 proposed a MAC protocol that supports ANC in multihop wireless networks, and @cite_33 proposed a MAC protocol that supports PNC in multihop wireless networks. Nodes in the network collect the queue status of their neighboring nodes through CSMA, and form PNC pairs.
Then, the relay node coordinates a PNC pair to start PNC transmissions. He and Liew @cite_0 investigated ARQ designs for PNC systems @cite_19 in multihop wireless networks, and leveraged overheard packets together with coded packets to improve throughput. None of the above works actually proved their concepts via a prototype. By contrast, our work here includes a working prototype supporting real TCP/IP applications on an SDR platform. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_10",
"@cite_33"
],
"mid": [
"1929963701",
"2085800565",
"1975110788",
"2113092949"
],
"abstract": [
"This paper investigates Automatic Repeat request (ARQ) designs for Physical-layer Network Coding (PNC) systems. Most prior work related to PNC explores its use in Two-Way Relay Channel (TWRC). We have previously found that, besides TWRC, there are many other PNC building blocks—building blocks are simple small network structures that can be used to construct a large network. In some of these PNC building blocks, the receivers can obtain side information through overhearing. Although such overheard information is not the target information that the receivers desire, the receivers can exploit the overheard information together with a network-coded packet received to obtain a desired native packet. This can yield substantial throughput gain. Our previous study, however, assumed what is sent always gets received. In practice, that is not the case. Error control is needed to ensure reliable communication. This paper focuses on ARQ designs for ensuring reliable PNC communication. The availability of overheard Information and its potential exploitation make the ARQ design of a network-coded system different from that of a non-network-coded system. In this paper, we lay out the fundamental considerations for such ARQ designs: 1) we put forth a framework to track the stored coded packets and overheard packets to increase the chance of packet extraction, and derive the throughput gain achieved therefore; 2) we investigate two variations of PNC ARQ, coupled and non-coupled ARQs, and prove that non-coupled ARQ is more efficient; 3) we show how to optimize parameters in PNC ARQ—specifically the window size and the ACK frequency—to minimize the throughput degradation caused by ACK feedback overhead and wasteful retransmissions due to lost ACK.",
"This paper investigates the fundamental building blocks of physical-layer network coding (PNC). Most prior work on PNC focused on its application in a simple two-way-relay channel (TWRC) consisting of three nodes only. Studies of the application of PNC in general networks are relatively few. This paper is an attempt to fill this gap. We put forth two ideas: 1) a general network can be decomposed into small building blocks of PNC, referred to as the PNC atoms, for scheduling of PNC transmissions; and 2) we identify nine PNC atoms, with TWRC being one of them. Three major results are as follows. First, using the decomposition framework, the throughput performance of PNC is shown to be significantly better than those of the traditional multi-hop scheme and the conventional network coding scheme. For example, under heavy traffic volume, PNC can achieve 100 throughput gain relative to the traditional multi-hop scheme. Second, PNC decomposition based on a variety of different PNC atoms can yield much better performance than PNC decomposition based on the TWRC atom alone. Third, three out of the nine atoms are most important to good performance. Specifically, the decomposition based on these three atoms is good enough most of the time, and it is not necessary to use the other six atoms.",
"Analog network coding (ANC) is effective in improving spectrum efficiency. To coordinate ANC among multiple nodes without relying on complicated scheduling algorithm and network optimization, a new random access MAC protocol, called ANC-ERA, is developed to dynamically form ANC-cooperation groups in an ad hoc network. ANC-ERA includes several key mechanisms to maintain high performance in medium access. First, network allocation vectors (NAV) of control frames are properly set to avoid over-blocking of channel access. Second, a channel occupation frame (COF) is added to protect vulnerable periods during the formation of ANC cooperation. Third, an ACK diversity mechanism is designed to reduce potentially high ACK loss probability in ANC-based wireless networks. Since forming an ANC cooperation relies on bi-directional traffic between the initiator and the cooperator, the throughput gain from ANC drops dramatically if bi-directional traffic is not available. To avoid this issue, the fourth key mechanism, called flow compensation, is designed to form different types of ANC cooperation among neighboring nodes of the initiator and the cooperator. Both theoretical analysis and simulations are conducted to evaluate ANC-ERA. Performance results show that ANC-ERA works effectively in ad hoc networks and significantly outperforms existing random access MAC protocols.",
"Physical-layer network coding (PNC) is a promising approach for wireless networks. It allows nodes to transmit simultaneously. Due to the difficulties of scheduling simultaneous transmissions, existing works on PNC are based on simplified medium access control (MAC) protocols, which are not applicable to general multihop wireless networks, to the best of our knowledge. In this paper, we propose a distributed MAC protocol that supports PNC in multihop wireless networks. The proposed MAC protocol is based on the carrier sense multiple access (CSMA) strategy and can be regarded as an extension to the IEEE 802.11 MAC protocol. In the proposed protocol, each node collects information on the queue status of its neighboring nodes. When a node finds that there is an opportunity for some of its neighbors to perform PNC, it notifies its corresponding neighboring nodes and initiates the process of packet exchange using PNC, with the node itself as a relay. During the packet exchange process, the relay also works as a coordinator which coordinates the transmission of source nodes. Meanwhile, the proposed protocol is compatible with conventional network coding and conventional transmission schemes. Simulation results show that the proposed protocol is advantageous in various scenarios of wireless applications."
]
} |
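The benefit of combining overheard packets with coded packets, mentioned in the ARQ row above, boils down to a one-line XOR extraction. The sketch below uses hypothetical packet contents and is not the cited ARQ design itself:

```python
# Sketch: extracting a native packet from an XOR-coded broadcast.
# Node A holds its own packet pkt_a and receives pkt_a XOR pkt_b from the
# relay; a single XOR recovers pkt_b without a second downlink time slot.

def xor_bytes(x: bytes, y: bytes) -> bytes:
    assert len(x) == len(y)
    return bytes(p ^ q for p, q in zip(x, y))

pkt_a = b"HELLO_B!"                  # hypothetical payload from node A
pkt_b = b"HELLO_A!"                  # hypothetical payload from node B
coded = xor_bytes(pkt_a, pkt_b)      # what the relay broadcasts

assert xor_bytes(coded, pkt_a) == pkt_b  # node A recovers B's packet
assert xor_bytes(coded, pkt_b) == pkt_a  # node B recovers A's packet
```

The ARQ design question these works tackle is what to retransmit when such coded or overheard packets are lost, since a lost coded packet affects two native packets at once.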
1604.05811 | 2342097276 | This paper presents the first reliable physical-layer network coding (PNC) system that supports real TCP/IP applications for the two-way relay network (TWRN). Theoretically, PNC could boost the throughput of TWRN by a factor of 2 compared with traditional scheduling (TS) in the high signal-to-noise ratio (SNR) regime. Although there have been many theoretical studies on PNC performance, there have been relatively few experimental and implementation efforts. Our earlier PNC prototype, built in 2012, was an offline system that processed signals offline. For a system that supports real applications, signals must be processed online in real-time. Our real-time reliable PNC prototype, referred to as RPNC, solves a number of key challenges to enable the support of real TCP/IP applications. The enabling components include: 1) a time-slotted system that achieves μs-level synchronization for the PNC system; 2) reduction of PNC signal processing complexity to meet real-time constraints; 3) an ARQ design tailored for PNC to ensure reliable packet delivery; 4) an interface to the application layer. We took on the challenge to implement all the above with general-purpose processors in PC through an SDR platform rather than ASIC or FPGA. With all these components, we have successfully demonstrated image exchange with TCP and two-party video conferencing with UDP over RPNC. Experimental results show that the achieved throughput approaches the PHY-layer data rate at high SNR, demonstrating the high efficiency of the RPNC system. Index Terms—Network coding, physical-layer network coding, two-way relay network, implementation, ARQ, TCP/IP. | The Network Time Protocol @cite_1 and the IEEE 1588 Precision Time Protocol @cite_17 are widely used in wired networks to synchronize the clocks of connected nodes. In wireless networks (e.g., cellular networks), time synchronization is usually implemented through the air interface.
Unlike cellular networks, RPNC is based on a distributed wireless architecture (similar to IEEE 802.11), where there is no control channel for time synchronization. | {
"cite_N": [
"@cite_1",
"@cite_17"
],
"mid": [
"2288632536",
"2537902178"
],
"abstract": [
"The Network Time Protocol (NTP) is widely used to synchronize computer clocks in the Internet. This document describes NTP version 4 (NTPv4), which is backwards compatible with NTP version 3 (NTPv3), described in RFC 1305, as well as previous versions of the protocol. NTPv4 includes a modified protocol header to accommodate the Internet Protocol version 6 address family. NTPv4 includes fundamental improvements in the mitigation and discipline algorithms that extend the potential accuracy to the tens of microseconds with modern workstations and fast LANs. It includes a dynamic server discovery scheme, so that in many cases, specific server configuration is not required. It corrects certain errors in the NTPv3 design and implementation and includes an optional extension mechanism. [STANDARDS-TRACK]",
"This paper discusses the major features and design objectives of the IEEE-1588 standard. Recent performance results of prototype implementations of this standard in an Ethernet environment are presented. Potential areas of application of this standard are outlined."
]
} |
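The wired-network synchronization cited in the row above rests on a simple four-timestamp exchange. The following is a textbook NTP-style sketch (not the RPNC air-interface mechanism), which assumes roughly symmetric one-way delays:

```python
# Sketch: NTP-style clock offset and delay estimation (RFC 5905 style).
# t1: client send time, t2: server receive time, t3: server send time,
# t4: client receive time (t1, t4 on the client clock; t2, t3 on the server's).

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)           # round trip minus server hold time
    return offset, delay

# Synthetic timestamps in ms: the server clock is 5 ms ahead, the one-way
# delay is 2 ms each way, and the server holds the request for 1 ms.
t1, t2, t3, t4 = 0.0, 7.0, 8.0, 5.0
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
assert offset == 5.0 and delay == 4.0
```

The asymmetric-delay error this formula cannot remove is one reason distributed wireless systems like RPNC need their own over-the-air synchronization instead.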
1604.05972 | 2336359849 | More than fifty years ago, Bellman asked for the best escape path within a known forest but for an unknown starting position. This deterministic finite path is the shortest path that leads out of a given environment from any starting point. There are some worst case positions where the full path length is required. Up to now such a fixed ultimate optimal escape path for a known shape for any starting position is only known for some special convex shapes (i.e., circles, strips of a given width, fat convex bodies, some isosceles triangles). Therefore, we introduce a different, simple and intuitive escape path, the so-called certificate path. This escape path depends on the starting position s and takes the distances from s to the outer boundary of the environment into account. Due to the additional information, the certificate path always (for any position s) leaves the environment earlier than the ultimate escape path, in the above convex examples. Next we assume that less information is available. Neither the precise shape of the environment, nor the location of the starting point is known. For a class of environments (convex shapes and shapes with kernel positions), we design an online strategy that always leaves the environment. We show that the path length for leaving the environment is always shorter than 3.318764 times the length of the corresponding certificate path. We also give a lower bound of 3.313126, which shows that for the above class of environments the factor 3.318764 is (almost) tight. | For circles and fat convex bodies, it was shown that the diameter is the ultimate optimal escape path; see Finch and Wetzel @cite_2. For the infinite strip of width @math, the ultimate optimal escape path is due to Zalgaller @cite_24 @cite_0. For the simple equilateral triangle of side length @math, the zig-zag path of Besicovitch @cite_22 of length @math is optimal; see also @cite_19.
Furthermore, in 1961 Gluss @cite_5 introduced the problem of searching for a circle @math of given radius @math at a given distance @math from the start @math. Two different cases can be considered: either @math is inside @math or not. Interestingly, in the latter case and for @math, a certificate path with length @math and an arc of length @math is the best one can do. | {
"cite_N": [
"@cite_22",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"30004998",
"",
"2167932946",
"1966842210",
"2070044474",
"2094588925"
],
"abstract": [
"",
"",
"A long standing conjecture is that the Besicovitch triangle, i.e., an equilateral triangle with side √(28/27), is a worm-cover. We will show that indeed there exists a class of isosceles triangles that includes the above equilateral triangle, where each triangle from the class is a worm-cover. These triangles are defined so that the shortest symmetric z-arc stretched from side to side and touching the base would have length one.",
"From a random point O in an infinite strip of width 1, we move in a randomly chosen direction along a curve Г. What shape of Г gives the minimum value to the expectation of the length of the path that reaches the boundary of the strip? After certain arguments suggesting that the desired curve belongs to one of four classes, it is proved that the best curve in those classes consists of four parts: an interval OA of length a, its smooth continuation, an arc AB of radius 1 and small length φ, an interval BD that is smooth continuation of the arc, and an interval DF (with a corner at the point D). If O is the origin and OA is the x axis of a coordinate system, then the coordinates of the above-mentioned points are as follows: A(a, 0), B(a + sin φ, 1 − cos φ), F(a, 1), and @math For the best curve, we have: a ≈ 0.814 and φ ≈ 0.032. Related questions are discussed. Bibliography: 9 titles.",
"Call a path an escape path if it eventually leads out of the forest no matter what the initial starting point or the relative orientations of the path and forest. To solve the \"lost in a forest\" problem we must find the \"best\" escape path. Bellman proposed two different interpretations of \"best,\" one in which the maximum time to escape is minimized, and one in which the expected time to escape is minimized. A third interpretation (see",
"The problem has been considered by Isbell of determining the path that minimizes the maximum distance along the path to a line in the same plane whose distance from the starting point of the search is known, while its direction is unknown. We consider in this article the analogous problem for the search for a circle of known radius, of known distance from a starting point in its plane. It is further shown that Isbell's solution is a limiting case of the problem posed as the radius of the circle tends to infinity. The problem appears to be of some practical significance, since it is equivalent to that of searching for an object a given distance away which will be spotted when we get sufficiently close—that is, within a specific radius."
]
} |
1604.05972 | 2336359849 | More than fifty years ago, Bellman asked for the best escape path within a known forest but for an unknown starting position. This deterministic finite path is the shortest path that leads out of a given environment from any starting point. There are some worst case positions where the full path length is required. Up to now such a fixed ultimate optimal escape path for a known shape for any starting position is only known for some special convex shapes (i.e., circles, strips of a given width, fat convex bodies, some isosceles triangles). Therefore, we introduce a different, simple and intuitive escape path, the so-called certificate path. This escape path depends on the starting position s and takes the distances from s to the outer boundary of the environment into account. Due to the additional information, the certificate path always (for any position s) leaves the environment earlier than the ultimate escape path, in the above convex examples. Next we assume that less information is available. Neither the precise shape of the environment, nor the location of the starting point is known. For a class of environments (convex shapes and shapes with kernel positions), we design an online strategy that always leaves the environment. We show that the path length for leaving the environment is always shorter than 3.318764 times the length of the corresponding certificate path. We also give a lower bound of 3.313126, which shows that for the above class of environments the factor 3.318764 is (almost) tight. | We will see that our solution is a specific logarithmic spiral. In general, logarithmic spirals are natural candidates for optimal competitive search strategies, but in almost all cases the optimality remains a conjecture; see @cite_8 @cite_12 @cite_14 @cite_9 @cite_16 . In @cite_1 the optimality of spiral search was shown for searching a point in the plane with a radar. Many other conjectures are still open.
For example, Finch and Zhu @cite_9 considered the problem of searching for a line in the plane. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_16",
"@cite_12"
],
"mid": [
"1587220204",
"2019729116",
"1770419565",
"2038342425",
"",
""
],
"abstract": [
"When searching for a planar line, if given no further information, one should adopt a logarithmic spiral strategy (although unproven).",
"In this paper we initiate a new area of study dealing with the best way to search a possibly unbounded region for an object. The model for our search algorithms is that we must pay costs proportional to the distance of the next probe position relative to our current position. This model is meant to give a realistic cost measure for a robot moving in the plane. We also examine the effect of decreasing the amount of a priori information given to search problems. Problems of this type are very simple analogues of non-trivial problems on searching an unbounded region, processing digitized images, and robot navigation. We show that for some simple search problems, knowing the general direction of the goal is much more informative than knowing the distance to the goal.",
"Logarithmic spirals are conjectured to be optimal escape paths from a half plane ocean. Assuming this, we find the rate of increase for both min-max and min-mean interpretations of optimal''. For the one-dimensional analog, which we call logarithmic coils, our min-mean solution differs from a widely-cited published account.",
"Searching for a point in the plane is a well-known search game problem introduced in the early eighties. The best known search strategy is given by a spiral and achieves a competitive ratio of 17.289 ... It was shown by Gal [14] that this strategy is the best strategy among all monotone and periodic strategies. Since then it was unknown whether the given strategy is optimal in general. This paper settles this old open fundamental search problem and shows that spiral search is indeed optimal. The given problem can be considered as the continuous version of the well-known m-ray search problem and also appears in several non-geometric applications and modifications. Therefore the optimality of spiral search is an important question considered by many researchers in the last decades. We answer the logarithmic spiral conjecture for the given problem. The lower bound construction might be helpful for similar settings, it also simplifies existing proofs on classical m-ray search.",
"",
""
]
} |
1604.05534 | 2339783534 | Link capacity dimensioning is the periodic task where ISPs have to make provisions for sudden traffic bursts and network failures to assure uninterrupted operations. This provision comes in the form of link working capacities with noticeable amounts of headroom, i.e., spare capacities that are used in case of congestion or network failures. Distributed routing protocols like OSPF provide convergence after network failures and have proven their reliable operation over decades, but require overprovisioning and headroom of over 50%. However, SDN has recently been proposed to either replace or work together with OSPF in routing Internet traffic. This paper addresses the question of how to robustly dimension the link capacities in emerging hybrid SDN/OSPF networks. We analyze the networks with various implementations of hybrid SDN/OSPF control planes, and show that our idea of SDN Partitioning requires smaller amounts of spare capacity compared to legacy or other hybrid SDN/OSPF schemes, outperformed only by a full SDN deployment. | Background: OSPF is a distributed routing protocol that requires its local implementation in every router. This ensures that all routers in the network can exchange topology information, which in turn allows each router to locally determine for each destination the most suitable port to forward packets by running a shortest-path algorithm. OSPF's information updates are referred to as Link State Advertisements (LSAs) and a router participating in OSPF distributes all its topological information by flooding LSAs throughout the entire network. In case of a network failure, the routers adjacent to that failure start flooding the corresponding topology updates. Each router that receives such an update instantly recomputes its routing and reconfigures its packet forwarding accordingly. This process is referred to as OSPF convergence and may take tens of seconds @cite_9 in large IP networks. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2122925822"
],
"abstract": [
"Open Shortest Path First (OSPF), a link state routing protocol, is a popular interior gateway protocol (IGP) in the Internet. Wide spread deployment and years of experience running the protocol have motivated continuous improvements in its operation as the nature and demands of the routing infrastructures have changed. Modern routing domains need to maintain a very high level of service availability. Hence, OSPF needs to achieve fast convergence to topology changes. Also, the ever-growing size of routing domains, and possible presence of wireless mobile adhoc network (MANET) components, requires highly scalable operation on part of OSPF to avoid routing instability. Recent years have seen significant efforts aimed at improving OSPF's convergence speed as well as scalability and extending OSPF to achieve seamless integration of mobile adhoc networks with conventional wired networks. In this paper, we present a comprehensive survey of these efforts."
]
} |
1604.05534 | 2339783534 | Link capacity dimensioning is the periodic task where ISPs have to make provisions for sudden traffic bursts and network failures to assure uninterrupted operations. This provision comes in the form of link working capacities with noticeable amounts of headroom, i.e., spare capacities that are used in case of congestion or network failures. Distributed routing protocols like OSPF provide convergence after network failures and have proven their reliable operation over decades, but require overprovisioning and headroom of over 50%. However, SDN has recently been proposed to either replace or work together with OSPF in routing Internet traffic. This paper addresses the question of how to robustly dimension the link capacities in emerging hybrid SDN/OSPF networks. We analyze the networks with various implementations of hybrid SDN/OSPF control planes, and show that our idea of SDN Partitioning requires smaller amounts of spare capacity compared to legacy or other hybrid SDN/OSPF schemes, outperformed only by a full SDN deployment. | SDN Partitioning, if deployed as in Figure b, allows advertising routing updates individually customized for each of the three sub-domains, which in turn enables controlling the routing of traffic flows between these sub-domains. It should be noted that the OSPF routing of sub-domain-internal traffic is not affected at all, and therefore also cannot be controlled by the network management application. To enable SDN Partitioning's separation of the OSPF domain into distinct sub-domains, a few SDN-enabled routers must be placed in strategic positions in the network, such that their removal cuts the topology into disconnected components.
In graph theory parlance, this means that the SDN-enabled routers must constitute a vertex separator. Obviously, the way the network is clustered into sub-domains is determined solely by the operator's choice of nodes that will be replaced by SDN nodes, i.e., the sub-domains are determined once at the outset and can be changed only by adding new SDN nodes or changing the position of existing ones. The three example networks used in the numerical evaluation have been partitioned with the algorithm that we introduced in @cite_11 . | {
"cite_N": [
"@cite_11"
],
"mid": [
"1619430436"
],
"abstract": [
"Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and Open-Flow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works."
]
} |
1604.05273 | 2950138363 | We introduce a setting for learning possibilistic logic theories from defaults of the form "if alpha then typically beta". We first analyse this problem from the point of view of machine learning theory, determining the VC dimension of possibilistic stratifications as well as the complexity of the associated learning problems, after which we present a heuristic learning algorithm that can easily scale to thousands of defaults. An important property of our approach is that it is inherently able to handle noisy and conflicting sets of defaults. Among other things, this allows us to learn possibilistic logic theories from crowdsourced data and to approximate propositional Markov logic networks using heuristic MAP solvers. We present experimental results that demonstrate the effectiveness of this approach. | Reasoning with defaults of the form "if @math then typically @math", denoted as @math , has been widely studied @cite_2 @cite_0 @cite_8 @cite_14 @cite_27 @cite_23 . A central problem in this context is to determine what other defaults can be derived from a given input set. Note, however, that the existing approaches for reasoning about default rules all require some form of consistency (e.g. the input set cannot contain both @math and @math ). As a result, these approaches cannot directly be used for reasoning about noisy crowdsourced defaults. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_2"
],
"mid": [
"2035416838",
"2167907160",
"2143336154",
"2136661655",
"2003374324",
"2080142817"
],
"abstract": [
"Abstract In recent years, two conceptually different interpretations of default expressions have been advanced: extensional interpretations, in which defaults are regarded as prescriptions for extending one's set of beliefs, and conditional interpretations, in which defaults are regarded as beliefs whose validity is bound to a particular context. The two interpretations possess virtues and limitations that are practically orthogonal to each other. The conditional interpretations successfully resolve arguments of different “specificity” (e.g., “penguins don't fly in spite of being birds”) but fail to capture arguments of “irrelevance” (e.g., concluding “ red birds fly” from “birds fly”). The opposite is true for the extensional interpretations. This paper develops a new account of defaults, called conditional entailment , which combines the benefits of the two interpretations. Like prioritized circumscriptions, conditional entailment resolves arguments by enforcing priorities among defaults. However, instead of having to be specified by the user, these priorities are extracted automatically from the knowledge base. Similarly, conditional entailment possesses a sound and complete proof theory, based on interacting arguments and amenable to implementation in conventional ATMSs.",
"Abstract This paper presents a logical approach to nonmonotonic reasoning based on the notion of a nonmonotonic consequence relation. A conditional knowledge base, consisting of a set of conditional assertions of the type if … then … , represents the explicit defeasible knowledge an agent has about the way the world generally behaves. We look for a plausible definition of the set of all conditional assertions entailed by a conditional knowledge base. In a previous paper, Kraus and the authors defined and studied preferential consequence relations. They noticed that not all preferential relations could be considered as reasonable inference procedures. This paper studies a more restricted class of consequence relations, rational relations. It is argued that any reasonable nonmonotonic inference procedure should define a rational relation. It is shown that the rational relations are exactly those that may be represented by a ranked preferential model, or by a (nonstandard) probabilistic model. The rational closure of a conditional knowledge base is defined and shown to provide an attractive answer to the question of the title. Global properties of this closure operation are proved: it is a cumulative operation. It is also computationally tractable. This paper assumes the underlying language is propositional.",
"Recent progress towards unifying the probabilistic and preferential models semantics for non-monotonic reasoning has led to a remarkable observation: Any consistent system of default rules imposes an unambiguous and natural ordering on these rules which, to emphasize its simple and basic character, we term \"Z-ordering.\" This ordering can be used with various levels of refinement, to prioritize conflicting arguments, to rank the degree of abnormality of states of the world, and to define plausible consequence relationships. This paper defines the Z-ordering, briefly mentions its semantical origins, and illustrates two simple entailment relationships induced by the ordering. Two extensions are then described, maximum-entropy and conditional entailment, which trade in computational simplicity for semantic refinements.",
"An approach to nonmonotonic reasoning that combines the principle of infinitesimal probabilities with that of maximum entropy, thus extending the inferential power of the probabilistic interpretation of defaults, is proposed. A precise formalization of the consequences entailed by a conditional knowledge base is provided, the computational machinery necessary for drawing these consequences is developed, and the behavior of the maximum entropy approach is compared to related work in default reasoning. The resulting formalism offers a compromise between two extremes: the cautious approach based on the conditional interpretations of defaults and the bold approach based on minimizing abnormalities. >",
"Abstract This short paper relates the conditional object-based and possibility theory-based approaches for reasoning with conditional statements pervaded with exceptions, to other methods in nonmonotonic reasoning which have been independently proposed: namely, Lehmann's preferential and rational closure entailments which obey normative postulates, the infinitesimal probability approach, and the conditional (modal) logics-based approach. All these methods are shown to be equivalent with respect to their capabilities for reasoning with conditional knowledge although they are based on different modeling frameworks. It thus provides a unified understanding of nonmonotonic consequence relations. More particularly, conditional objects, a purely qualitative counterpart to conditional probabilities, offer a very simple semantics, based on a 3-valued calculus, for the preferential entailment, while in the purely ordinal setting of possibility theory both the preferential and the rational closure entailments can be represented.",
"Abstract Many systems that exhibit nonmonotonic behavior have been described and studied already in the literature. The general notion of nonmonotonic reasoning, though, has almost always been described only negatively, by the property it does not enjoy, i.e. monotonicity. We study here general patterns of nonmonotonic reasoning and try to isolate properties that could help us map the field of nonmonotonic reasoning by reference to positive properties. We concentrate on a number of families of nonmonotonic consequence relations, defined in the style of Gentzen [13]. Both proof-theoretic and semantic points of view are developed in parallel. The former point of view was pioneered by Gabbay [10], while the latter has been advocated by Shoham [38]. Five such families are defined and characterized by representation theorems, relating the two points of view. One of the families of interest, that of preferential relations, turns out to have been studied by Adams [2]. The preferential models proposed here are a much stronger tool than Adams' probabilistic semantics. The basic language used in this paper is that of propositional logic. The extension of our results to first-order predicate calculi and the study of the computational complexity of the decision problems described in this paper will be treated in another paper."
]
} |
1604.05472 | 2341306988 | Due to the environmental impact of fossil fuels and high variability in their prices, there is rising interest in adopting electric vehicles (EVs) by both individuals and governments. Despite the advances in vehicle efficiency and battery capacity, a key hurdle is the inherent interdependence between EV adoption and charging station deployment–EV adoption (and hence, charging demand) increases with the availability of charging stations (operated by service providers) and vice versa. Thus, effective placement of charging stations plays a key role in EV adoption. In the placement problem, given a set of candidate sites, an optimal subset needs to be selected with respect to the concerns of both (a) the charging station service provider, such as the demand at the candidate sites and the budget for deployment, and (b) the EV user, such as charging station reachability and short waiting times at the station. This work addresses these concerns, making the following three novel contributions: (i) a supervised multi-view learning framework using Canonical Correlation Analysis (CCA) for demand prediction at candidate sites, using multiple datasets such as points of interest information, traffic density, and the historical usage at existing charging stations; (ii) a “mixed-packing-and-covering” optimization framework that models competing concerns of the service provider and EV users, which is also extended to optimize government grant allocation to multiple service providers; (iii) an iterative heuristic to solve these problems by alternately invoking knapsack and set cover algorithms. The performance of the demand prediction model and the placement optimization heuristic are evaluated using real world data. In most cases, and especially when budget is scarce, our heuristic achieves an improvement of 10-20% over a naive heuristic, both in finding feasible solutions and maximizing demand.
| With the recent increase in the adoption of EVs, the charging station placement problem has received significant attention. One of the prerequisites of an effective charging station placement is the availability of estimated charging demand at candidate sites. In this context, prior literature has studied the impact of external information on charging station demand. For example, parking demand is combined with the facility location problem in @cite_13 , and the effect of demographic features is studied using regression models in @cite_6 . However, these approaches lack a principled multi-view learning framework to integrate heterogeneous data sets to predict EV charging demand. In the machine learning literature, CCA-based models have been used for multivariate regression @cite_3 @cite_19 @cite_1 ; in this paper, we propose a novel variant of CCA-based multivariate regression. | {
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_13"
],
"mid": [
"2144903813",
"2113616339",
"199364773",
"1984983329",
"2052144254"
],
"abstract": [
"Canonical correlation analysis (CCA) is a classical method for seeking correlations between two multivariate data sets. During the last ten years, it has received more and more attention in the machine learning community in the form of novel computational formulations and a plethora of applications. We review recent developments in Bayesian models and inference methods for CCA which are attractive for their potential in hierarchical extensions and for coping with the combination of large dimensionalities and small sample sizes. The existing methods have not been particularly successful in fulfilling the promise yet; we introduce a novel efficient solution that imposes group-wise sparsity to estimate the posterior of an extended model which not only extracts the statistical dependencies (correlations) between data sets but also decomposes the data into shared and data set-specific components. In statistics literature the model is known as inter-battery factor analysis (IBFA), for which we now provide a Bayesian treatment.",
"Canonical Correlation Analysis (CCA) is a useful technique for modeling dependencies between two (or more) sets of variables. Building upon the recently suggested probabilistic interpretation of CCA, we propose a nonparametric, fully Bayesian framework that can automatically select the number of correlation components, and effectively capture the sparsity underlying the projections. In addition, given (partially) labeled data, our algorithm can also be used as a (semi)supervised dimensionality reduction technique, and can be applied to learn useful predictive features in the context of learning a set of related tasks. Experimental results demonstrate the efficacy of the proposed approach for both CCA as a stand-alone problem, and when applied to multi-label prediction.",
"Electric mobility is considered to be an essential building block of a sustainable transportation paradigm. Particularly for the urban metropolis of the 21st century electric vehicles embody a promise of decreased air pollution and increased quality of living. However, city planners need to muster enormous investments into urban charging infrastructure in advance. We develop a decision support system to help city planners use these funds more efficiently and place charging infrastructure where it is truly needed. We construct a regression model to determine factors that influence the utilization of charge points in Amsterdam, one of the most \"electrified\" cities in the world. We include data on more than 50,000 points of interest, for instance shopping malls and museums, as well as charging data from 273 charging points with 427 individual outlets in the city of Amsterdam. From these influences we derive a decision support system that places charge points where they are expected to experience the highest utilization.",
"Canonical Correlation Analysis (CCA) is a well-known technique for finding the correlations between two sets of multidimensional variables. It projects both sets of variables onto a lower-dimensional space in which they are maximally correlated. CCA is commonly applied for supervised dimensionality reduction in which the two sets of variables are derived from the data and the class labels, respectively. It is well-known that CCA can be formulated as a least-squares problem in the binary class case. However, the extension to the more general setting remains unclear. In this paper, we show that under a mild condition which tends to hold for high-dimensional data, CCA in the multilabel case can be formulated as a least-squares problem. Based on this equivalence relationship, efficient algorithms for solving least-squares problems can be applied to scale CCA to very large data sets. In addition, we propose several CCA extensions, including the sparse CCA formulation based on the 1-norm regularization. We further extend the least-squares formulation to partial least squares. In addition, we show that the CCA projection for one set of variables is independent of the regularization on the other set of multidimensional variables, providing new insights on the effect of regularization on CCA. We have conducted experiments using benchmark data sets. Experiments on multilabel data sets confirm the established equivalence relationships. Results also demonstrate the effectiveness and efficiency of the proposed CCA extensions.",
"Access to electric vehicle (EV) charging stations will affect EV adoption rates, use decisions, electrified mile shares, petroleum demand, and power consumption across times of day. Parking information from more than 30,000 records of personal trips in the Puget Sound, Washington, Regional Council's 2006 Household Activity Survey is used to determine public (nonresidential) parking locations and durations. Regression equations predict parking demand variables (i.e., total vehicle hours per zone or neighborhood and parked time per vehicle trip) as a function of site accessibility, local job and population density, trip attributes, and other variables available in most regions and travel surveys. Several of these variables are key inputs to a mixed-integer programming problem developed to determine optimal location assignments for EV charging stations. The algorithm minimizes EV users' costs for station access while penalizing unmet demand. This useful specification is used to determine the top locations fo..."
]
} |
1604.05472 | 2341306988 | Due to the environmental impact of fossil fuels and high variability in their prices, there is rising interest in adopting electric vehicles (EVs) by both individuals and governments. Despite the advances in vehicle efficiency and battery capacity, a key hurdle is the inherent interdependence between EV adoption and charging station deployment–EV adoption (and hence, charging demand) increases with the availability of charging stations (operated by service providers) and vice versa. Thus, effective placement of charging stations plays a key role in EV adoption. In the placement problem, given a set of candidate sites, an optimal subset needs to be selected with respect to the concerns of both (a) the charging station service provider, such as the demand at the candidate sites and the budget for deployment, and (b) the EV user, such as charging station reachability and short waiting times at the station. This work addresses these concerns, making the following three novel contributions: (i) a supervised multi-view learning framework using Canonical Correlation Analysis (CCA) for demand prediction at candidate sites, using multiple datasets such as points of interest information, traffic density, and the historical usage at existing charging stations; (ii) a “mixed-packing-and-covering” optimization framework that models competing concerns of the service provider and EV users, which is also extended to optimize government grant allocation to multiple service providers; (iii) an iterative heuristic to solve these problems by alternately invoking knapsack and set cover algorithms. The performance of the demand prediction model and the placement optimization heuristic are evaluated using real world data. In most cases, and especially when budget is scarce, our heuristic achieves an improvement of 10-20% over a naive heuristic, both in finding feasible solutions and maximizing demand.
| A useful charging station placement solution needs to take into account the charging demand, budget, coverage, and waiting time at the stations. Existing work has not considered all of these factors simultaneously. For example, placement of charging stations based on predicted demand has been studied in @cite_24 @cite_9 @cite_6 @cite_13 @cite_23 @cite_28 , but they do not simultaneously consider constraints on budget, coverage, and waiting times. Coverage requirements in charging station placement have been modeled as path cover in @cite_2 , and as reachability cover in @cite_26 over the city road network, but these works do not consider the demand satisfied by the deployment and the waiting time faced by the users. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_13"
],
"mid": [
"2181447759",
"2046612982",
"1017683716",
"199364773",
"955844860",
"1979416562",
"2248285432",
"2052144254"
],
"abstract": [
"The short cruising range due to the limited battery supply of current Electric Vehicles (EVs) is one of the main obstacles for a complete transition to E-mobility. Until batteries of higher energy storage density have been developed, it is of utmost importance to deliberately plan the locations of new loading stations for best possible coverage. Ideally the network of loading stations should allow driving from anywhere to anywhere (and back) without running out of energy. We show that minimizing the number of necessary loading stations to achieve this goal is NP-hard and even worse, we can rule out polynomial-time constant approximation algorithms. Hence algorithms with better approximation guarantees have to make use of the special structure of road networks (which is not obvious how to do it). On the positive side, we show with instance based lower bounds that our heuristic algorithms achieve provably good solutions on real-world problem instances.",
"One of the most critical barriers to widespread adoption of electric cars is the lack of charging station infrastructure. Although it is expected that a sufficient number of charging stations will be constructed eventually, due to various practical reasons they may have to be introduced gradually over time. In this paper, we formulate a multi-period optimization model based on a flow-refueling location model for strategic charging station location planning. We also propose two myopic methods and develop a case study based on the real traffic flow data of the Korean Expressway network in 2011. We discuss the performance of the three proposed methods.",
"Certain examples provide systems and methods to identify placement for an electric charging station infrastructure. Certain examples provide systems and methods to generate a deployment plan for one or more electric vehicle charging stations. An example method includes gathering data for a specified geographic area and forecasting a demand for electric vehicles for the specified area. The example method includes modeling driving patterns in the specified area using available data and improving a charging infrastructure model based on the driving pattern and demand forecast information for the specified area. The example method includes generating and providing a recommendation regarding an electric vehicle charging infrastructure and deployment strategy for the specified area based on the improved charging infrastructure model.",
"Electric mobility is considered to be an essential building block of a sustainable transportation paradigm. Particularly for the urban metropolis of the 21st century electric vehicles embody a promise of decreased air pollution and increased quality of living. However, city planners need to muster enormous investments into urban charging infrastructure in advance. We develop a decision support system to help city planners use these funds more efficiently and place charging infrastructure where it is truly needed. We construct a regression model to determine factors that influence the utilization of charge points in Amsterdam, one of the most \"electrified\" cities in the world. We include data on more than 50,000 points of interest, for instance shopping malls and museums, as well as charging data from 273 charging points with 427 individual outlets in the city of Amsterdam. From these influences we derive a decision support system that places charge points where they are expected to experience the highest utilization.",
"A system, method and program product for siting charging stations. A mobility detection module collects traffic data from vehicle sensors distributed in an area. A map matching module maps detected traffic in the area. A vehicle flow module temporally characterizes mapped traffic flow. An electric vehicle (EV) requirements (EVR) evaluator determines an optimal number of charging stations and respective locations for siting the charging stations.",
"Transportation electrification is one of the essential components in the future smart city planning and electric vehicles (EVs) will be integrated into the transportation system seamlessly. Charging stations are the main source of energy for EVs and their locations are critical to the accessibility of EVs in a city. They should be carefully situated so that an EV can access a charging station within its driving range and cruise around anywhere in the city upon being recharged. In this paper, we formulate the Electric Vehicle Charging Station Placement Problem, in which we minimize the total construction cost subject to the constraints for the charging station coverage and the convenience of the drivers for EV charging. We study the properties of the problem, especially its NP-hardness, and propose an efficient greedy algorithm to tackle the problem. We perform a series of simulation whose results show that the greedy algorithm can result in solutions comparable to the mixed-integer programming approach and its computation time is much shorter.",
"Compared to conventional cars, electric vehicles (EVs) still suffer from considerably shorter cruising ranges. Combined with the sparsity of battery loading stations, the complete transition to E-mobility still seems a long way to go. In this paper, we consider the problem of placing as few loading stations as possible so that on any shortest path there are sufficiently many not to run out of energy. We show how to model this problem and introduce heuristics which provide close-to-optimal solutions even in large road networks.",
"Access to electric vehicle (EV) charging stations will affect EV adoption rates, use decisions, electrified mile shares, petroleum demand, and power consumption across times of day. Parking information from more than 30,000 records of personal trips in the Puget Sound, Washington, Regional Council's 2006 Household Activity Survey is used to determine public (nonresidential) parking locations and durations. Regression equations predict parking demand variables (i.e., total vehicle hours per zone or neighborhood and parked time per vehicle trip) as a function of site accessibility, local job and population density, trip attributes, and other variables available in most regions and travel surveys. Several of these variables are key inputs to a mixed-integer programming problem developed to determine optimal location assignments for EV charging stations. The algorithm minimizes EV users' costs for station access while penalizing unmet demand. This useful specification is used to determine the top locations fo..."
]
} |
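Several of the cited placement papers reduce the coverage requirement to a (weighted) set-cover problem and attack it greedily. A minimal sketch of that textbook heuristic follows; the site/demand data structures are illustrative and this is not any cited paper's actual algorithm:

```python
def greedy_set_cover(universe, covers, cost):
    """Greedy weighted set cover: repeatedly pick the candidate site with
    the lowest cost per newly covered demand point, until all of the
    demand points in `universe` are covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # best cost/coverage ratio among sites that still cover something new
        site = min((s for s in covers if covers[s] & uncovered),
                   key=lambda s: cost[s] / len(covers[s] & uncovered))
        chosen.append(site)
        uncovered -= covers[site]
    return chosen
```

This core step carries the classic logarithmic approximation guarantee of greedy set cover; the coverage-constrained placement formulations above layer budget and demand terms on top of it.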
1604.05472 | 2341306988 | Due to the environmental impact of fossil fuels and high variability in their prices, there is rising interest in adopting electric vehicles (EVs) by both individuals and governments. Despite the advances in vehicle efficiency and battery capacity, a key hurdle is the inherent interdependence between EV adoption and charging station deployment–EV adoption (and hence, charging demand) increases with the availability of charging stations (operated by service providers) and vice versa. Thus, effective placement of charging stations plays a key role in EV adoption. In the placement problem, given a set of candidate sites, an optimal subset needs to be selected with respect to the concerns of both (a) the charging station service provider, such as the demand at the candidate sites and the budget for deployment, and (b) the EV user, such as charging station reachability and short waiting times at the station. This work addresses these concerns, making the following three novel contributions: (i) a supervised multi-view learning framework using Canonical Correlation Analysis (CCA) for demand prediction at candidate sites, using multiple datasets such as points of interest information, traffic density, and the historical usage at existing charging stations; (ii) a “mixed-packing-and-covering” optimization framework that models competing concerns of the service provider and EV users, which is also extended to optimize government grant allocation to multiple service providers; (iii) an iterative heuristic to solve these problems by alternately invoking knapsack and set cover algorithms. The performance of the demand prediction model and the placement optimization heuristic is evaluated using real world data. In most cases, and especially when budget is scarce, our heuristic achieves an improvement of 10-20% over a naive heuristic, both in finding feasible solutions and maximizing demand. 
| The optimization problem considered in this work contains both packing and covering constraints, with integer decision variables. Mixed packing and covering problems have been studied for fractional decision variables @cite_33 @cite_15 , as well as for integer decision variables @cite_17 @cite_25 . However, this work introduces an optimization problem with a knapsack packing constraint and a set covering constraint, which has not been studied earlier. | {
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_33",
"@cite_17"
],
"mid": [
"1806207712",
"1545232017",
"2163656645",
"2241989584"
],
"abstract": [
"Recent work has shown that the classical framework of solving optimization problems by obtaining a fractional solution to a linear program (LP) and rounding it to an integer solution can be extended to the online setting using primal-dual techniques. The success of this new framework for online optimization can be gauged from the fact that it has led to progress in several longstanding open questions. However, to the best of our knowledge, this framework has previously been applied to LPs containing only packing or only covering constraints, or minor variants of these. We extend this framework in a fundamental way by demonstrating that it can be used to solve mixed packing and covering LPs online, where packing constraints are given offline and covering constraints are received online. The objective is to minimize the maximum multiplicative factor by which any packing constraint is violated, while satisfying the covering constraints. Our results represent the first algorithm that obtains a polylogarithmic competitive ratio for solving mixed LPs online. We then consider two canonical examples of mixed LPs: unrelated machine scheduling with startup costs, and capacity constrained facility location. We use ideas generated from our result for mixed packing and covering to obtain polylogarithmic-competitive algorithms for these problems. We also give lower bounds to show that the competitive ratios of our algorithms are nearly tight.",
"We consider the problem of fair resource allocation for tasks where a resource can be assigned to at most one task, without any fractional allocation. The system is heterogeneous: capacity and cost may vary across resources, and different tasks may have different resource demand. Due to heterogeneity of resources, the cost of allocating a task in isolation, without any other competing task, may differ significantly from its allocation cost when the task is allocated along with other tasks. In this context, we consider the problem of allocating resource to tasks, while ensuring that the cost is distributed fairly across the tasks, namely, the ratio of allocation cost of a task to its isolation cost is minimized over all tasks. We show that this fair resource allocation problem is strongly NP-Hard even when the resources are of unit size by a reduction from 3-partition. Our central results are an LP rounding based algorithm with an approximation ratio of 2 + O(ε) for the problem when resources are of unit size, and a near-optimal greedy algorithm for a more restricted version. The above fair allocation problem arises for resource allocation in various contexts, such as, allocating computing resources for reservations requests from tenants in a data centre, allocating resources to computing tasks in grid computing, or allocating personnel for tasks in service delivery organizations.",
"We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (a.k.a. mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a 1±ε factor in O(md log(m)/ε²) time, where m is the number of constraints and d is the maximum number of constraints any variable appears in. Our parallel algorithm runs in time polylogarithmic in the input size times 1/ε⁴ and uses a total number of operations comparable to the sequential algorithm. The main contribution is that the algorithms solve mixed packing and covering problems (in contrast to pure packing or pure covering problems, which have only ≤ or only ≥ inequalities, but not both) and run in time independent of the so-called width of the problem.",
"We consider the Knapsack Covering problem subject to a matroid constraint. In this problem, we are given a universe U of n items where item i has attributes: a cost c(i) and a size s(i). We also have a demand D. We are also given a matroid M = (U, I) on the set U. A feasible solution S to the problem is one such that (i) the cumulative size of the items chosen is at least D, and (ii) the set S is independent in the matroid M (i.e. S is in I). The objective is to minimize the total cost of the items selected, ∑_{i ∈ S} c(i). Our main result proves a 2-factor approximation for this problem. The problem described above falls in the realm of mixed packing covering problems. We also consider packing extensions of certain other covering problems and prove that in such cases it is not possible to derive any constant-factor approximations."
]
} |
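The abstract above describes an iterative heuristic that alternates between a knapsack step (the budget/packing side) and a set-cover step (the coverage side). The packing half can be sketched with the standard 0/1 knapsack dynamic program; this is illustrative only, and the paper's actual subroutine may differ:

```python
def knapsack(values, weights, budget):
    """Standard 0/1 knapsack DP over integer budgets: dp[b] holds the best
    total value achievable with capacity b."""
    dp = [0] * (budget + 1)
    for v, w in zip(values, weights):
        for b in range(budget, w - 1, -1):  # reverse scan uses each item once
            dp[b] = max(dp[b], dp[b - w] + v)
    return dp[budget]
```

In the placement setting, each item would be a candidate site with its deployment cost as weight and its predicted demand as value.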
1604.05471 | 2339598427 | With the increase in adoption of Electric Vehicles (EVs), proper utilization of the charging infrastructure is an emerging challenge for service providers. Overstaying of an EV after a charging event is a key contributor to low utilization. Since overstaying is easily detectable by monitoring the power drawn from the charger, managing this problem primarily involves designing an appropriate "penalty" during the overstaying period. Higher penalties do discourage overstaying; however, due to uncertainty in parking duration, fewer people would find such penalties acceptable, leading to decreased utilization (and revenue). To analyze this central trade-off, we develop a novel framework that integrates models for realistic user behavior into queueing dynamics to locate the optimal penalty from the points of view of utilization and revenue, for different values of the external charging demand. Next, when the model parameters are unknown, we show how an online learning algorithm, such as UCB, can be adapted to learn the optimal penalty. Our experimental validation, based on charging data from London, shows that an appropriate penalty can increase both utilization and revenue while significantly reducing overstaying. | Dynamic pricing of parking has been an area of active study in the transportation literature. In @cite_16 @cite_5 , the authors present dynamic pricing schemes for regular parking based on estimated demand to reduce both congestion and underuse. On the other hand, significant work has also been done on dynamic pricing of electric vehicle charging. In @cite_12 , the authors model the problem as a Stackelberg game between the smart grid as the leader setting the price, and the vehicle owner as the follower deciding their charging strategies. In @cite_17 @cite_4 @cite_13 , dynamic pricing of electricity for charging EVs is proposed based on the usage data in a given location and time. 
None of the above work, however, considers the problem of overstaying EVs. To the best of our knowledge, this is the first paper to investigate designing penalty schemes for park-and-charge facilities to combat overstaying EVs. | {
"cite_N": [
"@cite_4",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"1532975880",
"1501139254",
"1967761239",
"1735563731",
"2065343413",
"1588879559"
],
"abstract": [
"An expert system manages a power grid wherein charging stations are connected to the power grid, with electric vehicles connected to the charging stations, whereby the expert system selectively backfills power from connected electric vehicles to the power grid through a grid tie inverter (if present) within the charging stations. In more traditional usage, the expert system allows for electric vehicle charging, coupled with user preferences as to charge time, charge cost, and charging station capabilities, without exceeding the power grid capacity at any point. A robust yet accurate state of charge (SOC) calculation method is also presented, whereby initially an open circuit voltage (OCV) based on sampled battery voltages and currents is calculated, and then the SOC is obtained based on a mapping between a previously measured reference OCV (ROCV) and SOC. The OCV-SOC calculation method accommodates likely any battery type with any current profile.",
"A parking locator system providing variably priced parking includes one or more parking sensors and one or more parking kiosks. The parking sensors may generate parking information identifying one or more occupied parking spaces and one or more unoccupied parking spaces within a vicinity. Promotion information including one or more discounts to parking fees may be received at the parking kiosks. The parking kiosks may display the discounted parking fees and receive payment of the same. The discounts to parking fees may expire at particular times. The size of the discounts may be set based on traffic congestion, which may be determined based on the number of occupied parking spaces within the vicinity.",
"On-street parking, just as any publicly owned utility, is used inefficiently if access is free or priced very far from market rates. This paper introduces a novel demand management solution: using data from dedicated occupancy sensors an iteration scheme updates parking rates to better match demand. The new rates encourage parkers to avoid peak hours and peak locations and reduce congestion and underuse. The solution is deliberately simple so that it is easy to understand, easily seen to be fair and leads to parking policies that are easy to remember and act upon. We study the convergence properties of the iteration scheme and prove that it converges to a reasonable distribution for a very large class of models. The algorithm is in use to change parking rates in over 6000 spaces in downtown Los Angeles since June 2012 as part of the LA Express Park project. Initial results are encouraging with a reduction of congestion and underuse, while in more locations rates were decreased than increased.",
"A computer implemented method, apparatus, and computer program product for automatically managing incentives associated with an electric vehicle charging transaction is provided. Incentives are received from a set of sources to form received incentives, by an incentive service. Applicable incentives are selected from the received incentives based on an identification of an electric vehicle, a charging station, and a set of principals associated with the electric vehicle charging transaction, by the incentive service. A set of selected incentives is identified from the received incentives for utilization in the electric vehicle charging transaction. The set of selected incentives is sent to an energy transaction planner, wherein the energy transaction planner incorporates the set of selected incentives into an energy transaction plan that is used to control the electric vehicle charging transaction.",
"In this paper, the problem of grid-to-vehicle energy exchange between a smart grid and plug-in electric vehicle groups (PEVGs) is studied using a noncooperative Stackelberg game. In this game, on the one hand, the smart grid, which acts as a leader, needs to decide on its price so as to optimize its revenue while ensuring the PEVGs' participation. On the other hand, the PEVGs, which act as followers, need to decide on their charging strategies so as to optimize a tradeoff between the benefit from battery charging and the associated cost. Using variational inequalities, it is shown that the proposed game possesses a socially optimal Stackelberg equilibrium in which the grid optimizes its price while the PEVGs choose their equilibrium strategies. A distributed algorithm that enables the PEVGs and the smart grid to reach this equilibrium is proposed and assessed by extensive simulations. Further, the model is extended to a time-varying case that can incorporate and handle slowly varying environments.",
"Some example embodiments include a method for recharging a number of battery electric vehicles. The method include receiving (by a control module configured to control an electrical grid system that include a number of recharging stations that are configured to recharge the number of battery electric vehicles and from the number of battery electric vehicles) usage data that comprises a current charge level, a current location, and a planned itinerary that includes a destination. The method includes determining anticipated electrical loads in the number of sectors of the electrical grid system based on the usage data of the number of battery electric vehicles. The method also includes redistributing the electrical supply on the electrical grid system to at least one recharging station of the number of recharging stations based on the anticipated electrical loads, prior to actual usage defined by the usage data by the number of battery electrical vehicles."
]
} |
1604.05471 | 2339598427 | With the increase in adoption of Electric Vehicles (EVs), proper utilization of the charging infrastructure is an emerging challenge for service providers. Overstaying of an EV after a charging event is a key contributor to low utilization. Since overstaying is easily detectable by monitoring the power drawn from the charger, managing this problem primarily involves designing an appropriate "penalty" during the overstaying period. Higher penalties do discourage overstaying; however, due to uncertainty in parking duration, fewer people would find such penalties acceptable, leading to decreased utilization (and revenue). To analyze this central trade-off, we develop a novel framework that integrates models for realistic user behavior into queueing dynamics to locate the optimal penalty from the points of view of utilization and revenue, for different values of the external charging demand. Next, when the model parameters are unknown, we show how an online learning algorithm, such as UCB, can be adapted to learn the optimal penalty. Our experimental validation, based on charging data from London, shows that an appropriate penalty can increase both utilization and revenue while significantly reducing overstaying. | To address the situation where the probability distributions required for modelling user behaviour remain unavailable to the system until the users are actually observed, and the user behaviour needs to be learned over time, we model our problem as a Multi-Armed Bandit (MAB) problem, which has been well studied @cite_6 @cite_3 @cite_10 @cite_8 and applied to a wide variety of domains such as crowdsourcing, online advertising, dynamic pricing, and smart grids. However, we are not aware of any application of MABs to the park-and-charge scenario. | {
"cite_N": [
"@cite_3",
"@cite_10",
"@cite_6",
"@cite_8"
],
"mid": [
"2168405694",
"",
"2950929549",
"2000080679"
],
"abstract": [
"Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.",
"",
"Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off. This is the balance between staying with the option that gave highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the Thirties, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing. Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this survey, we focus on two extreme cases in which the analysis of regret is particularly simple and elegant: i.i.d. payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, we also analyze some of the most important variants and extensions, such as the contextual bandit model.",
"We consider a non-Bayesian infinite horizon version of the multi-armed bandit problem with the objective of designing simple policies whose regret increases slowly with time. In their seminal work on this problem, Lai and Robbins had obtained an O(log n) lower bound on the regret with a constant that depends on the Kullback-Leibler number. They also constructed policies for some specific families of probability distributions (including exponential families) that achieved the lower bound. In this paper we construct index policies that depend on the rewards from each arm only through their sample mean. These policies are computationally much simpler and are also applicable much more generally. They achieve an O(log n) regret with a constant that is also based on the Kullback-Leibler number. This constant turns out to be optimal for one-parameter exponential families; however, in general it is derived from the optimal one via a 'contraction' principle. Our results rely entirely on a few key lemmas from the theory of large deviations."
]
} |
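The UCB algorithm referenced in the abstract and the MAB paragraph above can be sketched as generic UCB1 over a finite set of candidate penalty levels. This is not the paper's implementation; the arms and the reward function are illustrative:

```python
import math

def ucb1(arms, rounds, pull):
    """UCB1: pull(arm) returns a reward in [0, 1]; each round we play the
    arm maximizing empirical mean + exploration bonus sqrt(2 ln t / n_j)."""
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    for i, arm in enumerate(arms):          # initialize: play each arm once
        sums[i] += pull(arm)
        counts[i] = 1
    for t in range(len(arms), rounds):
        i = max(range(len(arms)),
                key=lambda j: sums[j] / counts[j]
                + math.sqrt(2 * math.log(t + 1) / counts[j]))
        sums[i] += pull(arms[i])
        counts[i] += 1
    # report the arm with the best empirical mean
    return arms[max(range(len(arms)), key=lambda j: sums[j] / counts[j])]
```

In the park-and-charge setting, each arm would be a candidate penalty rate and the reward a normalized measure of revenue or utilization observed under that penalty.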
1604.05499 | 2341695656 | Many natural language processing (NLP) tasks can be generalized into a segmentation problem. In this paper, we combine semi-CRF with neural networks to solve NLP segmentation tasks. Our model represents a segment both by composing the input units and by embedding the entire segment. We thoroughly study different composition functions and different segment embeddings. We conduct extensive experiments on two typical segmentation tasks: named entity recognition (NER) and Chinese word segmentation (CWS). Experimental results show that our neural semi-CRF model benefits from representing the entire segment and achieves state-of-the-art performance on the CWS benchmark dataset and competitive results on the CoNLL03 dataset. | Domain-specific knowledge like capitalization has proved effective in named entity recognition @cite_13 . Segment-level abstractions, such as whether the segment matches a lexicon entry, also lead to performance improvements @cite_5 . To keep our model simple, we did not employ such features in our NER experiments. However, our model can easily incorporate such features, and the NER performance could likely be further improved. | {
"cite_N": [
"@cite_5",
"@cite_13"
],
"mid": [
"2158899491",
"2004763266"
],
"abstract": [
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset."
]
} |
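The capitalization and lexicon-match features discussed in the related-work paragraph above can be made concrete with a small sketch; the feature names and the lexicon are hypothetical, not taken from the cited systems:

```python
def segment_features(tokens, start, end, lexicon):
    """Segment-level NER features for the span tokens[start:end]:
    capitalization patterns plus a lexicon (gazetteer) match."""
    seg = tokens[start:end]
    return {
        "all_caps": all(t.isupper() for t in seg),
        "init_caps": all(t[:1].isupper() for t in seg),
        "lexicon_match": " ".join(seg).lower() in lexicon,
    }
```

Such features score a whole candidate segment at once, which is exactly what a semi-CRF (unlike a token-level CRF) can exploit.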
1604.05243 | 2356150345 | We revisit the problem of designing strategyproof mechanisms for allocating divisible items among two agents who have linear utilities, where payments are disallowed and there is no prior information on the agents' preferences. The objective is to design strategyproof mechanisms which are competitive against the most efficient (but not strategyproof) mechanism. For the case with two items: (1) We provide a set of sufficient conditions for strategyproofness. (2) We use an analytic approach to derive strategyproof mechanisms which are more competitive than all prior strategyproof mechanisms. (3) We improve the linear-program-based proof of Guo and Conitzer to show new upper bounds on competitive ratios. (4) We provide the first "mathematical" upper bound proof. For the cases with any number of items: (1) We build on the Partial Allocation mechanisms introduced by Cole, Gkatzelis, and Goel to design a strategyproof mechanism which is 0.67776-competitive, breaking the 2/3 barrier. (2) We propose a new subclass of strategyproof mechanisms called Dynamical-Increasing-Price mechanisms, where each agent purchases the items using virtual money, and the prices of the items depend on other agents' preferences. | Guo and Conitzer @cite_4 considered a sub-class of SP mechanisms called Swap-Dictatorial mechanisms, in which each agent is a dictator with probability @math , who chooses her favourite allocation from a predefined set of allowable allocations, and the other agent is allocated the remaining items. Swap-Dictatorial mechanisms can be generalized to more than two agents, by first generating a random order of the agents, and then having each agent take her turn to choose her favourite allocation. They studied two sub-classes of Swap-Dictatorial mechanisms, Increasing-Price (IP) mechanisms and Linear-Increasing-Price (LIP) mechanisms. 
For the case with two items, they showed that there is a LIP mechanism which is @math -competitive against the first-best mechanism; they used a linear program to show that no SP mechanism can be better than @math -competitive. They also showed that as the number of items goes to infinity, IP and LIP mechanisms have maximal competitiveness of @math . | {
"cite_N": [
"@cite_4"
],
"mid": [
"49938292"
],
"abstract": [
"We investigate the problem of allocating items (private goods) among competing agents in a setting that is both prior-free and payment-free. Specifically, we focus on allocating multiple heterogeneous items between two agents with additive valuation functions. Our objective is to design strategy-proof mechanisms that are competitive against the most efficient (first-best) allocation. We introduce the family of linear increasing-price (LIP) mechanisms. The LIP mechanisms are strategy-proof, prior-free, and payment-free, and they are exactly the increasing-price mechanisms satisfying a strong responsiveness property. We show how to solve for competitive mechanisms within the LIP family. For the case of two items, we find a LIP mechanism whose competitive ratio is near optimal (the achieved competitive ratio is 0.828, while any strategy-proof mechanism is at most 0.841-competitive). As the number of items goes to infinity, we prove a negative result that any increasing-price mechanism (linear or nonlinear) has a maximal competitive ratio of 0.5. Our results imply that in some cases, it is possible to design good allocation mechanisms without payments and without priors."
]
} |
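The swap-dictatorial scheme attributed to Guo and Conitzer above can be sketched as follows for two agents. The allowable-allocation set and the valuations are illustrative, and `coin` stands for the fair coin that selects the dictator; this is not the cited paper's code:

```python
import random

def swap_dictatorial(v1, v2, allowable, coin=random.random):
    """Two-agent swap-dictatorial mechanism over divisible items: a fair
    coin picks a dictator, who takes her favourite allocation (by additive
    utility) from the predefined allowable set; the other agent receives
    the remaining share of every item."""
    util = lambda v, x: sum(vi * xi for vi, xi in zip(v, x))
    if coin() < 0.5:                                   # agent 1 dictates
        x = max(allowable, key=lambda x: util(v1, x))
        return x, [1 - xi for xi in x]
    x = max(allowable, key=lambda x: util(v2, x))      # agent 2 dictates
    return [1 - xi for xi in x], x
```

Increasing-price mechanisms refine this scheme by restricting the allowable set to the allocations the dictator can afford under an increasing price curve.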
1604.05243 | 2356150345 | We revisit the problem of designing strategyproof mechanisms for allocating divisible items among two agents who have linear utilities, where payments are disallowed and there is no prior information on the agents' preferences. The objective is to design strategyproof mechanisms which are competitive against the most efficient (but not strategyproof) mechanism. For the case with two items: (1) We provide a set of sufficient conditions for strategyproofness. (2) We use an analytic approach to derive strategyproof mechanisms which are more competitive than all prior strategyproof mechanisms. (3) We improve the linear-program-based proof of Guo and Conitzer to show new upper bounds on competitive ratios. (4) We provide the first "mathematical" upper bound proof. For the cases with any number of items: (1) We build on the Partial Allocation mechanisms introduced by Cole, Gkatzelis, and Goel to design a strategyproof mechanism which is 0.67776-competitive, breaking the 2/3 barrier. (2) We propose a new subclass of strategyproof mechanisms called Dynamical-Increasing-Price mechanisms, where each agent purchases the items using virtual money, and the prices of the items depend on other agents' preferences. | @cite_13 showed a number of upper bound results on the competitiveness of SP mechanisms, when the numbers of agents and/or items increase. In particular, they showed that no swap-dictatorial mechanism can be better than @math -competitive for @math items. In addition, they proved the following characterization result: in the case with two items, if a mechanism @math is symmetric and second-order continuously differentiable, then @math is SP if and only if @math is swap-dictatorial. | {
"cite_N": [
"@cite_13"
],
"mid": [
"176406899"
],
"abstract": [
"In this paper we study the problem of allocating divisible items to agents without payments. We assume no prior knowledge about the agents. The utility of an agent is additive. The social welfare of a mechanism is defined as the overall utility of all agents. This model is first defined by Guo and Conitzer [7]. Here we are interested in strategy-proof mechanisms that have a good competitive ratio, that is, those that are able to achieve social welfare close to the maximal social welfare in all cases. First, for the setting of n agents and m items, we prove that there is no (1/m+e)-competitive strategy-proof mechanism, for any e>0. And, no mechanism can achieve a competitive ratio better than @math , when @math . Next we study the setting of two agents and m items, which is also the focus of [7]. We prove that the competitive ratio of any swap-dictatorial mechanism is no greater than @math . Then we give a characterization result: for the case of 2 items, if the mechanism is strategy-proof, symmetric and second order continuously differentiable, then it is always swap-dictatorial. In the end we consider a setting where an agent's valuation of each item is bounded by C/m, where C is an arbitrary constant. We show a mechanism that is (1/2+e(C))-competitive, where e(C)>0."
]
} |
1604.05243 | 2356150345 | We revisit the problem of designing strategyproof mechanisms for allocating divisible items among two agents who have linear utilities, where payments are disallowed and there is no prior information on the agents' preferences. The objective is to design strategyproof mechanisms which are competitive against the most efficient (but not strategyproof) mechanism. For the case with two items: (1) We provide a set of sufficient conditions for strategyproofness. (2) We use an analytic approach to derive strategyproof mechanisms which are more competitive than all prior strategyproof mechanisms. (3) We improve the linear-program-based proof of Guo and Conitzer to show new upper bounds on competitive ratios. (4) We provide the first "mathematical" upper bound proof. For the cases with any number of items: (1) We build on the Partial Allocation mechanisms introduced by to design a strategyproof mechanism which is 0.67776-competitive, breaking the 2/3 barrier. (2) We propose a new subclass of strategyproof mechanisms called Dynamical-Increasing-Price mechanisms, where each agent purchases the items using virtual money, and the prices of the items depend on other agents' preferences. | Cole, Gkatzelis and Goel @cite_5 proposed another sub-class of SP mechanisms (for any number of agents) called (PA) mechanisms, which are not swap-dictatorial. They showed in another work @cite_6 that a variant of PA mechanism is @math -competitive for two agents and any number of items. | {
"cite_N": [
"@cite_5",
"@cite_6"
],
"mid": [
"2084685902",
"2107435"
],
"abstract": [
"We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option. For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1/e ≈ 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than 0.5 approximation, even for the restricted class of additive linear valuations. In addition to this, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design. We also ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech Republic in the early 90s we provide another mechanism for additive linear valuations that works really well when all the items are highly demanded.",
"Consider the problem of allocating multiple divisible goods to two agents in a strategy-proof fashion without the use of payments or priors. Previous work has aimed at implementing allocations that are competitive with respect to an appropriately defined measure of social welfare. These results have mostly been negative, proving that no dictatorial mechanism can achieve an approximation factor better than 0.5, and leaving open the question of whether there exists a non-dictatorial mechanism that outperforms this bound. We provide a positive answer to this question by presenting an interesting non-dictatorial mechanism that achieves an approximation factor of 2/3 for this measure of social welfare. In proving this bound we also touch on the issue of fairness: we show that the proportionally fair solution, a well known fairness concept for money-free settings, is highly competitive with respect to social welfare. We then show how to use the proportionally fair solution to design our non-dictatorial strategy-proof mechanism."
]
} |
1604.05001 | 2338770003 | This paper considers the joint fronthaul compression and transmit beamforming design for the uplink cloud radio access network (C-RAN), in which multi-antenna users communicate with a cloud-computing based centralized processor (CP) through multi-antenna base-stations (BSs) serving as relay nodes. A compress-and-forward relaying strategy, named the virtual multiple-access channel (VMAC) scheme, is employed, in which the BSs can either perform single-user compression or Wyner-Ziv coding to quantize the received signals and send the quantization bits to the CP via capacity-limited fronthaul links; the CP performs successive decoding with either successive interference cancellation (SIC) receiver or linear minimum-mean square-error (MMSE) receiver. Under this setup, this paper investigates the joint optimization of the transmit beamformers and the quantization noise covariance matrices for maximizing the network utility. A novel weighted minimum-mean-square-error successive convex approximation (WMMSE-SCA) algorithm is first proposed for maximizing the weighted sum rate under the user transmit power and fronthaul capacity constraints with single-user compression. Assuming a heuristic decompression order, the proposed algorithm is then adapted for optimizing the transmit beamforming and fronthaul compression under Wyner-Ziv coding. This paper also proposes a low-complexity separate design consisting of optimizing transmit beamformers for the Gaussian vector multiple-access channel along with per-antenna quantizers with uniform quantization noise levels across the antennas at each BS. Numerical results show that the majority of the performance gain brought by C-RAN comes from the implementation of SIC at the CP. Furthermore, the low-complexity separate design already performs very close to the optimized joint design in the regime of practical interest. | This paper studies the linear transceiver and fronthaul compression design in the VMAC scheme for the uplink multiple-input-multiple-output (MIMO) C-RAN model. As a generalization of @cite_2 which considers the SISO case only, this paper considers the MIMO case where both the users and the BSs are equipped with multiple antennas. The main difference between the SISO case and the MIMO case is the impact of transmitter optimization at the user terminals. For example, in the SISO case, to maximize the sum rate of the VMAC scheme for a single-cluster C-RAN system, all the users within the cluster need to transmit at their maximum powers (assuming that SIC receiver is implemented at the CP). However, in the MIMO case, the users are capable of doing transmit beamforming, so the optimal transmit beamforming design is more involved. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1970767514"
],
"abstract": [
"This paper studies the uplink of a cloud radio access network (C-RAN) where the cell sites are connected to a cloud-computing-based central processor (CP) with noiseless backhaul links with finite capacities. We employ a simple compress-and-forward scheme in which the base stations (BSs) quantize the received signals and send the quantized signals to the CP using either distributed Wyner-Ziv coding or single-user compression. The CP first decodes the quantization codewords and then decodes the user messages as if the remote users and the cloud center form a virtual multiple-access channel (VMAC). This paper formulates the problem of optimizing the quantization noise levels for weighted sum rate maximization under a sum backhaul capacity constraint. We propose an alternating convex optimization approach to find a local optimum solution to the problem efficiently, and more importantly, to establish that setting the quantization noise levels to be proportional to the background noise levels is near optimal for sum-rate maximization when the signal-to-quantization-noise-ratio (SQNR) is high. In addition, with Wyner-Ziv coding, the approximate quantization noise level is shown to achieve the sum-capacity of the uplink C-RAN model to within a constant gap. With single-user compression, a similar constant-gap result is obtained under a diagonal dominant channel condition. These results lead to an efficient algorithm for allocating the backhaul capacities in C-RAN. The performance of the proposed scheme is evaluated for practical multicell and heterogeneous networks. It is shown that multicell processing with optimized quantization noise levels across the BSs can significantly improve the performance of wireless cellular networks."
]
} |
1604.05001 | 2338770003 | This paper considers the joint fronthaul compression and transmit beamforming design for the uplink cloud radio access network (C-RAN), in which multi-antenna users communicate with a cloud-computing based centralized processor (CP) through multi-antenna base-stations (BSs) serving as relay nodes. A compress-and-forward relaying strategy, named the virtual multiple-access channel (VMAC) scheme, is employed, in which the BSs can either perform single-user compression or Wyner-Ziv coding to quantize the received signals and send the quantization bits to the CP via capacity-limited fronthaul links; the CP performs successive decoding with either successive interference cancellation (SIC) receiver or linear minimum-mean square-error (MMSE) receiver. Under this setup, this paper investigates the joint optimization of the transmit beamformers and the quantization noise covariance matrices for maximizing the network utility. A novel weighted minimum-mean-square-error successive convex approximation (WMMSE-SCA) algorithm is first proposed for maximizing the weighted sum rate under the user transmit power and fronthaul capacity constraints with single-user compression. Assuming a heuristic decompression order, the proposed algorithm is then adapted for optimizing the transmit beamforming and fronthaul compression under Wyner-Ziv coding. This paper also proposes a low-complexity separate design consisting of optimizing transmit beamformers for the Gaussian vector multiple-access channel along with per-antenna quantizers with uniform quantization noise levels across the antennas at each BS. Numerical results show that the majority of the performance gain brought by C-RAN comes from the implementation of SIC at the CP. Furthermore, the low-complexity separate design already performs very close to the optimized joint design in the regime of practical interest. | From a broader perspective, this paper studies the radio resource allocation optimization for uplink fronthaul-constrained C-RAN @cite_33 . As related work, we mention @cite_24 which proposes to utilize the signal sparsity in C-RAN to improve the performance of the fronthaul compression and user detection. The fronthaul compression can also be designed for enhancing synchronization in C-RAN @cite_7 . Additionally, we mention briefly that the latency issue in the C-RAN design has been studied in @cite_4 , which introduces a delay-optimal fronthaul allocation strategy for the latency control. Finally, we point out that with proper modification the design idea of fronthaul-aware beamforming proposed in this paper can be extended to the more recently proposed heterogeneous C-RAN @cite_10 and Fog RAN @cite_32 architectures. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_32",
"@cite_24",
"@cite_10"
],
"mid": [
"1499871957",
"2019905317",
"2257709327",
"1788288079",
"1996587581",
"2132647318"
],
"abstract": [
"In this paper, we consider the delay-optimal fronthaul allocation problem for cloud radio access networks (CRANs). The stochastic optimization problem is formulated as an infinite horizon average cost Markov decision process. To deal with the curse of dimensionality, we derive a closed-form approximate priority function and the associated error bound using perturbation analysis. Based on the closed-form approximate priority function, we propose a low-complexity delay-optimal fronthaul allocation algorithm solving the per-stage optimization problem. The proposed solution is further shown to be asymptotically optimal for sufficiently small cross link path gains. Finally, the proposed fronthaul allocation algorithm is compared with various baselines through simulations, and it is shown that significant performance gain can be achieved.",
"As a promising paradigm for fifth generation wireless communication systems, cloud radio access networks (C-RANs) have been shown to reduce both capital and operating expenditures, as well as to provide high spectral efficiency (SE) and energy efficiency (EE). The fronthaul in such networks, defined as the transmission link between the baseband unit and the remote radio head, requires a high capacity, but is often constrained. This article comprehensively surveys recent advances in fronthaul-constrained CRANs, including system architectures and key techniques. Particularly, major issues relating to the impact of the constrained fronthaul on SE/EE and quality of service for users, including compression and quantization, large-scale coordinated processing and clustering, and resource allocation optimization, are discussed together with corresponding potential solutions. Open issues in terms of software-defined networking, network function virtualization, and partial centralization are also identified.",
"A key problem in the design of cloud radio access networks (CRANs) is that of devising effective baseband compression strategies for transmission on the fronthaul links connecting a remote radio head (RRH) to the managing central unit (CU). Most theoretical works on the subject implicitly assume that the RRHs, and hence the CU, are able to perfectly recover time synchronization from the baseband signals received in the uplink, and focus on the compression of the data fields. This paper instead does not assume a priori synchronization of RRHs and CU, and considers the problem of fronthaul compression design at the RRHs with the aim of enhancing the performance of time and phase synchronization at the CU. The problem is tackled by analyzing the impact of the synchronization error on the performance of the link and by adopting information- and estimation-theoretic performance metrics such as the rate-distortion function and the Cramer-Rao bound (CRB). The proposed algorithm is based on the Charnes-Cooper transformation and on the Difference of Convex (DC) approach, and is shown via numerical results to outperform conventional solutions.",
"An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified.",
"The cloud radio access network (C-RAN) is a promising network architecture for future mobile communications, and one practical hurdle for its large scale implementation is the stringent requirement of high capacity and low latency fronthaul connecting the distributed remote radio heads (RRH) to the centralized baseband pools (BBUs) in the C-RAN. To improve the scalability of C-RAN networks, it is very important to take the fronthaul loading into consideration in the signal detection, and it is very desirable to reduce the fronthaul loading in C-RAN systems. In this paper, we consider uplink C-RAN systems and we propose a distributed fronthaul compression scheme at the distributed RRHs and a joint recovery algorithm at the BBUs by deploying the techniques of distributed compressive sensing (CS). Different from conventional distributed CS, the CS problem in C-RAN system needs to incorporate the underlying effect of multi-access fading for the end-to-end recovery of the transmitted signals from the users. We analyze the performance of the proposed end-to-end signal recovery algorithm and we show that the aggregate measurement matrix in C-RAN systems, which contains both the distributed fronthaul compression and multiaccess fading, can still satisfy the restricted isometry property with high probability. Based on these results, we derive tradeoff results between the uplink capacity and the fronthaul loading in C-RAN systems.",
"To mitigate the severe inter-tier interference and enhance the limited cooperative gains resulting from the constrained and non-ideal transmissions between adjacent base stations in HetNets, H-CRANs are proposed as cost-efficient potential solutions through incorporating cloud computing into HetNets. In this article, state-of-the-art research achievements and challenges of H-CRANs are surveyed. In particular, we discuss issues of system architectures, spectral and energy efficiency performance, and promising key techniques. A great emphasis is given toward promising key techniques in H-CRANs to improve both spectral and energy efficiencies, including cloud-computing-based coordinated multipoint transmission and reception, large-scale cooperative multiple antenna, cloud-computing-based cooperative radio resource management, and cloud-computing-based self-organizing networks in cloud converging scenarios. The major challenges and open issues in terms of theoretical performance with stochastic geometry, fronthaul-constrained resource allocation, and standard development that may block the promotion of H-CRANs are discussed as well."
]
} |
1604.05151 | 2338757405 | At millimeter wave (mmW) frequencies, beamforming and large antenna arrays are an essential requirement to combat the high path loss for mmW communication. Moreover, at these frequencies, very large bandwidths are available to fulfill the data rate requirements of future wireless networks. However, utilization of these large bandwidths and of large antenna arrays can result in a high power consumption which is an even bigger concern for mmW receiver design. In a mmW receiver, the analog-to-digital converter (ADC) is generally considered as the most power consuming block. In this paper, primarily focusing on the ADC power, we analyze and compare the total power consumption of the complete analog chain for Analog, Digital and Hybrid beamforming (ABF, DBF and HBF) based receiver design. We show how power consumption of these beamforming schemes varies with a change in the number of antennas, the number of ADC bits (b) and the bandwidth (B). Moreover, we compare low power (as in [1]) and high power (as in [2]) ADC models, and show that for a certain range of number of antennas, b and B, DBF may actually have a comparable and lower power consumption than ABF and HBF, respectively. In addition, we also show how the choice of an appropriate beamforming scheme depends on the signal-to-noise ratio regime. | To the best of our knowledge, there is no previous work that compares the power consumption of ABF, HBF and DBF based receiver design for different values of the number of receive antennas @math , the number of ADC bits @math , and the bandwidth @math . In this paper, we provide a comprehensive comparison of total power consumption of different beamforming schemes while considering both high power @cite_13 and low power @cite_9 ADC models. Moreover, we discuss the relationship of the signal-to-noise ratio (SNR) with the ADC resolution and how it affects the choice of an appropriate beamforming scheme. | {
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"1892643793",
"1921359321"
],
"abstract": [
"The wide bandwidth and large number of antennas used in millimeter wave systems put a heavy burden on the power consumption at the receiver. In this paper, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance. Results demonstrate that: (i) For both analog and digital combining, there is a maximum bandwidth beyond which the achievable rate decreases; (ii) Depending on the operating regime of the system, analog combiner may have higher rate but digital combining uses less bandwidth when only ADC power consumption is considered, (iii) digital combining may have higher rate when power consumption of all the components in the receiver front-end are taken into account.",
"Precoding combining and large antenna arrays are essential in millimeter wave (mmWave) systems. In traditional MIMO systems, precoding combining is usually done digitally at baseband with one radio frequency (RF) chain and one analog-to-digital converter (ADC) per antenna. The high cost and power consumption of RF chains and ADCs at mmWave frequencies make an all-digital processing approach prohibitive. When only a limited number of RF chains is available, hybrid architectures that split the precoding combining processing into the analog and digital domains are attractive. A previously proposed hybrid solution employs phase shifters and mixers in the RF precoding combining stage. It obtains near optimal spectral efficiencies with a reduced number of RF channels. In this paper we propose a different hybrid architecture, which simplifies the hardware at the receiver by replacing the phase shifters with switches. We present a new approach for compressed sensing based channel estimation for the hybrid architectures. Given the channel estimate, we propose a novel algorithm that jointly designs the antenna subsets selected and the baseband combining. Using power consumption calculations and achievable rates, we compare the performance of hybrid combining with antenna switching and phase shifting, showing that antenna selection is preferred in a range of operating conditions."
]
} |