aid: string (lengths 9–15)
mid: string (lengths 7–10)
abstract: string (lengths 78–2.56k)
related_work: string (lengths 92–1.77k)
ref_abstract: dict
1609.01366
2512398210
We present a method for discovering and exploiting object specific deep learning features and use face detection as a case study. Motivated by the observation that certain convolutional channels of a Convolutional Neural Network (CNN) exhibit object specific responses, we seek to discover and exploit the convolutional channels of a CNN in which neurons are activated by the presence of specific objects in the input image. A method for explicitly fine-tuning a pre-trained CNN to induce an object specific channel (OSC) and systematically identifying it for the human face object has been developed. Based on the basic OSC features, we introduce a multi-resolution approach to constructing robust face heatmaps for fast face detection in unconstrained settings. We show that multi-resolution OSC can be used to develop state of the art face detectors which have the advantage of being simple and compact.
Face detection models in the literature can be divided into four categories: cascade-based models, Deformable Part Model (DPM)-based models, exemplar-based models and neural-network-based models. The most famous cascade-based model is the VJ detector @cite_26 based on Haar-like features, which has demonstrated excellent performance in frontal face detection. However, Haar-like features have limited representation ability for dealing with varied settings. Some works try to improve the VJ detector by using more complicated features such as SURF @cite_6 , HoG @cite_21 and polygonal Haar-like features @cite_30 . Aggregate channel features @cite_37 have also been introduced to solve multi-view face detection problems.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_21", "@cite_6" ], "mid": [ "2153461700", "2041497292", "2164598857", "2100807570", "2099355420" ], "abstract": [ "The integral image is typically used for fast integrating a function over a rectangular region in an image. We propose a method that extends the integral image to do fast integration over the interior of any polygon that is not necessarily rectilinear. The integration time of the method is fast, independent of the image resolution, and only linear to the polygon's number of vertices. We apply the method to Viola and Jones' object detection framework, in which we propose to improve classical Haar-like features with polygonal Haar-like features. We show that the extended feature set improves object detection's performance. The experiments are conducted in three domains: frontal face detection, fixed-pose hand detection, and rock detection for Mars' surface terrain assessment.", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. 
Following the learning pipelines in the Viola-Jones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on the AFW and FDDB test sets, while running at 42 FPS", "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.", "This paper presents a novel learning framework for training a boosting-cascade-based object detector from a large-scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single-dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds.
Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "We integrate the cascade-of-rejectors approach with the Histograms of Oriented Gradients (HoG) features to achieve a fast and accurate human detection system. The features used in our system are HoGs of variable-size blocks that capture salient features of humans automatically. Using AdaBoost for feature selection, we identify the appropriate set of blocks, from a large set of possible blocks. In our system, we use the integral image representation and a rejection cascade which significantly speed up the computation. For a 320 × 280 image, the system can process 5 to 30 frames per second depending on the density in which we scan the image, while maintaining an accuracy level similar to existing methods." ] }
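The integral-image trick behind Haar-like features, described in the VJ abstract above, can be sketched as follows (a minimal NumPy illustration, not the original implementation; function names are ours):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of img[:y, :x], so any rectangle sum
    afterwards needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) via the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """A two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

This constant-time rectangle sum is what lets a cascade evaluate thousands of such features per window at real-time speed.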
1609.01366
2512398210
We present a method for discovering and exploiting object specific deep learning features and use face detection as a case study. Motivated by the observation that certain convolutional channels of a Convolutional Neural Network (CNN) exhibit object specific responses, we seek to discover and exploit the convolutional channels of a CNN in which neurons are activated by the presence of specific objects in the input image. A method for explicitly fine-tuning a pre-trained CNN to induce an object specific channel (OSC) and systematically identifying it for the human face object has been developed. Based on the basic OSC features, we introduce a multi-resolution approach to constructing robust face heatmaps for fast face detection in unconstrained settings. We show that multi-resolution OSC can be used to develop state of the art face detectors which have the advantage of being simple and compact.
Another category is the DPM-based model @cite_10 , which treats a face as a collection of small parts. DPM-based models benefit from the fact that individual facial parts exhibit lower visual variation, so it is reasonable to build robust detectors by combining models trained for the different parts. For example, part-based structural models @cite_28 @cite_42 @cite_8 have achieved success in face detection, and a vanilla DPM can achieve top performance over more sophisticated DPM variants @cite_39 .
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_42", "@cite_39", "@cite_10" ], "mid": [ "2056025798", "2034025266", "2047508432", "", "2168356304" ], "abstract": [ "This paper solves the speed bottleneck of deformable part model (DPM), while maintaining the accuracy in detection on challenging datasets. Three prohibitive steps in cascade version of DPM are accelerated, including 2D correlation between root filter and feature map, cascade part pruning and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be calculated by more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low rank filter in a discriminative manner. For cascade part pruning, neighborhood aware cascade is proposed to capture the dependence in neighborhood regions for aggressive pruning. Instead of explicit computation of part scores, hypotheses can be pruned by scores of neighborhoods under the first order approximation. For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. Extensive experiments show that (a) the proposed method is 4 times faster than the current fastest DPM method with similar accuracy on Pascal VOC, (b) the proposed method achieves state-of-the-art accuracy on pedestrian and face detection task with frame-rate speed.", "Despite the successes in the last two decades, the state-of-the-art face detectors still have problems in dealing with images in the wild due to large appearance variations. Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part based structural model to explicitly capture them. 
The model enables part subtype options to handle local appearance variations, such as closed and open mouths, and part deformation to capture global appearance variations, such as pose and expression. In detection, a candidate window is fitted to the structural model to infer the part locations and part subtypes, and the detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost the face detection performance. We present a phrase-based representation for body detection, and propose a structural context model to jointly encode the outputs of the face detector and body detector. Benefiting from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, under wide comparisons with commercial and academic methods.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixture of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. 
Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
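The part-based scoring idea behind DPMs (a root-filter response plus, for each part, the best displaced part response after paying a quadratic deformation penalty) can be sketched in one dimension. The deformation weight, search window, and all names are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def dpm_score(root_resp, part_resps, anchors, defo=0.1):
    """Toy deformable-part score over 1-D positions.

    For each root location x, add the root response and, for every part,
    the best of resp[x + anchor + d] - defo * d^2 over a small window:
    parts may shift from their anchors but pay a quadratic penalty.
    """
    n = len(root_resp)
    scores = np.zeros(n)
    for x in range(n):
        s = root_resp[x]
        for resp, a in zip(part_resps, anchors):
            best = -np.inf
            for d in range(-2, 3):           # small displacement window
                p = x + a + d
                if 0 <= p < len(resp):
                    best = max(best, resp[p] - defo * d * d)
            if best > -np.inf:
                s += best                    # part contributes its best placement
        scores[x] = s
    return scores
```

The quadratic penalty is what lets each part drift toward locally lower-variation evidence while still being tied to the root hypothesis.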
1609.01366
2512398210
We present a method for discovering and exploiting object specific deep learning features and use face detection as a case study. Motivated by the observation that certain convolutional channels of a Convolutional Neural Network (CNN) exhibit object specific responses, we seek to discover and exploit the convolutional channels of a CNN in which neurons are activated by the presence of specific objects in the input image. A method for explicitly fine-tuning a pre-trained CNN to induce an object specific channel (OSC) and systematically identifying it for the human face object has been developed. Based on the basic OSC features, we introduce a multi-resolution approach to constructing robust face heatmaps for fast face detection in unconstrained settings. We show that multi-resolution OSC can be used to develop state of the art face detectors which have the advantage of being simple and compact.
Exemplar-based detectors @cite_3 @cite_4 @cite_9 bring image retrieval techniques into face detection to avoid explicitly modelling the different face variations found in unconstrained settings. Specifically, each exemplar casts a vote following the Bag-of-Words (BOW) @cite_22 retrieval framework to produce a voting map, and generalized Hough voting @cite_1 is then used to locate the faces in the input image. As a result, faces can be detected effectively in many challenging settings. However, a considerable number of exemplars is required to cover all kinds of variations.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_9", "@cite_1", "@cite_3" ], "mid": [ "1966822758", "2158102958", "2200108381", "2164877691", "2015268479" ], "abstract": [ "Despite the fact that face detection has been studied intensively over the past several decades, the problem is still not completely solved. Challenging conditions, such as extreme pose, lighting, and occlusion, have historically hampered traditional, model-based methods. In contrast, exemplar-based face detection has been shown to be effective, even under these challenging conditions, primarily because a large exemplar database is leveraged to cover all possible visual variations. However, relying heavily on a large exemplar database to deal with the face appearance variations makes the detector impractical due to the high space and time complexity. We construct an efficient boosted exemplar-based face detector which overcomes the defect of the previous work by being faster, more memory efficient, and more accurate. In our method, exemplars as weak detectors are discriminatively trained and selectively assembled in the boosting framework which largely reduces the number of required exemplars. Notably, we propose to include non-face images as negative exemplars to actively suppress false detections to further improve the detection accuracy. We verify our approach over two public face detection benchmarks and one personal photo album, and achieve significant improvement over the state-of-the-art algorithms in terms of both accuracy and efficiency.", "This paper presents a Bag of Visual Words (BoVW) based approach to retrieve similar word images from a large database, efficiently and accurately. We show that a text retrieval system can be adapted to build a word image retrieval solution. This helps in achieving scalability. We demonstrate the method on more than 1 Million word images with a sub-second retrieval time. 
We validate the method on four Indian languages, and report a mean average precision of more than 0.75. We represent the word images as histogram of visual words present in the image. Visual words are quantized representation of local regions, and for this work, SIFT descriptors at interest points are used as feature vectors. To address the lack of spatial structure in the BoVW representation, we re-rank the retrieved list. This significantly improves the performance.", "Recently, exemplar based approaches have been successfully applied for face detection in the wild. Contrary to traditional approaches that model face variations from a large and diverse set of training examples, exemplar-based approaches use a collection of discriminatively trained exemplars for detection. In this paradigm, each exemplar casts a vote using retrieval framework and generalized Hough voting, to locate the faces in the target image. The advantage of this approach is that by having a large database that covers all possible variations, faces in challenging conditions can be detected without having to learn explicit models for different variations. Current schemes, however, make an assumption of independence between the visual words, ignoring their relations in the process. They also ignore the spatial consistency of the visual words. Consequently, every exemplar word contributes equally during voting regardless of its location. In this paper, we propose a novel approach that incorporates higher order information in the voting process. We discover visual phrases that contain semantically related visual words and exploit them for detection along with the visual words. For spatial consistency, we estimate the spatial distribution of visual words and phrases from the entire database and then weigh their occurrence in exemplars. 
This ensures that a visual word or a phrase in an exemplar makes a major contribution only if it occurs at its semantic location, thereby suppressing the noise significantly. We perform extensive experiments on standard FDDB, AFW and G-album datasets and show significant improvement over previous exemplar approaches.", "We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both into a common probabilistic framework. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with a novel MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method significantly outperforms previously published methods while needing one order of magnitude less training examples. 
Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.", "Detecting faces in uncontrolled environments continues to be a challenge to traditional face detection methods due to the large variation in facial appearances, as well as occlusion and clutter. In order to overcome these challenges, we present a novel and robust exemplar-based face detector that integrates image retrieval and discriminative learning. A large database of faces with bounding rectangles and facial landmark locations is collected, and simple discriminative classifiers are learned from each of them. A voting-based method is then proposed to let these classifiers cast votes on the test image through an efficient image retrieval technique. As a result, faces can be very efficiently detected by selecting the modes from the voting maps, without resorting to exhaustive sliding window-style scanning. Moreover, due to the exemplar-based framework, our approach can detect faces under challenging conditions without explicitly modeling their variations. Evaluation on two public benchmark datasets shows that our new face detection approach is accurate and efficient, and achieves the state-of-the-art performance. We further propose to use image retrieval for face validation (in order to remove false positives) and for face alignment (landmark localization). The same methodology can also be easily generalized to other face-related tasks, such as attribute recognition, as well as general object detection." ] }
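The generalized Hough voting step used by the exemplar detectors above can be sketched as an accumulator array; the match tuples and weights below are hypothetical, standing in for BOW-matched visual words:

```python
import numpy as np

def hough_vote(matches, shape):
    """Accumulate exemplar votes into a heatmap.

    matches: list of ((x, y), (dx, dy), w) -- a feature found at (x, y)
    votes with weight w for a face center at (x + dx, y + dy).
    Modes of the resulting map are face-center hypotheses.
    """
    votes = np.zeros(shape)
    for (x, y), (dx, dy), w in matches:
        cx, cy = x + dx, y + dy
        if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
            votes[cy, cx] += w       # off-image votes are discarded
    return votes
```

Selecting the modes of this map replaces exhaustive sliding-window scanning, which is why exemplar detectors can be efficient despite their large databases.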
1609.01366
2512398210
We present a method for discovering and exploiting object specific deep learning features and use face detection as a case study. Motivated by the observation that certain convolutional channels of a Convolutional Neural Network (CNN) exhibit object specific responses, we seek to discover and exploit the convolutional channels of a CNN in which neurons are activated by the presence of specific objects in the input image. A method for explicitly fine-tuning a pre-trained CNN to induce an object specific channel (OSC) and systematically identifying it for the human face object has been developed. Based on the basic OSC features, we introduce a multi-resolution approach to constructing robust face heatmaps for fast face detection in unconstrained settings. We show that multi-resolution OSC can be used to develop state of the art face detectors which have the advantage of being simple and compact.
Neural-network-based detectors are usually built on deep convolutional neural networks. Faceness @cite_14 finds faces by scoring facial-part responses according to their spatial structure and arrangement, with a separate CNN for each facial part. A two-stage approach has also been proposed that combines multi-patch deep CNNs with deep metric learning @cite_31 . The CCF detector @cite_5 uses an integrated method called Convolutional Channel Features, transferring low-level features extracted from pre-trained CNN models to a boosting forest model. Cascade architectures based on CNNs @cite_32 have also been designed to reject background regions quickly at low resolution and evaluate candidate face regions carefully at high resolution. The DDFD detector @cite_35 uses a single model based on deep convolutional neural networks for multi-view face detection, and points out that CNNs can benefit from better sampling and more sophisticated data augmentation techniques.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_32", "@cite_5", "@cite_31" ], "mid": [ "1970456555", "2950557924", "1934410531", "345900524", "1703179648" ], "abstract": [ "In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks.", "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. 
Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. 
The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Deep learning methods are powerful tools but often suffer from expensive computation and limited flexibility. An alternative is to combine light-weight models with deep representations. As successful cases exist in several visual problems, a unified framework is absent. In this paper, we revisit two widely used approaches in computer vision, namely filtered channel features and Convolutional Neural Networks (CNN), and absorb merits from both by proposing an integrated method called Convolutional Channel Features (CCF). CCF transfers low-level features from pre-trained CNN models to feed the boosting forest model. With the combination of CNN features and boosting forest, CCF benefits from the richer capacity in feature representation compared with channel features, as well as lower cost in computation and storage compared with end-to-end CNN methods. We show that CCF serves as a good way of tailoring pre-trained CNN models to diverse tasks without fine-tuning the whole network to each task by achieving state-of-the-art performances in pedestrian detection, face detection, edge detection and object proposal generation.", "Face Recognition has been studied for many decades. As opposed to traditional hand-crafted features such as LBP and HOG, much more sophisticated features can be learned automatically by deep learning methods in a data-driven way. In this paper, we propose a two-stage approach that combines a multi-patch deep CNN and deep metric learning, which extracts low dimensional but very discriminative features for face verification and recognition. 
Experiments show that this method outperforms other state-of-the-art methods on the LFW dataset, achieving 99.77% pair-wise verification accuracy and significantly better accuracy under two other more practical protocols. This paper also discusses the importance of data size and the number of patches, showing a clear path to practical high-performance face recognition systems in the real world." ] }
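The coarse-to-fine rejection idea behind the CNN cascade described above can be sketched generically; here `score_fn` stands in for a stage network and the thresholds are illustrative, not values from the cited work:

```python
def cascade_detect(windows, stages):
    """Generic rejection cascade.

    stages: list of (score_fn, threshold) pairs, cheapest first.
    A window survives only if every stage scores it at or above its
    threshold; all() short-circuits, so cheap early stages discard
    most background before the costly later stages ever run.
    """
    survivors = []
    for w in windows:
        if all(score_fn(w) >= thr for score_fn, thr in stages):
            survivors.append(w)
    return survivors
```

In the CNN cascade, early stages additionally operate on downscaled inputs, so rejected windows never pay the high-resolution evaluation cost at all.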
1609.01366
2512398210
We present a method for discovering and exploiting object specific deep learning features and use face detection as a case study. Motivated by the observation that certain convolutional channels of a Convolutional Neural Network (CNN) exhibit object specific responses, we seek to discover and exploit the convolutional channels of a CNN in which neurons are activated by the presence of specific objects in the input image. A method for explicitly fine-tuning a pre-trained CNN to induce an object specific channel (OSC) and systematically identifying it for the human face object has been developed. Based on the basic OSC features, we introduce a multi-resolution approach to constructing robust face heatmaps for fast face detection in unconstrained settings. We show that multi-resolution OSC can be used to develop state of the art face detectors which have the advantage of being simple and compact.
We directly use an object specific channel (OSC) to produce a face response heatmap, which can be used to quickly locate potential face regions. The heatmap is similar to the voting map in exemplar-based approaches @cite_4 @cite_9 , but whereas the voting map is produced by the Bag-of-Words (BOW) @cite_22 retrieval framework, our heatmap is extracted directly from a convolutional channel.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_22" ], "mid": [ "2200108381", "1966822758", "2158102958" ], "abstract": [ "Recently, exemplar based approaches have been successfully applied for face detection in the wild. Contrary to traditional approaches that model face variations from a large and diverse set of training examples, exemplar-based approaches use a collection of discriminatively trained exemplars for detection. In this paradigm, each exemplar casts a vote using retrieval framework and generalized Hough voting, to locate the faces in the target image. The advantage of this approach is that by having a large database that covers all possible variations, faces in challenging conditions can be detected without having to learn explicit models for different variations. Current schemes, however, make an assumption of independence between the visual words, ignoring their relations in the process. They also ignore the spatial consistency of the visual words. Consequently, every exemplar word contributes equally during voting regardless of its location. In this paper, we propose a novel approach that incorporates higher order information in the voting process. We discover visual phrases that contain semantically related visual words and exploit them for detection along with the visual words. For spatial consistency, we estimate the spatial distribution of visual words and phrases from the entire database and then weigh their occurrence in exemplars. This ensures that a visual word or a phrase in an exemplar makes a major contribution only if it occurs at its semantic location, thereby suppressing the noise significantly. We perform extensive experiments on standard FDDB, AFW and G-album datasets and show significant improvement over previous exemplar approaches.", "Despite the fact that face detection has been studied intensively over the past several decades, the problem is still not completely solved. 
Challenging conditions, such as extreme pose, lighting, and occlusion, have historically hampered traditional, model-based methods. In contrast, exemplar-based face detection has been shown to be effective, even under these challenging conditions, primarily because a large exemplar database is leveraged to cover all possible visual variations. However, relying heavily on a large exemplar database to deal with the face appearance variations makes the detector impractical due to the high space and time complexity. We construct an efficient boosted exemplar-based face detector which overcomes the defect of the previous work by being faster, more memory efficient, and more accurate. In our method, exemplars as weak detectors are discriminatively trained and selectively assembled in the boosting framework which largely reduces the number of required exemplars. Notably, we propose to include non-face images as negative exemplars to actively suppress false detections to further improve the detection accuracy. We verify our approach over two public face detection benchmarks and one personal photo album, and achieve significant improvement over the state-of-the-art algorithms in terms of both accuracy and efficiency.", "This paper presents a Bag of Visual Words (BoVW) based approach to retrieve similar word images from a large database, efficiently and accurately. We show that a text retrieval system can be adapted to build a word image retrieval solution. This helps in achieving scalability. We demonstrate the method on more than 1 Million word images with a sub-second retrieval time. We validate the method on four Indian languages, and report a mean average precision of more than 0.75. We represent the word images as histogram of visual words present in the image. Visual words are quantized representation of local regions, and for this work, SIFT descriptors at interest points are used as feature vectors. 
To address the lack of spatial structure in the BoVW representation, we re-rank the retrieved list. This significantly improves the performance." ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Eulerian motion modeling has shown remarkable results for motion magnification @cite_18 , where a phase-based approach significantly improves quality @cite_4 and broadens the range of applications @cite_2 @cite_3 . Phase-based video interpolation is proposed in @cite_15 , and phase-based optical flow estimation in @cite_0 . Inspired by these works, we advocate the Eulerian model, as exemplified by phase, for learning motion representations.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_3", "@cite_0", "@cite_2", "@cite_15" ], "mid": [ "1998391547", "1990370049", "", "2165406874", "2028660383", "1905052409" ], "abstract": [ "Our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. Our method, which we call Eulerian Video Magnification, takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. Using our method, we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions. Our technique can run in real time to show phenomena occurring at the temporal frequencies selected by the user.", "We introduce a technique to manipulate small movements in videos based on an analysis of motion in complex-valued image pyramids. Phase variations of the coefficients of a complex-valued steerable pyramid over time correspond to motion, and can be temporally processed and amplified to reveal imperceptible motions, or attenuated to remove distracting changes. This processing does not involve the computation of optical flow, and in comparison to the previous Eulerian Video Magnification method it supports larger amplification factors and is significantly less sensitive to noise. These improved capabilities broaden the set of applications for motion processing in videos. We demonstrate the advantages of this approach on synthetic and natural video sequences, and explore applications in scientific analysis, visualization and video enhancement.", "", "We introduce a new technique for estimating the optical flow field, starting from image sequences. 
As suggested by Fleet and Jepson (1990), we track contours of constant phase over time, since these are more robust to variations in lighting conditions and deviations from pure translation than contours of constant amplitude. Our phase-based approach proceeds in three stages. First, the image sequence is spatially filtered using a bank of quadrature pairs of Gabor filters, and the temporal phase gradient is computed, yielding estimates of the velocity component in directions orthogonal to the filter pairs' orientations. Second, a component velocity is rejected if the corresponding filter pair's phase information is not linear over a given time span. Third, the remaining component velocities at a single spatial location are combined and a recurrent neural network is used to derive the full velocity. We test our approach on several image sequences, both synthetic and realistic.", "Video cameras offer the unique capability of collecting high density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors for structures because of their commonplace availability, simplicity, and potentially low cost. An issue is that video data is difficult to interpret into a format familiar to engineers such as displacement. A methodology called motion magnification has been developed for visualizing exaggerated versions of small displacements with an extension of the methodology to obtain the optical flow to measure displacements. In this paper, these methods are extended to modal identification in structures and the measurement of structural vibrations. Camera-based measurements of displacement are compared against laser vibrometer and accelerometer measurements for verification. The methodology is demonstrated on simple structures, a cantilever beam and a pipe, to identify and visualize the operational deflection shapes. 
Suggestions for applications of this methodology and challenges in real-world implementation are given.", "Standard approaches to computing interpolated (in-between) frames in a video sequence require accurate pixel correspondences between images e.g. using optical flow. We present an efficient alternative by leveraging recent developments in phase-based methods that represent motion in the phase shift of individual pixels. This concept allows in-between images to be generated by simple per-pixel phase modification, without the need for any form of explicit correspondence estimation. Up until now, such methods have been limited in the range of motion that can be interpolated, which fundamentally restricts their usefulness. In order to reduce these limitations, we introduce a novel, bounded phase shift correction method that combines phase information across the levels of a multi-scale pyramid. Additionally, we propose extensions for phase-based image synthesis that yield smoother transitions between the interpolated images. Our approach avoids expensive global optimization typical of optical flow methods, and is both simple to implement and easy to parallelize. This allows us to interpolate frames at a fraction of the computational cost of traditional optical flow-based solutions, while achieving similar quality and in some cases even superior results. Our method fails gracefully in difficult interpolation settings, e.g., significant appearance changes, where flow-based methods often introduce serious visual artifacts. Due to its efficiency, our method is especially well suited for frame interpolation and retiming of high resolution, high frame rate video." ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Optical flow-based motion features have been extensively employed for action recognition in works such as @cite_24 @cite_30 @cite_9 @cite_5 . These works use hand-crafted features extracted from the optical flow. Instead, we propose to feed phase-based motion measurements to a CNN to reap the benefits of deep representation learning.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_9", "@cite_30" ], "mid": [ "1996904744", "1944615693", "2131042978", "2175354415" ], "abstract": [ "Several recent works on action recognition have attested the importance of explicitly integrating motion characteristics in the video description. This paper establishes that adequately decomposing visual motion into dominant and residual motions, both in the extraction of the space-time trajectories and for the computation of descriptors, significantly improves action recognition algorithms. Then, we design a new motion descriptor, the DCS descriptor, based on differential motion scalar quantities, divergence, curl and shear features. It captures additional information on the local motion patterns enhancing results. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. Our three contributions are complementary and lead to outperform all reported results by a significant margin on three challenging datasets, namely Hollywood 2, HMDB51 and Olympic Sports.", "Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. 
The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMD-B51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.", "Action recognition in uncontrolled video is an important and challenging computer vision problem. Recent progress in this area is due to new local features and models that capture spatio-temporal structure between local features, or human-object interactions. Instead of working towards more complex models, we focus on the low-level features and their encoding. We evaluate the use of Fisher vectors as an alternative to bag-of-word histograms to aggregate a small set of state-of-the-art low-level descriptors, in combination with linear classifiers. We present a large and varied set of evaluations, considering (i) classification of short actions in five datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that for basic action recognition and localization MBH features alone are enough for state-of-the-art performance. For complex events we find that SIFT and MFCC features provide complementary cues. On all three problems we obtain state-of-the-art results, while using fewer features and less complex models.", "This paper is on action localization in video with the aid of spatio-temporal proposals. 
To alleviate the computational expensive segmentation step of existing proposals, we propose bypassing the segmentations completely by generating proposals directly from the dense trajectories used to represent videos during classification. Our Action localization Proposals from dense Trajectories (APT) use an efficient proposal generation algorithm to handle the high number of trajectories in a video. Our spatio-temporal proposals are faster than current methods and outperform the localization and classification accuracy of current proposals on the UCF Sports, UCF 101, and MSR-II video datasets. Corrected version: we fixed a mistake in our UCF-101 ground truth. Numbers are different; conclusions are unchanged" ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Using pre-trained networks is possible in the two-stream approaches proposed in @cite_22 @cite_16 @cite_28 , which combine a multi-frame optical flow stream with an appearance stream and obtain competitive results in practice; the appearance stream can employ a pre-trained network. We likewise combine appearance and motion in a two-stream fashion, but with innate phase information rather than hand-crafted optical flow.
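The simplest form of two-stream combination is late fusion: average the per-class posteriors of the appearance and motion streams. A minimal numpy sketch, with made-up logit values standing in for the outputs of the two network streams:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-class logits from the two streams for one clip.
appearance_logits = np.array([2.0, 0.5, -1.0])   # e.g. RGB stream
motion_logits     = np.array([0.2, 2.5, -0.5])   # e.g. phase or flow stream

# Late fusion: average the class posteriors of the two streams.
fused = 0.5 * (softmax(appearance_logits) + softmax(motion_logits))
print(int(np.argmax(fused)))  # 1: the motion evidence overrules appearance
```

Fusion at earlier convolutional layers, as in @cite_16 , trades this simplicity for joint spatio-temporal reasoning.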
{ "cite_N": [ "@cite_28", "@cite_16", "@cite_22" ], "mid": [ "2156303437", "2953084276", "2513482975" ], "abstract": [ "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. 
We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.", "The video and action classification have extremely evolved by deep neural networks specially with two stream CNN using RGB and optical flow as inputs and they present outstanding performance in terms of video analysis. One of the shortcoming of these methods is handling motion information extraction which is done out side of the CNNs and relatively time consuming also on GPUs. So proposing end-to-end methods which are exploring to learn motion representation, like 3D-CNN can achieve faster and accurate performance. We present some novel deep CNNs using 3D architecture to model actions and motion representation in an efficient way to be accurate and also as fast as real-time. Our new networks learn distinctive models to combine deep motion features into appearance model via learning optical flow features inside the network." ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Temporal frame ordering is exploited in @cite_31 , where the parameters of a ranking machine are used as a video description, while in @cite_14 @cite_7 @cite_20 recurrent neural networks are proposed for improving action recognition. In this paper we also model the temporal aspect, but add the benefit of a two-stream approach by separating appearance from phase variation over time.
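The ranking-machine idea of @cite_31 can be illustrated with a crude least-squares stand-in: fit a direction w whose projections order the frames by time, and use w itself as the video descriptor. The per-frame features below are synthetic; this is a sketch of the principle, not the paper's learning-to-rank formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame descriptors (T frames, D dims) whose appearance
# drifts over time, standing in for CNN or phase features.
T, D = 10, 4
t = np.arange(1, T + 1, dtype=float)
frames = np.outer(t, np.ones(D)) + rng.normal(size=(T, D))

# Least-squares stand-in for the ranking machine:
# find w with frames @ w ~ (1, ..., T); w encodes the temporal evolution.
w, *_ = np.linalg.lstsq(frames, t, rcond=None)
scores = frames @ w
print(np.corrcoef(scores, t)[0, 1])  # close to 1: w recovers the frame order
```

Because w summarises how appearance evolves across the whole clip, it is a fixed-length, video-wide representation regardless of clip length.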
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_20", "@cite_7" ], "mid": [ "1926645898", "2951183276", "2952453038", "2464235600" ], "abstract": [ "In this paper we present a method to capture video-wide temporal information for action recognition. We postulate that a function capable of ordering the frames of a video temporally (based on the appearance) captures well the evolution of the appearance within the video. We learn such ranking functions per video via a ranking machine and use the parameters of these as a new video representation. The proposed method is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We perform a large number of evaluations on datasets for generic action recognition (Hollywood2 and HMDB51), fine-grained actions (MPII- cooking activities) and gestures (Chalearn). Results show that the proposed method brings an absolute improvement of 7–10 , while being compatible with and complementary to further improvements in appearance and local motion based methods.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. 
Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. 
Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "We present a new architecture for end-to-end sequence learning of actions in video, we call VideoLSTM. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be used for action localization by relying on just the action class label. Experiments and comparisons on challenging datasets for action classification and localization support our claims." ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Predicting the future RGB frame from the current one is proposed in @cite_17 in the context of action prediction. Like that work, we start from an input appearance image and produce an output appearance image; in our case, however, the network learns the mapping from input phase information to future phase.
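One reason phase is a convenient prediction target: for a translating signal, the future phase is just the current phase advanced by the per-frame increment, so the mapping to be learned is locally simple. A toy 1-D illustration under that constant-velocity assumption (the actual learned mapping in the paper is not this extrapolation):

```python
import numpy as np

# Three frames of an analytic (complex) signal translating 0.4 samples/frame.
f = 8 / 64
x = np.arange(64)
frames = [np.exp(2j * np.pi * f * (x - d)) for d in (0.0, 0.4, 0.8)]

# Temporal phase increment between the first two frames.
dphi = np.angle(frames[1] * np.conj(frames[0]))

# Predict the next frame by advancing the phase while keeping the amplitude.
predicted = np.abs(frames[1]) * np.exp(1j * (np.angle(frames[1]) + dphi))

err = np.max(np.abs(predicted - frames[2]))
print(err < 1e-9)  # True: phase extrapolation reproduces the true next frame
```

A network predicting phase therefore only has to refine this near-linear behaviour where motion is not purely translational.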
{ "cite_N": [ "@cite_17" ], "mid": [ "1599058448" ], "abstract": [ "In many computer vision applications, machines will need to reason beyond the present, and predict the future. This task is challenging because it requires leveraging extensive commonsense knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently obtaining this knowledge is through the massive amounts of readily available unlabeled video. In this paper, we present a large scale framework that capitalizes on temporal structure in unlabeled video to learn to anticipate both actions and objects in the future. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. We experimentally validate this idea on two challenging \"in the wild\" video datasets, and our results suggest that learning with unlabeled videos significantly helps forecast actions and anticipate objects." ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Animating a static image by transferring the motion of an input video is related to artistic style transfer @cite_29 @cite_21 @cite_26 . Style transfer aims at changing an input image or video such that its artistic style matches that of a provided target image. Here, instead, we consider motion transfer: given an input image, transfer the phase-based motion of a video onto the image.
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_26" ], "mid": [ "2461230277", "1924619199", "2344328033" ], "abstract": [ "This note presents an extension to the neural artistic style transfer algorithm (). The original algorithm transforms an image to have the style of another given image. For example, a photograph can be transformed to have the style of a famous painting. Here we address a potential shortcoming of the original method: the algorithm transfers the colors of the original painting, which can alter the appearance of the scene in undesirable ways. We describe simple linear methods for transferring style while preserving colors.", "In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.", "In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handed was beyond imagination. Nowadays computers provide new possibilities. 
We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively." ] }
1609.01693
2949502065
This work advocates Eulerian motion representation learning over the current standard Lagrangian optical flow model. Eulerian motion is well captured by using phase, as obtained by decomposing the image through a complex-steerable pyramid. We discuss the gain of Eulerian motion in a set of practical use cases: (i) action recognition, (ii) motion prediction in static images, (iii) motion transfer in static images and, (iv) motion transfer in video. For each task we motivate the phase-based direction and provide a possible approach.
Additionally, we consider video-to-video transfer, where the style of performing a certain action is transferred from a target video to the input video. In @cite_19 the authors allow users to change a video by adding plausible object manipulations. Like that work, we aim to change the motion of a video after recording, by adjusting the style of the action being performed.
{ "cite_N": [ "@cite_19" ], "mid": [ "2084583166" ], "abstract": [ "We present algorithms for extracting an image-space representation of object structure from video and using it to synthesize physically plausible animations of objects responding to new, previously unseen forces. Our representation of structure is derived from an image-space analysis of modal object deformation: projections of an object's resonant modes are recovered from the temporal spectra of optical flow in a video, and used as a basis for the image-space simulation of object dynamics. We describe how to extract this basis from video, and show that it can be used to create physically-plausible animations of objects without any knowledge of scene geometry or material properties." ] }
1609.01273
2508375329
We consider the problem of embedding one i.i.d. collection of Bernoulli random variables indexed by @math into an independent copy in an injective @math -Lipschitz manner. For the case @math , it was shown by Basu and Sly (PTRF, 2014) to be possible almost surely for sufficiently large @math . In this paper we provide a multi-scale argument extending this result to higher dimensions.
In the early 1990s, Winkler introduced a fascinating class of dependent percolation problems, the so-called co-ordinate percolation problems, where the vertices are open or closed depending on variables on the co-ordinate axes. Long-range dependence makes these problems not amenable to the tools of Bernoulli percolation. It turns out that several natural questions about embedding one random sequence into another according to certain rules can be reformulated as problems in this class (see e.g. @cite_7 @cite_11 @cite_12 @cite_22 ). In particular, Grimmett, Liggett and Richthammer @cite_2 asked whether there exists a Lipschitz embedding of one Bernoulli sequence (indexed by @math ) into an independent copy. This question was recently answered affirmatively in @cite_16 (see also @cite_17 ). The problem we investigate in this paper is a natural generalisation of that question to higher dimensions.
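The 1-D notion underlying these questions admits a simple finite check: w M-embeds into v if there are indices with n_0 = 0, steps 1 <= n_i - n_{i-1} <= M, and w_i = v_{n_i}. A minimal dynamic-programming sketch over finite prefixes (the toy sequences are illustrative; the theorems above concern infinite sequences and almost-sure statements):

```python
def m_embeddable(w, v, M):
    """Check whether w = w_1..w_k M-embeds into the finite word v = v_0 v_1 ...:
    indices n_0 = 0, 1 <= n_i - n_{i-1} <= M, and w_i = v_{n_i} for all i."""
    reachable = {0}                      # positions n_{i-1} realised so far
    for symbol in w:
        nxt = set()
        for p in reachable:
            for step in range(1, M + 1):
                q = p + step
                if q < len(v) and v[q] == symbol:
                    nxt.add(q)
        if not nxt:                      # no admissible next position
            return False
        reachable = nxt
    return True

# Toy binary examples with M = 2 (v is indexed from n_0 = 0).
print(m_embeddable([1, 0, 1], [0, 1, 1, 0, 0, 1], 2))  # True
print(m_embeddable([1, 1, 1], [0, 1, 0, 0, 1, 0], 2))  # False: the 1s are too sparse
```

The multi-scale arguments cited above can be read as showing that, for i.i.d. inputs and M large enough, such a reachable set almost surely never dies out.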
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_17", "@cite_2", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2086906020", "2037520821", "2109948302", "1990978180", "1971206237", "1786941226", "1964535518" ], "abstract": [ "", "For a natural number k, define an oriented site percolation on ℤ2 as follows. Let x_i, y_j be independent random variables with values uniformly distributed in {1, …, k}. Declare a site (i, j) ∈ ℤ2 closed if x_i = y_j, and open otherwise. Peter Winkler conjectured some years ago that if k ≥ 4 then with positive probability there is an infinite oriented path starting at the origin, all of whose sites are open. I.e., there is an infinite path P = (i_0, j_0)(i_1, j_1) · · · such that 0 = i_0 ≤ i_1 ≤ · · ·, 0 = j_0 ≤ j_1 ≤ · · ·, and each site (i_n, j_n) is open. Rather surprisingly, this conjecture is still open: in fact, it is not known whether the conjecture holds for any value of k. In this note, we shall prove the weaker result that the corresponding assertion holds in the unoriented case: if k ≥ 4 then the probability that there is an infinite path that starts at the origin and consists only of open sites is positive. Furthermore, we shall show that our method can be applied to a wide variety of distributions of (x_i) and (y_j). Independently, Peter Winkler [14] has recently proved a variety of similar assertions by different methods.", "Let v, w be infinite 0-1 sequences, and m̃ a positive integer. We say that w is m̃-embeddable in v if there exists an increasing sequence {n_i : i ≥ 0} of integers with n_0 = 0, such that 1 ≤ n_i − n_{i−1} ≤ m̃ and w_i = v_{n_i} for all i ≥ 1. Let X and Y be coin-tossing sequences. We will show that there is an m̃ with the property that Y is m̃-embeddable into X with positive probability. This answers a question that was open for a while. The proof generalizes somewhat the hierarchical method of an earlier paper of the author on dependent percolation. © 2014 Wiley Periodicals, Inc. Random Struct. 
Alg., 47, 520-560, 2015", "We consider a type of long-range percolation problem on the positive integers, motivated by earlier work of others on the appearance of (in)finite words within a site percolation model. The main issue is whether a given infinite binary word appears within an iid Bernoulli sequence at locations that satisfy certain constraints. We settle the issue in some cases, and we provide partial results in others. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 2010", "We develop a new multi-scale framework flexible enough to solve a number of problems involving embedding random sequences into random sequences. Grimmett, Liggett and Richthammer (Random Struct. Algorithms 37(1):85–99, 2010) asked whether there exists an increasing M-Lipschitz embedding from one i.i.d. Bernoulli sequence into an independent copy with positive probability. We give a positive answer for large enough M. A closely related problem is to show that two independent Poisson processes on R are roughly isometric (or quasi-isometric). Our approach also applies in this case, answering a conjecture of Szegedy and of Peled (Ann Appl Probab 20:462–494, 2010). Our theorem also gives a new proof of Winkler's compatible sequences problem. Our approach does not explicitly depend on the particular geometry of the problems and we believe it will be applicable to a range of multi-scale and random embedding problems.", "A number of tricky problems in probability are discussed, having in common one or more infinite sequences of coin tosses, and a representation as a problem in dependent percolation. Three of these problems are of Winkler's type, that is, they ask about what can be achieved by a clairvoyant demon.", "A token located at some vertex @math of a connected, undirected graph G on n vertices is said to be taking a “random walk” on G if, whenever it is instructed to move, it moves with equal probability to any of the neighbors of @math . 
The authors consider the following problem: Suppose that two tokens are placed on G, and at each tick of the clock a certain demon decides which of them is to make the next move. The demon is trying to keep the tokens apart as long as possible. What is the expected time M before they meet? The problem arises in the study of self-stabilizing systems, a topic of recent interest in distributed computing. Since previous upper bounds for M were exponential in n, the issue was to obtain a polynomial bound. The authors use a novel potential function argument to show that in the worst case @math ." ] }
1609.01273
2508375329
We consider the problem of embedding one i.i.d. collection of Bernoulli random variables indexed by @math into an independent copy in an injective @math -Lipschitz manner. For the case @math , it was shown by Basu and Sly (PTRF, 2014) to be possible almost surely for sufficiently large @math . In this paper we provide a multi-scale argument extending this result to higher dimensions.
In a related direction, a series of works by Grimmett, Holroyd and their collaborators @cite_25 @cite_12 @cite_8 @cite_23 @cite_13 investigated a number of problems including when one can embed @math into site percolation in @math and showed that this was possible almost surely for @math when @math and the site percolation parameter was sufficiently large, but almost surely impossible for any @math when @math . Recently Holroyd and Martin @cite_5 showed that a comb can be embedded in @math . Another line of work in this area involves embedding words into higher dimensional percolation clusters @cite_9 @cite_3 @cite_0 @cite_6 .
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_23", "@cite_5", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2953245962", "", "", "", "2164348421", "", "2951102908", "2146896147", "", "1786941226" ], "abstract": [ "We prove several facts concerning Lipschitz percolation, including the following. The critical probability p_L for the existence of an open Lipschitz surface in site percolation on Z^d with d ≥ 2 satisfies the improved bound p_L ≤ 1 − 1/[8(d−1)]. Whenever p > p_L, the height of the lowest Lipschitz surface above the origin has an exponentially decaying tail. The lowest surface is dominated stochastically by the boundary of a union of certain independent, identically distributed random subsets of Z^d. As a consequence, for p sufficiently close to 1, the connected regions of Z^(d-1) above which the surface has height 2 or more exhibit stretched-exponential tail behaviour.", "", "", "", "We consider critical site percolation on the triangular lattice, that is, we choose @math or 1 with probability 1/2 each, independently for all vertices @math of the triangular lattice. We say that a word @math is seen in the percolation configuration if there exists a self-avoiding path @math on the triangular lattice with @math . We prove that with probability 1 "almost all" words, as well as all periodic words, except the two words @math and @math , are seen. "Almost all" words here means almost all with respect to the measure @math under which the @math are i.i.d. with @math (for an arbitrary @math ).", "", "There exists a Lipschitz embedding of a d-dimensional comb graph (consisting of infinitely many parallel copies of Z^(d-1) joined by a perpendicular copy) into the open set of site percolation on Z^d, whenever the parameter p is close enough to 1 or the Lipschitz constant is sufficiently large. 
This is proved using several new results and techniques involving stochastic domination, in contexts that include a process of independent overlapping intervals on Z, and first-passage percolation on general graphs.", "Does there exist a Lipschitz injection of ℤd into the open set of a site percolation process on ℤD, if the percolation parameter p is sufficiently close to 1? We prove a negative answer when d = D and also when d ≥ 2 if the Lipschitz constant M is required to be 1. Earlier work of Dirr, Dondl, Grimmett, Holroyd and Scheutzow yields a positive answer for d < D and M = 2. As a result, the above question is answered for all d, D and M. Our proof in the case d = D uses Tucker’s lemma from topological combinatorics, together with the aforementioned result for d < D. One application is an affirmative answer to a question of Peled concerning embeddings of random patterns in two and more dimensions.", "", "A number of tricky problems in probability are discussed, having in common one or more infinite sequences of coin tosses, and a representation as a problem in dependent percolation. Three of these problems are of Winkler's type, that is, they ask about what can be achieved by a clairvoyant demon." ] }
1609.01326
2517064749
Computer graphics can not only generate synthetic images and ground truth but it also offers the possibility of constructing virtual worlds in which: (i) an agent can perceive, navigate, and take actions guided by AI algorithms, (ii) properties of the worlds can be modified (e.g., material and reflectance), (iii) physical simulations can be performed, and (iv) algorithms can be learnt and evaluated. But creating realistic virtual worlds is not easy. The game industry, however, has spent a lot of effort creating 3D worlds, which a player can interact with. So researchers can build on these resources to create virtual worlds, provided we can access and modify the internal data structures of the games. To enable this we created an open-source plugin UnrealCV (this http URL) for a popular game engine Unreal Engine 4 (UE4). We show two applications: (i) a proof of concept image dataset, and (ii) linking Caffe with the virtual world to test deep network algorithms.
Virtual worlds have been widely used in robotics research and many robotics simulators have been built @cite_3 @cite_6 . But these focus more on physical accuracy than visual realism, which makes them less suitable for computer vision researchers. Unreal Engine 2 (UE2) was used for robotics simulation in USARSim @cite_14 , but UE2 is no longer available and USARSim is no longer actively maintained.
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_3" ], "mid": [ "2126791158", "2158782408", "2167340365" ], "abstract": [ "This paper presents USARSim, an open source high fidelity robot simulator that can be used both for research and education. USARSim offers many characteristics that differentiate it from most existing simulators. Most notably, it constitutes the simulation engine used to run the virtual robots competition within the Robocup initiative. We describe its general architecture, describe examples of utilization, and provide a comprehensive overview for those interested in robot simulations for education, research and competitions.", "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D humanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. 
It will soon be made publicly available.", "Simulators have played a critical role in robotics research as tools for quick and efficient testing of new concepts, strategies, and algorithms. To date, most simulators have been restricted to 2D worlds, and few have matured to the point where they are both highly capable and easily adaptable. Gazebo is designed to fill this niche by creating a 3D dynamic multi-robot environment capable of recreating the complex worlds that would be encountered by the next generation of mobile robots. Its open source status, fine grained control, and high fidelity place Gazebo in a unique position to become more than just a stepping stone between the drawing board and real hardware: data visualization, simulation of remote environments, and even reverse engineering of blackbox systems are all possible applications. Gazebo is developed in cooperation with the Player and Stage projects (Gerkey, B. P., et al, July 2003), (Gerkey, B. P., et al, May 2001), (Vaughan, R. T., et al, Oct. 2003), and is available from http: playerstage.sourceforge.net gazebo gazebo.html." ] }
1609.01326
2517064749
Computer graphics can not only generate synthetic images and ground truth but it also offers the possibility of constructing virtual worlds in which: (i) an agent can perceive, navigate, and take actions guided by AI algorithms, (ii) properties of the worlds can be modified (e.g., material and reflectance), (iii) physical simulations can be performed, and (iv) algorithms can be learnt and evaluated. But creating realistic virtual worlds is not easy. The game industry, however, has spent a lot of effort creating 3D worlds, which a player can interact with. So researchers can build on these resources to create virtual worlds, provided we can access and modify the internal data structures of the games. To enable this we created an open-source plugin UnrealCV (this http URL) for a popular game engine Unreal Engine 4 (UE4). We show two applications: (i) a proof of concept image dataset, and (ii) linking Caffe with the virtual world to test deep network algorithms.
Computer vision researchers have created large 3D repositories and virtual scenes @cite_16 @cite_17 @cite_8 @cite_13 . Note that these 3D resources can be used in the combined Unreal Engine and UnrealCV system.
{ "cite_N": [ "@cite_8", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2301880263", "2190691619", "2253156915", "2283234189" ], "abstract": [ "What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 65,000 object movements in 3D which represent a variety of external forces applied to different types of objects. Our experimental evaluations show that the challenging task of predicting long-term movements of objects as their reaction to external forces is possible from a single image. The code and dataset are available at: http: allenai.org plato forces.", "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. 
It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.", "We have created a dataset of more than ten thousand 3D scans of real objects. To create the dataset, we recruited 70 operators, equipped them with consumer-grade mobile 3D scanning setups, and paid them to scan objects in their environments. The operators scanned objects of their choosing, outside the laboratory and without direct supervision by computer vision professionals. The result is a large and diverse collection of object scans: from shoes, mugs, and toys to grand pianos, construction vehicles, and large outdoor sculptures. We worked with an attorney to ensure that data acquisition did not violate privacy constraints. The acquired data was placed irrevocably in the public domain and is available freely at http: redwood-data.org 3dscan.", "Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for enormous quantity of supervised data --- performance increases in proportion to the amount of data used. 
However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models we show comparable performance to state-of-the-art RGBD systems on NYUv2 dataset despite using only depth data as input and set a benchmark on depth-based segmentation on SUN RGB-D dataset. Additionally, we offer a route to generating synthesized frame or video data, and understanding of different factors influencing performance gains." ] }
1609.01326
2517064749
Computer graphics can not only generate synthetic images and ground truth but it also offers the possibility of constructing virtual worlds in which: (i) an agent can perceive, navigate, and take actions guided by AI algorithms, (ii) properties of the worlds can be modified (e.g., material and reflectance), (iii) physical simulations can be performed, and (iv) algorithms can be learnt and evaluated. But creating realistic virtual worlds is not easy. The game industry, however, has spent a lot of effort creating 3D worlds, which a player can interact with. So researchers can build on these resources to create virtual worlds, provided we can access and modify the internal data structures of the games. To enable this we created an open-source plugin UnrealCV (this http URL) for a popular game engine Unreal Engine 4 (UE4). We show two applications: (i) a proof of concept image dataset, and (ii) linking Caffe with the virtual world to test deep network algorithms.
Games and movies have already been used in computer vision research. An optical flow dataset was generated from the open source movie Sintel @cite_11 . TORCS, an open source racing game, was converted into a virtual world and used to train an autonomous driving system @cite_2 . City scenes were built @cite_0 @cite_4 using the Unity game engine to produce synthetic images. By contrast, UnrealCV extends the functions of Unreal Engine and provides a tool for creating virtual worlds, instead of generating a synthetic image/video dataset or producing a single virtual world.
{ "cite_N": [ "@cite_0", "@cite_2", "@cite_4", "@cite_11" ], "mid": [ "", "2119112357", "2949907962", "1513100184" ], "abstract": [ "", "Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.", "Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. 
We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.", "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. 
The data set, metrics, and evaluation website are publicly available." ] }
1609.01184
2952793629
We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available, which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have machine-type dependent sizes and have individual deadlines. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost. Since we observe the slack of jobs to have a fundamental influence on the competitiveness, we study the model when instances are parameterized by their (minimum) slack. An instance is said to have a slack of @math if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least @math . While for @math no finite competitiveness is possible, our main result is an @math -competitive online algorithm for @math with @math , where @math and @math denote the largest setup time and the cost ratio of the machine-types, respectively. It is complemented by a lower bound of @math .
A different, but closely related problem is that of machine minimization. In this problem, @math jobs with release times and deadlines are considered and the objective is to finish each job before its deadline while minimizing the number of machines used. This problem has been studied in online and offline settings, both in general and for several special cases. The first result is due to @cite_2 , where an offline algorithm with an approximation factor of @math is given. This was later improved to @math by @cite_13 . Better bounds have been achieved for special cases; if all jobs have a common release date or equal processing times, constant approximation factors are achieved @cite_10 . In the online case, a lower bound of @math and an algorithm matching this bound are given in @cite_14 . For jobs of equal size, an optimal @math -competitive algorithm is presented in @cite_6 .
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_6", "@cite_2", "@cite_10" ], "mid": [ "1596968903", "2267750869", "", "2022191808", "1974725056" ], "abstract": [ "The problem of scheduling jobs with interval constraints is a well-studied classical scheduling problem. The input to the problem is a collection of n jobs where each job has a set of intervals on which it can be scheduled. The goal is to minimize the total number of machines needed to schedule all jobs subject to these interval constraints. In the continuous version, the allowed intervals associated with a job form a continuous time segment, described by a release date and a deadline. In the discrete version of the problem, the set of allowed intervals for a job is given explicitly. So far, only an O(log n / log log n)-approximation is known for either version of the problem, obtained by a randomized rounding of a natural linear programming relaxation of the problem. In fact, we show here that this analysis is tight for both versions of the problem by providing a matching lower bound on the integrality gap of the linear program. Moreover, even when all jobs can be scheduled on a single machine, the discrete case has recently been shown to be Ω(log log n)-hard to approximate. In this paper, we provide improved approximation factors for the number of machines needed to schedule all jobs in the continuous version of the problem. Our main result is an O(1)-approximation algorithm when the optimal number of machines needed is bounded by a fixed constant. Thus, our results separate the approximability of the continuous and the discrete cases of the problem. For general instances, we strengthen the natural linear programming relaxation in a recursive manner by forbidding certain configurations which cannot arise in an integral feasible solution. This yields an O(OPT)-approximation, where OPT denotes the number of machines needed by an optimal solution. 
Combined with earlier results, our work implies an O(√(log n / log log n))-approximation for any value of OPT.", "We consider the problem of efficiently scheduling jobs on data centers to minimize the cost of renting machines from \"the cloud.\" In the most basic cloud service model, cloud providers offer computers on demand from large pools installed in data centers. Clients pay for use at an hourly rate. In order to minimize cost, each client needs to decide on the number of machines to be rented and the duration of renting each machine. This suggests the following optimization problem, which we call Rent Minimization. There is a set J = {j_1, j_2, ..., j_n} of n jobs. Job j_i is released at time r_i >= 0, has a deadline of d_i, and requires p_i > 0 contiguous processing time, r_i, d_i, p_i in R. The jobs need to be scheduled on identical parallel machines. Machines may be rented for any length of time; however, the cost of renting a machine for l >= 0 time units is ⌈l/D⌉ (the smallest integer >= l/D) dollars, for some given large real D; in particular, one pays $2 whether the machine is rented for D+1 or 2D time units. The goal is to schedule all the jobs in a way that minimizes the incurred rental cost. In this paper, we develop offline and online algorithms for the Rent Minimization problem. The algorithms achieve a constant factor approximation for the offline version and O(log(p_max / p_min)) for the online version, where p_max and p_min are the maximum and minimum processing times of the jobs, respectively. We also show that no deterministic online algorithm can achieve an approximation factor better than log_3(p_max / p_min) within a constant factor. Both of these algorithms use the well-studied problem of Machine Minimization as a subroutine. Machine Minimization is a special case of Rent Minimization where D = max_i d_i . 
In the process of solving the Rent Minimization problem, in this paper, we also develop the first online algorithm for Machine Minimization.", "We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.", "We investigate the scheduling problem with release dates and deadlines on a minimum number of machines. In the case of equal release dates, we present a 2-approximation algorithm. We also show that Greedy Best-Fit (GBF) is a 6-approximation algorithm for the case of equal processing times." ] }
1609.01184
2952793629
We consider a scheduling problem where machines need to be rented from the cloud in order to process jobs. There are two types of machines available, which can be rented for machine-type dependent prices and for arbitrary durations. However, a machine-type dependent setup time is required before a machine is available for processing. Jobs arrive online over time, have machine-type dependent sizes and have individual deadlines. The objective is to rent machines and schedule jobs so as to meet all deadlines while minimizing the rental cost. Since we observe the slack of jobs to have a fundamental influence on the competitiveness, we study the model when instances are parameterized by their (minimum) slack. An instance is said to have a slack of @math if, for all jobs, the difference between the job's release time and the latest point in time at which it needs to be started is at least @math . While for @math no finite competitiveness is possible, our main result is an @math -competitive online algorithm for @math with @math , where @math and @math denote the largest setup time and the cost ratio of the machine-types, respectively. It is complemented by a lower bound of @math .
A further area of research, which studies rental and leasing problems from an algorithmic perspective and which has recently gained attention, is that of leasing. Its focus is on infrastructure problems: while classically these problems are modeled such that resources are bought and then available for all time, in their leasing counterparts resources are assumed to be rented only for certain durations. In contrast to our model, in the leasing framework resources cannot be rented for arbitrary durations. Instead, there is a given number @math of different leases with individual costs and durations for which a resource can be leased. The leasing model was introduced by Meyerson @cite_11 , and leasing variants of problems such as set cover, Steiner tree, and facility location have been studied in @cite_4 @cite_7 @cite_9 afterward.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_7", "@cite_11" ], "mid": [ "2120620", "1824000689", "1522297331", "1518018875" ], "abstract": [ "In the leasing variant of Set Cover presented by [1], elements (U ) arrive over time and must be covered by sets from a family (F ) of subsets of (U ). Each set can be leased for (K ) different periods of time. Let ( | U | = n ) and ( | F | = m ). Leasing a set (S ) for a period (k ) incurs a cost (c_ S ^ k ) and allows (S ) to cover its elements for the next (l_k ) time steps. The objective is to minimize the total cost of the sets leased, such that elements arriving at any time (t ) are covered by sets which contain them and are leased during time (t ). [1] gave an optimal (O( n) )-approximation for the problem in the offline setting, unless ( P = NP ) [22]. In this paper, we give randomized algorithms for variants of Set Cover Leasing in the online setting, including a generalization of Online Set Cover with Repetitions presented by [2], where elements appear multiple times and must be covered by a different set at each arrival. Our results improve the ( O ( ^2 (mn)) ) competitive factor of Online Set Cover with Repetitions [2] to ( O ( d (dn)) = O ( m (mn)) ), where (d ) is the maximum number of sets an element belongs to.", "Consider the following Steiner Tree leasing problem. Given a graph G= (V,E) with root r, and a sequence of terminal sets D t i¾? Vfor each day ti¾? [T]. A feasible solution to the problem is a set of edges E t for each tconnecting D t to r. Instead of obtaining edges for a single day at a time, or for infinitely long (both of which give Steiner tree problems), we leaseedges for say, a day, a week, a month, a year . Naturally, leasing an edge for a longer period costs less per unit of time. What is a good leasing strategy? 
In this paper, we give a general approach to solving a wide class of such problems by showing a close connection between deterministic leasing problems and problems in multistage stochastic optimization. All our results are in the offline setting.", "We consider an online facility location problem where clients arrive over time and their demands have to be served by opening facilities and assigning the clients to opened facilities. When opening a facility we must choose one of K different lease types to use. A lease type k has a certain lease length l_k. Opening a facility i using lease type k causes a cost of @math and ensures that i is open for the next l_k time steps. In addition to costs for opening facilities, we have to take connection costs c_ij into account when assigning a client j to facility i. We develop and analyze the first online algorithm for this problem that has a time-independent competitive factor. This variant of the online facility location problem was introduced by [7] and is strongly related to both the online facility problem by [5] and the parking permit problem by [6]. Nagarajan and Williamson gave a 3-approximation algorithm for the offline problem and an O(K log n)-competitive algorithm for the online variant. Here, n denotes the total number of clients arriving over time. We extend their result by removing the dependency on n (and thereby on the time). In general, our algorithm is @math -competitive. Here @math denotes the maximum lease length. Moreover, we prove that it is @math -competitive for many \"natural\" cases. Such cases include, for example, situations where the number of clients arriving in each time step does not vary too much, or is non-increasing, or is polynomially bounded in @math .", "We consider online problems where purchases have time durations which expire regardless of whether the purchase is used or not. 
The parking permit problem is the natural analog of the well-studied ski rental problem in this model, and we provide matching upper and lower bounds on the competitive ratio for this problem. By extending the techniques thus developed, we give an online-competitive algorithm for the problem of renting steiner forest edges with time durations." ] }
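The leasing framework sketched in these records (K lease types, each with its own cost and duration) has a simple exact offline solution for a single resource that makes the trade-off concrete. The following is a minimal sketch, under the assumption of integer time steps and that a lease bought at time t covers the half-open interval [t, t + length); it is an illustration of the model, not any cited paper's algorithm:

```python
def min_lease_cost(demands, leases):
    """Cheapest way to cover every demand time with leases.

    demands: sorted list of integer time steps at which the resource is needed
    leases:  list of (cost, length) pairs -- the K lease types; a lease
             bought at time t covers the half-open interval [t, t + length)
    """
    n = len(demands)
    best = [0.0] * (n + 1)          # best[i] = min cost to cover demands[i:]
    for i in range(n - 1, -1, -1):
        best[i] = float("inf")
        for cost, length in leases:
            # Start this lease at the i-th demand; skip every demand it covers.
            j = i
            while j < n and demands[j] < demands[i] + length:
                j += 1
            best[i] = min(best[i], cost + best[j])
    return best[0]
```

With a "day" lease (cost 1, length 1) and a "week" lease (cost 3, length 7), four consecutive demand days are covered more cheaply by one week lease (cost 3) than by four day leases (cost 4), which is exactly the rent-vs-lease tension the online algorithms above must resolve without seeing the future.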
1609.01000
2516287206
We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve performance competitive with CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.
Another line of work is devoted to understanding the energy landscape of a neural network. Under certain assumptions on the data distribution, it can be shown that any local minimum of a two-layer fully connected neural network has an objective value that is close to the global minimum value @cite_12 @cite_22 . If this property holds, then gradient descent can find a solution that is "good enough". Similar results have also been established for over-specified neural networks @cite_28 , or neural networks that have a certain parallel topology @cite_16 . However, these results are not applicable to a CNN, since the underlying assumptions are not satisfied by CNNs.
{ "cite_N": [ "@cite_28", "@cite_16", "@cite_22", "@cite_12" ], "mid": [ "2243086562", "813605148", "2751811815", "2146989110" ], "abstract": [ "Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. In this work, we study the structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger (\"overspecified\") networks, which accords with some recent empirical and theoretical observations.", "Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields. However, common to a vast majority of these problems is the significant disadvantage that the associated optimization problems are typically non-convex due to a multilinear form or other convexity destroying transformation. Here we build on ideas from convex relaxations of matrix factorizations and present a very general framework which allows for the analysis of a wide range of non-convex factorization problems - including matrix factorization, tensor factorization, and deep neural network training formulations. 
We derive sufficient conditions to guarantee that a local minimum of the non-convex optimization problem is a global minimum and show that if the size of the factorized variables is large enough then from any initialization it is possible to find a global minimizer using a purely local descent algorithm. Our framework also provides a partial theoretical justification for the increasingly common use of Rectified Linear Units (ReLUs) in deep neural networks and offers guidance on deep network architectures and regularization strategies to facilitate efficient optimization.", "", "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance." ] }
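The convexification device named in the CCNN abstract — relax a low-rank parameter matrix to a convex problem — is standardly realized by replacing the rank constraint with a nuclear-norm penalty, whose proximal operator is singular-value soft-thresholding. The sketch below illustrates that generic device on a low-rank linear regression; it is not the authors' implementation, and the problem sizes and penalty are arbitrary choices:

```python
import numpy as np

def svt(A, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_regression(X, Y, tau=0.1, iters=500):
    """min_W 0.5 * ||X W - Y||_F^2 + tau * ||W||_*  via proximal gradient."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth part
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)             # gradient of the smooth term
        W = svt(W - step * grad, step * tau) # prox step handles the penalty
    return W
```

Because the objective is convex, the iteration converges to a global minimizer regardless of initialization — the property the abstract contrasts with non-convex CNN training.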
1609.01000
2516287206
We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve performance competitive with CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.
Past work has studied learning translation invariant features without backpropagation. Mairal et al. present convolutional kernel networks. They propose a translation-invariant kernel whose feature mapping can be approximated by a composition of the convolution, non-linearity and pooling operators, obtained through unsupervised learning. However, this method is not equipped with the optimality guarantees that we have provided for CCNNs in this paper, even for learning one convolution layer. The ScatNet method @cite_3 uses translation and deformation-invariant filters constructed by wavelet analysis; however, these filters are independent of the data, unlike the analysis in this paper. Daniely et al. show that a randomly initialized CNN can extract features as powerful as kernel methods, but it is not clear how to provably improve the model from a random initialization.
{ "cite_N": [ "@cite_3" ], "mid": [ "2072072671" ], "abstract": [ "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier." ] }
1609.01344
2513907049
User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a Virtual Reality environment that employs camera sensors to recognize human activities, we have to know when the user intends to perform an action and when not. Without a proper algorithm for recognizing engagement status, any kind of activity could be interpreted as a manipulating action, called the "Midas Touch" problem. The baseline approach for solving this problem is activating the gesture recognition system using focus gestures such as waving or raising a hand. However, a desirable natural user interface should be able to understand the user's mental status automatically. In this paper, a novel multi-modal model for engagement detection, DAIA, is presented. Using DAIA, the spectrum of mental status for performing an action is quantized into a finite number of engagement states. For this purpose, a Finite State Transducer (FST) is designed. This engagement framework shows how to integrate multi-modal information from user biometric data streams such as 2D and 3D imaging. The FST is employed to make state transitions smooth using a combination of several boolean expressions. Our FST true detection rate is 92.3% in total for four different states. Results also show the FST can segment user hand gestures more robustly.
Body posture gives important information about engagement. Various approaches have been investigated based on body language analysis to improve human computer interaction. Intention to engage with an agent, e.g. a robot @cite_12 @cite_14 or an interactive display @cite_21 , is one of these lines of study. Measuring the engagement intent is used in service robots to distinguish relevant gestures from irrelevant ones, which is known as the Midas Touch problem @cite_12 @cite_5 . Another research interest is learner engagement with robotic companions or interfaces @cite_13 . In addition, intention to engage with a display for improving user identification is addressed in @cite_21 .
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "", "2138207021", "2099840956", "2105410877", "2086012981" ], "abstract": [ "", "Vision-based interfaces, such as those made popular by the Microsoft Kinect, suffer from the Midas Touch problem: every user motion can be interpreted as an interaction. In response, we developed an algorithm that combines facial features, body pose and motion to approximate a user's intention to interact with the system. We show how this can be used to determine when to pay attention to a user's actions and when to ignore them. To demonstrate the value of our approach, we present results from a 30-person lab study conducted to compare four engagement algorithms in single and multi-user scenarios. We found that combining intention to interact with a 'raise an open hand in front of you' gesture yielded the best results. The latter approach offers a 12% improvement in accuracy and a 20% reduction in time to engage over a baseline 'wave to engage' gesture currently used on the Xbox 360.", "Even if a socially interactive robot has perfect information about the location, pose, and movement of humans in the environment, it is unclear how this information should be used to enable the initiation, maintenance, and termination of social interactions. We review models that have been developed to describe social engagement based on spatial relationships and describe a system developed for use on a robotic receptionist. The system uses spatial information from a laser tracker and head pose information from a camera to classify people in a categorical model of engagement. The robot's behaviors are determined by the presence of people in these different levels. We evaluate the system using observational behavioral analysis of recorded interactions between the robot and humans. 
This analysis suggests improvements to the current system: namely, to put a stronger emphasis on movement in the estimation of social engagement and to vary the timing of interactive behaviors", "The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario of the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real world scenario.", "Recognition of intentions is a subconscious cognitive process vital to human communication. This skill enables anticipation and increases the quality of interactions between humans. Within the context of engagement, non-verbal signals are used to communicate the intention of starting the interaction with a partner. In this paper, we investigated methods to detect these signals in order to allow a robot to know when it is about to be addressed. Originality of our approach resides in taking inspiration from social and cognitive sciences to perform our perception task. We investigate meaningful features, i.e. 
human readable features, and elicit which of these are important for recognizing someone's intention of starting an interaction. Classically, spatial information like the human position and speed, the human-robot distance are used to detect the engagement. Our approach integrates multimodal features gathered using a companion robot equipped with a Kinect. The evaluation on our corpus collected in spontaneous conditions highlights its robustness and validates the use of such a technique in a real environment. Experimental validation shows that the multimodal feature set gives better precision and recall than using only spatial and speed features. We also demonstrate that 7 selected features are sufficient to provide a good starting engagement detection score. In our last investigation, we show that among our full 99-feature set, the space reduction is not a solved task. This result opens new research perspectives on multimodal engagement detection. Multimodal approach for starting engagement detection using non-explicit cues. Results show that our approach performs better than the spatial one in all conditions. MRMR strategy reduces the feature space to 7 features without a performance loss. Validation of Schegloff (sociologist) meaningful features for engagement detection. A robot centered labeled corpus of 4 hours in a home-like environment." ] }
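The finite-state idea behind DAIA — quantize engagement into a few states whose transitions fire on boolean expressions over multimodal features — can be sketched as a small state machine. The states, predicates, and feature names below are illustrative placeholders, not the paper's actual expressions over biometric streams:

```python
# Hypothetical 4-state engagement machine in the spirit of DAIA.
TRANSITIONS = [
    ("idle",      lambda f: f["face_visible"],                        "attending"),
    ("attending", lambda f: f["gaze_on_screen"] and f["hand_raised"], "intending"),
    ("intending", lambda f: f["hand_moving"],                         "acting"),
    ("acting",    lambda f: not f["hand_moving"],                     "attending"),
    ("attending", lambda f: not f["face_visible"],                    "idle"),
]

def step(state, features):
    """Advance the engagement state on one frame of multimodal features."""
    for src, predicate, dst in TRANSITIONS:
        if src == state and predicate(features):
            return dst
    return state  # no transition fires: remain in the current state
```

Gestures would only be interpreted as manipulating actions while the machine is in the "acting" state, which is how such a design sidesteps the Midas Touch problem described in the abstract.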
1609.01344
2513907049
User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a Virtual Reality environment that employs camera sensors to recognize human activities, we have to know when the user intends to perform an action and when not. Without a proper algorithm for recognizing engagement status, any kind of activity could be interpreted as a manipulating action, called the "Midas Touch" problem. The baseline approach for solving this problem is activating the gesture recognition system using focus gestures such as waving or raising a hand. However, a desirable natural user interface should be able to understand the user's mental status automatically. In this paper, a novel multi-modal model for engagement detection, DAIA, is presented. Using DAIA, the spectrum of mental status for performing an action is quantized into a finite number of engagement states. For this purpose, a Finite State Transducer (FST) is designed. This engagement framework shows how to integrate multi-modal information from user biometric data streams such as 2D and 3D imaging. The FST is employed to make state transitions smooth using a combination of several boolean expressions. Our FST true detection rate is 92.3% in total for four different states. Results also show the FST can segment user hand gestures more robustly.
A variety of studies strives for a multi-modal approach, using features of facial expression, body motion, voice, or seat pressure to elucidate mental states. @cite_16 discusses gaze and upper body posture for engagement detection. Schwarz et al. @cite_21 used a combination of gaze, upper body and arm position for intention detection in engagement. Vaufreydaz et al. @cite_12 used gaze and proxemics, and Salam et al. @cite_0 employed human state observation for engagement detection. @cite_23 used gaze and facial expression, and @cite_22 employed weight, head, and upper body motion; @cite_2 and Dael et al. @cite_6 discuss voice, face, and posture. Using a Finite State Machine (FSM) for multi-modal system modeling is addressed in several studies @cite_15 , @cite_24 , @cite_19 , @cite_17 .
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_6", "@cite_0", "@cite_24", "@cite_19", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2071284865", "2138207021", "2085003717", "1619154196", "2083325968", "2105940896", "2070486188", "", "2126891226", "", "2086012981", "89399862" ], "abstract": [ "The battlefield of the future will require the warfighter to multitask in numerous ways, seriously taxing the cognitive and perceptual capabilities of even the most advanced warrior. A principal concern in developing a better understanding of how current and proposed computational technologies can supplement and augment human performance in this and other environments is determining when such assistance is required. This challenge can be parsed into 2 components: determining what set of measurements accurately reflects cognitive state, and identifying techniques for synthesizing this set of measurements into a single collective cognitive state variable. The primary thesis of this proposal is that automatic human behavioral responses serve as inherent probes for cognitive state. Further, the human perception-action system is uniquely designed to capture, process, integrate, and act on an extraordinarily diverse range of information freely available in the natural environment. Together, this system and the ...", "Vision-based interfaces, such as those made popular by the Microsoft Kinect, suffer from the Midas Touch problem: every user motion can be interpreted as an interaction. In response, we developed an algorithm that combines facial features, body pose and motion to approximate a user's intention to interact with the system. We show how this can be used to determine when to pay attention to a user's actions and when to ignore them. To demonstrate the value of our approach, we present results from a 30-person lab study conducted to compare four engagement algorithms in single and multi-user scenarios. 
We found that combining intention to interact with a 'raise an open hand in front of you' gesture yielded the best results. The latter approach offers a 12% improvement in accuracy and a 20% reduction in time to engage over a baseline 'wave to engage' gesture currently used on the Xbox 360.", "Emotion communication research strongly focuses on the face and voice as expressive modalities, leaving the rest of the body relatively understudied. Contrary to the early assumption that body movement only indicates emotional intensity, recent studies have shown that body movement and posture also convey emotion-specific information. However, a deeper understanding of the underlying mechanisms is hampered by a lack of production studies informed by a theoretical framework. In this research we adopted the Body Action and Posture (BAP) coding system to examine the types and patterns of body movement that are employed by 10 professional actors to portray a set of 12 emotions. We investigated to what extent these expression patterns support explicit or implicit predictions from basic emotion theory, bidimensional theory, and componential appraisal theory. The overall results showed partial support for the different theoretical approaches. They revealed that several patterns of body movement systematically occur in portrayals of specific emotions, allowing emotion differentiation. Although a few emotions were prototypically expressed by one particular pattern, most emotions were variably expressed by multiple patterns, many of which can be explained as reflecting functional components of emotion such as modes of appraisal and action readiness. It is concluded that further work in this largely underdeveloped area should be guided by an appropriate theoretical framework to allow a more systematic design of experiments and clear hypothesis testing.", "In this paper, we consider engagement in the context of Human-Robot Interaction (HRI). 
Previous studies in HRI relate engagement to emotion and attention independently from the context. We propose a model of engagement in Human- Robot Interaction depending on the context in which human and robot act. In our model, the mental and emotional states of the user related to engagement vary during the interaction according to the current context. Knowing the context of the interaction, the robot would know what to expect regarding the mental and the emotional state of the user and thus if it perceives a state that is not in accordance with its expectations, this might signal disengagement.", "Multimodal interfaces require effective parsing and understanding of utterances whose content is distributed across multiple input modes. Johnston 1998 presents an approach in which strategies for multimodal integration are stated declaratively using a unification-based grammar that is used by a multi-dimensional chart parser to compose inputs. This approach is highly expressive and supports a broad class of interfaces, but offers only limited potential for mutual compensation among the input modes, is subject to significant concerns in terms of computational complexity, and complicates selection among alternative multimodal interpretations of the input. In this paper, we present an alternative approach in which multimodal parsing and understanding are achieved using a weighted finite-state device which takes speech and gesture streams as inputs and outputs their joint interpretation. This approach is significantly more efficient, enables tight-coupling of multimodal understanding with speech recognition, and provides a general probabilistic framework for multimodal ambiguity resolution.", "We introduce a face detector for wearable computers that exploits constraints in face scale and orientation imposed by the proximity of participants in near social interactions. 
Using this method we describe a wearable system that perceives \"social engagement,\" i.e., when the wearer begins to interact with other individuals. Our experimental system proved >90% accurate when tested on wearable video data captured at a professional conference. Over 300 individuals were captured during social engagement, and the data was separated into independent training and test sets. A metric for balancing the performance of face detection, localization, and recognition in the context of a wearable interface is discussed. Recognizing social engagement with a user's wearable computer provides context data that can be useful in determining when the user is interruptible. In addition, social engagement detection may be incorporated into a user interface to improve the quality of mobile face recognition software. For example, the user may cue the face recognition system in a socially graceful way by turning slightly away and then toward a speaker when conditions for recognition are favorable.", "The perception of facial expression and gaze-direction are important aspects of non-verbal communication. Expressions communicate the internal emotional state of others while gaze-direction offers clues to their attentional focus and future intentions. Cortical regions in the superior temporal sulcus (STS) play a central role in the perception of expression and gaze, but the extent to which the neural representations of these facial gestures are overlapping is unknown. In the current study 12 subjects observed neutral faces with direct-gaze, neutral faces with averted-gaze, or emotionally expressive faces with direct-gaze while we scanned their brains with functional magnetic resonance imaging (fMRI), allowing a comparison of the hemodynamic responses evoked by perception of expression and averted-gaze. 
The inferior occipital gyri, fusiform gyri, STS and inferior frontal gyrus were more strongly activated when subjects saw facial expressions than when they saw neutral faces. The right STS was more strongly activated by the perception of averted-gaze than direct-gaze faces. A comparison of the responses within right STS revealed that expression and averted-gaze activated distinct, though overlapping, regions of cortex. We propose that gaze-direction and expression are represented by dissociable overlapping neural systems.", "", "Mobile interfaces need to allow the user and system to adapt their choice of communication modes according to user preferences, the task at hand, and the physical and social environment. We describe a multimodal application architecture which combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation, and user-tailored text planning to enable rapid prototyping of multimodal interfaces with flexible input and adaptive output. Our testbed application MATCH (Multimodal Access To City Help) provides a mobile multimodal speech-pen interface to restaurant and sub-way information for New York City.", "", "Recognition of intentions is a subconscious cognitive process vital to human communication. This skill enables anticipation and increases the quality of interactions between humans. Within the context of engagement, non-verbal signals are used to communicate the intention of starting the interaction with a partner. In this paper, we investigated methods to detect these signals in order to allow a robot to know when it is about to be addressed. Originality of our approach resides in taking inspiration from social and cognitive sciences to perform our perception task. We investigate meaningful features, i.e. human readable features, and elicit which of these are important for recognizing someone's intention of starting an interaction. 
Classically, spatial information like the human position and speed, the human-robot distance are used to detect the engagement. Our approach integrates multimodal features gathered using a companion robot equipped with a Kinect. The evaluation on our corpus collected in spontaneous conditions highlights its robustness and validates the use of such a technique in a real environment. Experimental validation shows that the multimodal feature set gives better precision and recall than using only spatial and speed features. We also demonstrate that 7 selected features are sufficient to provide a good starting engagement detection score. In our last investigation, we show that among our full 99-feature set, the space reduction is not a solved task. This result opens new research perspectives on multimodal engagement detection. Multimodal approach for starting engagement detection using non-explicit cues. Results show that our approach performs better than the spatial one in all conditions. MRMR strategy reduces the feature space to 7 features without a performance loss. Validation of Schegloff (sociologist) meaningful features for engagement detection. A robot centered labeled corpus of 4 hours in a home-like environment.", "Designing and implementing multimodal applications that take advantage of several recognition-based interaction techniques (e.g. speech and gesture recognition) is a difficult task. The goal of our research is to explore how simple modelling techniques and tools can be used to support the designers and developers of multimodal systems. In this paper, we discuss the use of finite state machines (FSMs) for the design and prototyping of multimodal commands. In particular, we show that FSMs can help designers in reasoning about synchronization pattern problems. Finally, we describe an implementation of our FSM-based approach, in a toolkit whose aim is to facilitate the iterative process of designing, prototyping and testing multimodality." ] }
1609.01331
2513532224
In this work, we propose a joint audio-video fingerprint Automatic Content Recognition (ACR) technology for media retrieval. The problem is focused on how to balance the query accuracy and the size of fingerprint, and how to allocate the bits of the fingerprint to video frames and audio frames to achieve the best query accuracy. By constructing a novel concept called Coverage, which is highly correlated to the query accuracy, we are able to form a rate-coverage model to translate the original problem into an optimization problem that can be resolved by dynamic programming. To the best of our knowledge, this is the first work that uses joint audio-video fingerprint ACR technology for media retrieval with a theoretical problem formulation. Experimental results indicate that compared to reference algorithms, the proposed method has up to 25% query accuracy improvement while using 60% overall bit-rates, and 25% bit-rate reduction while achieving 85% accuracy, and it significantly outperforms the solution with single audio or video source fingerprint.
The research community has addressed the problem of ACR by two approaches: watermarking and fingerprinting. Watermarking inserts a specific identifier and time-stamp information into the content, which can be read back later. Fingerprinting, on the other hand, analyzes unique content characteristics and compares them against a reference database. Fingerprinting does not require the source media to be modified in any way, which enables service providers to build an indexing system for any broadcast or TV program with no change to the content creator's or broadcaster's workflow. Another advantage of fingerprinting is its additive property: unlike watermarking, fingerprinting requires no rights over the media since it does not change the original content, making it broadly applicable across many applications. As a result, fingerprinting is the most widely used content recognition technique in ACR. More specifically, fingerprints, also referred to as signatures or features, are intrinsic data that characterize the video content for indexing, searching, and ranking @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "2127554718" ], "abstract": [ "Determining similarities among data objects is a core task of content-based multimedia retrieval systems. Approximating data object contents via flexible feature representations, such as feature signatures, multimedia retrieval systems frequently determine similarities among data objects by applying distance functions. In this paper, we compare major state-of-the-art similarity measures applicable to flexible feature signatures with respect to their qualities of effectiveness and efficiency. Furthermore, we study the behavior of the similarity measures by discussing their properties. Our findings can be used in guiding the development of content-based retrieval applications for numerous domains." ] }
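The fingerprint-matching idea described in the related work above can be sketched with a toy example. The 8-bit per-frame signature, the mean-threshold rule, and the `frame_fingerprint`/`locate` names are illustrative assumptions for this sketch, not the paper's rate-coverage method:

```python
# Toy sketch of fingerprint-based content matching: each frame is reduced to
# a small binary signature, and a query clip is located in a reference stream
# by minimising total Hamming distance over all alignment offsets.

def frame_fingerprint(frame, width=8):
    """Hypothetical 8-bit signature: bit i is set if sample i exceeds the frame mean."""
    mean = sum(frame) / len(frame)
    bits = 0
    for i in range(width):
        if frame[i] > mean:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Number of differing bits between two integer fingerprints."""
    return bin(a ^ b).count("1")

def locate(query_fps, reference_fps):
    """Return the offset in the reference stream with the smallest total distance."""
    best_offset, best_cost = None, float("inf")
    for off in range(len(reference_fps) - len(query_fps) + 1):
        cost = sum(hamming(q, reference_fps[off + i]) for i, q in enumerate(query_fps))
        if cost < best_cost:
            best_offset, best_cost = off, cost
    return best_offset
```

Real systems replace the toy signature with robust audio/video features, but the retrieval step is the same nearest-fingerprint search.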
1609.01331
2513532224
In this work, we propose a joint audio-video fingerprint Automatic Content Recognition (ACR) technology for media retrieval. The problem is focused on how to balance the query accuracy and the size of fingerprint, and how to allocate the bits of the fingerprint to video frames and audio frames to achieve the best query accuracy. By constructing a novel concept called Coverage, which is highly correlated to the query accuracy, we are able to form a rate-coverage model to translate the original problem into an optimization problem that can be resolved by dynamic programming. To the best of our knowledge, this is the first work that uses joint audio-video fingerprint ACR technology for media retrieval with a theoretical problem formulation. Experimental results indicate that compared to reference algorithms, the proposed method has up to 25% query accuracy improvement while using 60% overall bit-rates, and 25% bit-rate reduction while achieving 85% accuracy, and it significantly outperforms the solution with single audio or video source fingerprint.
In the last decade, Content-based Image Retrieval (CBIR) has attracted a large volume of research, paving the way for the development of numerous techniques and systems and creating interest in the fields that support them. In the field of content-based image retrieval @cite_17 @cite_61 @cite_53 , the feature space frequently comprises position, color, or texture dimensions @cite_25 @cite_22 , where each image pixel is mapped to a single feature in the corresponding feature space. At this level, features are derived from the media without considering external semantics. A variety of low-level feature methods have been developed. For example, color correlograms have been used in CueVideo @cite_43 . Location, color, and texture are jointly considered in @cite_25 @cite_23 . Low-level features provide the first and enabling step for subsequent high-level feature analysis. High-level features are also called semantic features; examples include timbre, rhythm, instruments, and events @cite_49 . The preferred characteristics of a CBIR system are high retrieval efficiency and low computational complexity, and these are the key objectives in the design of a CBIR system @cite_36 .
{ "cite_N": [ "@cite_61", "@cite_22", "@cite_36", "@cite_53", "@cite_43", "@cite_23", "@cite_49", "@cite_25", "@cite_17" ], "mid": [ "2130660124", "", "2077529516", "2112360012", "", "2153166546", "", "2031476339", "" ], "abstract": [ "Presents a review of 200 references in content-based image retrieval. The paper starts with discussing the working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap. Subsequent sections discuss computational steps for image retrieval systems. Step one of the review is image processing for retrieval sorted by color, texture, and local geometry. Features for retrieval are discussed next, sorted by: accumulative and global features, salient points, object and shape features, signs, and structural combinations thereof. Similarity of pictures and objects in pictures is reviewed for each of the feature types, in close connection to the types and means of feedback the user of the systems is capable of giving by interaction. We briefly discuss aspects of system engineering: databases, system architecture, and evaluation. In the concluding section, we present our view on: the driving force of the field, the heritage from computer vision, the influence on computer vision, the role of similarity and of interaction, the need for databases, the problem of evaluation, and the role of the semantic gap.", "", "This paper addresses content-based image retrieval in general, and in particular, focuses on developing a hidden semantic concept discovery methodology to address effective semantics-intensive image retrieval. In our approach, each image in the database is segmented into regions associated with homogenous color, texture, and shape features. By exploiting regional statistical information in each image and employing a vector quantization method, a uniform and sparse region-based representation is achieved. 
With this representation, a probabilistic model based on statistical-hidden-class assumptions of the image database is obtained, to which the expectation-maximization technique is applied to analyze semantic concepts hidden in the database. An elaborated retrieval algorithm is designed to support the probabilistic model. The semantic similarity is measured through integrating the posterior probabilities of the transformed query image, as well as a constructed negative example, to the discovered semantic concepts. The proposed approach has a solid statistical foundation; the experimental evaluations on a database of 10 000 general-purposed images demonstrate its promise and effectiveness", "Image and video retrieval continues to be one of the most exciting and fastest-growing research areas in the field of multimedia technology. What are the main challenges in image and video retrieval? Despite the sustained efforts in the last years, we think that the paramount challenge remains bridging the semantic gap. By this we mean that low level features are easily measured and computed, but the starting point of the retrieval process is typically the high level query from a human. Translating or converting the question posed by a human to the low level features seen by the computer illustrates the problem in bridging the semantic gap. However, the semantic gap is not merely translating high level features to low level features. The essence of a semantic query is understanding the meaning behind the query. This can involve understanding both the intellectual and emotional sides of the human, not merely the distilled logical portion of the query but also the personal preferences and emotional subtons of the query and the preferential form of the results.", "", "In many areas of commerce, government, academia, and hospitals, large collections of digital im- ages are being created. 
Many of these collections are the product of digitizing existing collections of analogue photographs, diagrams, drawings, paintings, and prints. Usually, the only way of search- ing these collections was by keyword indexing, or simply by browsing. Digital images databases however, open the way to content-based searching. In this paper we survey some technical aspects of current content-based image retrieval systems.", "", "An experimental comparison of a large number of different image descriptors for content-based image retrieval is presented. Many of the papers describing new techniques and descriptors for content-based image retrieval describe their newly proposed methods as most appropriate without giving an in-depth comparison with all methods that were proposed earlier. In this paper, we first give an overview of a large variety of features for content-based image retrieval and compare them quantitatively on four different tasks: stock photo retrieval, personal photo collection retrieval, building retrieval, and medical image retrieval. For the experiments, five different, publicly available image databases are used and the retrieval performance of the features is analyzed in detail. This allows for a direct comparison of all features considered in this work and furthermore will allow a comparison of newly proposed features to these in the future. Additionally, the correlation of the features is analyzed, which opens the way for a simple and intuitive method to find an initial set of suitable features for a new task. The article concludes with recommendations which features perform well for what type of data. Interestingly, the often used, but very simple, color histogram performs well in the comparison and thus can be recommended as a simple baseline for many applications.", "" ] }
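As a concrete instance of the low-level features discussed above, here is a minimal sketch of a quantized color histogram compared with L1 distance (the simple baseline that the descriptor-comparison study cited above found to perform well). The bin count, the pixel format, and the function names are illustrative assumptions:

```python
# Minimal CBIR sketch: a quantized RGB colour histogram as the image feature,
# with L1 distance used to rank database images against a query. Images are
# assumed to be lists of (r, g, b) tuples with channel values in 0..255.

def color_histogram(pixels, bins_per_channel=4):
    """Normalised joint RGB histogram with bins_per_channel**3 bins."""
    step = 256 // bins_per_channel
    hist = [0.0] * (bins_per_channel ** 3)
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel + (g // step)) * bins_per_channel + (b // step)
        hist[idx] += 1
    n = float(len(pixels))
    return [h / n for h in hist]  # normalise so images of different sizes compare

def l1(h1, h2):
    """L1 (city-block) distance between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def rank(query_hist, database):
    """Return database keys sorted by increasing L1 distance to the query."""
    return sorted(database, key=lambda k: l1(query_hist, database[k]))
```

Semantic (high-level) retrieval builds on such descriptors by learning mappings from them to concepts, as in the probabilistic model cited above.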
1609.01345
2513231403
Airborne acquisition and on-road mobile mapping provide complementary 3D information of an urban landscape: the former acquires roof structures, ground, and vegetation at a large scale, but lacks the facade and street-side details, while the latter is incomplete for higher floors and often totally misses out on pedestrian-only areas or undriven districts. In this work, we introduce an approach that efficiently unifies a detailed street-side Structure-from-Motion (SfM) or Multi-View Stereo (MVS) point cloud and a coarser but more complete point cloud from airborne acquisition in a joint surface mesh. We propose a point cloud blending and a volumetric fusion based on ray casting across a 3D tetrahedralization (3DT), extended with data reduction techniques to handle large datasets. To the best of our knowledge, we are the first to adopt a 3DT approach for airborne street-side data fusion. Our pipeline exploits typical characteristics of airborne and ground data, and produces a seamless, watertight mesh that is both complete and detailed. Experiments on 3D urban data from multiple sources and different data densities show the effectiveness and benefits of our approach.
Explicit methods directly construct mesh faces over the input points or depth data or join multiple meshes by zippering them, e.g. @cite_15 @cite_3 @cite_10 . These are often interpolatory, sensitive to noise and can result in open meshes with holes and non-manifold areas. Implicit methods extract the surface as a level-set of a volumetric function evaluated over a spatial grid. The seminal work @cite_30 integrates range images into a voxel grid via a weighted average over Truncated Signed Distance Functions (TSDF). It has inspired many indoor fusion techniques, e.g. KinectFusion @cite_16 . Poisson reconstruction @cite_7 is another popular approach due to its robustness to noise. In general, SDF-based methods tend to oscillate around noisy input. Thus, @cite_21 @cite_0 drop the sign of the distance function and find the surface as an s-t cut in a graph over voxels, with regularization for a smooth result. Both only show results for small objects. Convex variational methods have also been applied for fusing noisy depth maps over a 3D voxel grid @cite_6 or a 2.5D height map @cite_26 .
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_7", "@cite_21", "@cite_3", "@cite_6", "@cite_0", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "2009422376", "", "2008073424", "2086550580", "2157270587", "2167335287", "", "2014997627", "2004546930", "2171056981" ], "abstract": [ "A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.", "", "We show that surface reconstruction from oriented points can be cast as a spatial Poisson problem. 
This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, our Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. We describe a spatially adaptive multiscale algorithm whose time and space complexities are proportional to the size of the reconstructed model. Experimenting with publicly available scan data, we demonstrate reconstruction of surfaces with greater detail than previously achievable.", "We present a new volumetric method for reconstructing watertight triangle meshes from arbitrary, unoriented point clouds. While previous techniques usually reconstruct surfaces as the zero level-set of a signed distance function, our method uses an unsigned distance function and hence does not require any information about the local surface orientation. Our algorithm estimates local surface confidence values within a dilated crust around the input samples. The surface which maximizes the global confidence is then extracted by computing the minimum cut of a weighted spatial graph structure. We present an algorithm, which efficiently converts this cut into a closed, manifold triangle mesh with a minimal number of vertices. The use of an unsigned distance function avoids the topological noise artifacts caused by misalignment of 3D scans, which are common to most volumetric reconstruction techniques. Due to a hierarchical approach our method efficiently produces solid models of low genus even for noisy and highly irregular data containing large holes, without loosing fine details in densely sampled regions. 
We show several examples for different application settings such as model generation from raw laser-scanned data, image-based 3D reconstruction, and mesh repair.", "We describe an approach to register and merge detailed facade models with a complementary airborne model. The airborne modeling process provides a half-meter resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. The ground-based modeling process results in a detailed model of the building facades. Using the DSM obtained from airborne laser scans, we localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). We merge the two models with different resolutions to obtain a 3D model.", "Robust integration of range images is an important task for building high-quality 3D models. Since range images, and in particular range maps from stereo vision, may have a substantial amount of outliers, any integration approach aiming at high-quality models needs an increased level of robustness. Additionally, a certain level of regularization is required to obtain smooth surfaces. Computational efficiency and global convergence are further preferable properties. The contribution of this paper is a unified framework to solve all these issues. Our method is based on minimizing an energy functional consisting of a total variation (TV) regularization force and an L1 data fidelity term. We present a novel and efficient numerical scheme, which combines the duality principle for the TV term with a point-wise optimization step. 
We demonstrate the superior performance of our algorithm on the well-known Middlebury multi-view database and additionally on real-world multi-view images.", "", "In this paper, we develop a set of data processing algorithms for generating textured facade meshes of cities from a series of vertical 2D surface scans and camera images, obtained by a laser scanner and digital camera while driving on public roads under normal traffic conditions. These processing steps are needed to cope with imperfections and non-idealities inherent in laser scanning systems such as occlusions and reflections from glass surfaces. The data is divided into easy-to-handle quasi-linear segments corresponding to approximately straight driving direction and sequential topological order of vertical laser scans; each segment is then transformed into a depth image. Dominant building structures are detected in the depth images, and points are classified into foreground and background layers. Large holes in the background layer, caused by occlusion from foreground layer objects, are filled in by planar or horizontal interpolation. The depth image is further processed by removing isolated points and filling remaining small holes. The foreground objects also leave holes in the texture of building facades, which are filled by horizontal and vertical interpolation in low frequency regions, or by a copy-paste method otherwise. We apply the above steps to a large set of data of downtown Berkeley with several million 3D points, in order to obtain texture-mapped 3D models.", "We present KinectFusion, a system that takes live depth data from a moving Kinect camera and in real-time creates high-quality, geometrically accurate, 3D models. Our system allows a user holding a Kinect camera to move quickly within any indoor space, and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. 
Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene more detail can be added to the acquired 3D model.", "We present a viewpoint-based approach for the quick fusion of multiple stereo depth maps. Our method selects depth estimates for each pixel that minimize violations of visibility constraints and thus remove errors and inconsistencies from the depth maps to produce a consistent surface. We advocate a two-stage process in which the first stage generates potentially noisy, overlapping depth maps from a set of calibrated images and the second stage fuses these depth maps to obtain an integrated surface with higher accuracy, suppressed noise, and reduced redundancy. We show that by dividing the processing into two stages we are able to achieve a very high throughput because we are able to use a computationally cheap stereo algorithm and because this architecture is amenable to hardware-accelerated (GPU) implementations. A rigorous formulation based on the notion of stability of a depth estimate is presented first. It aims to determine the validity of a depth estimate by rendering multiple depth maps into the reference view as well as rendering the reference depth map into the other views in order to detect occlusions and free- space violations. We also present an approximate alternative formulation that selects and validates only one hypothesis based on confidence. Both formulations enable us to perform video-based reconstruction at up to 25 frames per second. We show results on the multi-view stereo evaluation benchmark datasets and several outdoors video sequences. Extensive quantitative analysis is performed using an accurately surveyed model of a real building as ground truth." ] }
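The cumulative weighted signed-distance update at the heart of the volumetric method of Curless and Levoy (@cite_30 above) can be sketched per voxel as follows; the truncation band and the (distance, weight) observation format are illustrative assumptions of this sketch:

```python
# Per-voxel TSDF fusion in the style of Curless and Levoy: each voxel keeps a
# running signed distance D and cumulative weight W, and every new range
# observation (d, w) is folded in as a weighted average. The zero level-set
# of the fused D field is the reconstructed surface.

def tsdf_update(D, W, d, w, trunc=0.1):
    """Fuse one truncated signed-distance observation into a voxel."""
    d = max(-trunc, min(trunc, d))       # truncate the signed distance
    D_new = (W * D + w * d) / (W + w)    # cumulative weighted average
    return D_new, W + w

def fuse_observations(obs, trunc=0.1):
    """Integrate a sequence of (distance, weight) observations for one voxel."""
    D, W = 0.0, 0.0
    for d, w in obs:
        if W == 0.0:
            D, W = max(-trunc, min(trunc, d)), w
        else:
            D, W = tsdf_update(D, W, d, w, trunc)
    return D, W
```

The truncation bounds the influence of any single observation, so an occasional gross outlier only perturbs the fused distance slightly instead of destroying the surface estimate.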
1609.01345
2513231403
Airborne acquisition and on-road mobile mapping provide complementary 3D information of an urban landscape: the former acquires roof structures, ground, and vegetation at a large scale, but lacks the facade and street-side details, while the latter is incomplete for higher floors and often totally misses out on pedestrian-only areas or undriven districts. In this work, we introduce an approach that efficiently unifies a detailed street-side Structure-from-Motion (SfM) or Multi-View Stereo (MVS) point cloud and a coarser but more complete point cloud from airborne acquisition in a joint surface mesh. We propose a point cloud blending and a volumetric fusion based on ray casting across a 3D tetrahedralization (3DT), extended with data reduction techniques to handle large datasets. To the best of our knowledge, we are the first to adopt a 3DT approach for airborne street-side data fusion. Our pipeline exploits typical characteristics of airborne and ground data, and produces a seamless, watertight mesh that is both complete and detailed. Experiments on 3D urban data from multiple sources and different data densities show the effectiveness and benefits of our approach.
Last but not least, there exist only a few works that combine street-side and aerial data for joint mesh reconstruction @cite_3 @cite_23 @cite_9 . Früh and Zakhor @cite_3 construct meshes over street-side LiDAR range maps and over a large-scale Digital Surface Model (DSM). Unlike in our approach, they reconstruct a facade and an airborne mesh separately without topological fusion. @cite_9 solves the problem by directly applying Poisson surface reconstruction @cite_7 over the joint dense point cloud computed by patch-based MVS @cite_13 without a cross-consistency check between airborne and street-side data. @cite_23 integrate over 200M points from a tripod-mounted ground LiDAR and an aerial DSM, by using a distance field over an octree and an out-of-core dual contouring approach. Despite the quite complex algorithm, results are noisy and contain many large holes. In contrast, our approach performs cross-consistency filtering and produces a good quality watertight surface.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_23", "@cite_13" ], "mid": [ "2008073424", "2059483036", "2157270587", "2143801484", "2129404737" ], "abstract": [ "We show that surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, our Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. We describe a spatially adaptive multiscale algorithm whose time and space complexities are proportional to the size of the reconstructed model. Experimenting with publicly available scan data, we demonstrate reconstruction of surfaces with greater detail than previously achievable.", "We present the first large scale system for capturing and rendering relight able scene reconstructions from massive unstructured photo collections taken under different illumination conditions and viewpoints. We combine photos taken from many sources, Flickr-Based ground-level imagery, oblique aerial views, and street view, to recover models that are significantly more complete and detailed than previously demonstrated. We demonstrate the ability to match both the viewpoint and illumination of arbitrary input photos, enabling a Visual Turing Test in which photo and rendering are viewed side-by-side and the observer has to guess which is which. While we cannot yet fool human perception, the gap is closing.", "We describe an approach to register and merge detailed facade models with a complementary airborne model. The airborne modeling process provides a half-meter resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. The ground-based modeling process results in a detailed model of the building facades. 
Using the DSM obtained from airborne laser scans, we localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). We merge the two models with different resolutions to obtain a 3D model.", "This paper presents techniques for the merging of 3D data coming from different sensors, such as ground and aerial laser range scans. The 3D models created are reconstructed to give a photo-realistic scene enabling interactive virtual walkthroughs, measurements and scene change analysis. The reconstructed model is based on a weighted integration of all available data based on sensor-specific parameters such as noise level, accuracy, inclination and reflectivity of the target, spatial distribution of points. The geometry is robustly reconstructed with a volumetric approach. Once registered and weighed, all data is re-sampled in a multi-resolution distance field using out-of-core techniques. The final mesh is extracted by contouring the iso-surface with a feature preserving dual contouring algorithm. The paper shows results of the above technique applied to Verona (Italy) city centre.", "This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. 
The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and \"crowded\" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets." ] }
1609.00616
2516216068
The Strict Avalanche Criterion (SAC) is a measure of both confusion and diffusion, which are key properties of a cryptographic hash function. This work provides a working definition of the SAC, describes an experimental methodology that can be used to statistically evaluate whether a cryptographic hash meets the SAC, and uses this to investigate the degree to which compression function of the SHA-1 hash meets the SAC. The results ( @math ) are heartening: SHA-1 closely tracks the SAC after the first 24 rounds, and demonstrates excellent properties of confusion and diffusion throughout.
A simple example clarifies the difference. Babbage @cite_18 uses Lloyd's @cite_5 definition of the SAC and defines a SAC-compliant function:
{ "cite_N": [ "@cite_5", "@cite_18" ], "mid": [ "2231898687", "1997281127" ], "abstract": [ "The strict avalanche criterion was introduced by Webster and Tavares [3] in order to combine the ideas of completeness and the avalanche effect. A cryptographic transformation is complete if each output bit depends on all the input bits, and it exhibits the avalanche effect if an average of one half of the output bits change whenever a single input bit is complemented. To fulfil the strict avalanche criterion, each output bit should change with probability one half whenever a single input bit is complemented. This means, in particular, that there is no good lower order (fewer bits) approximation to the function. This is clearly a desirable cryptographic property since such an approximation would enable a corresponding reduction in the amount of work needed for an exhaustive search.", "Some recent work concerning the strict avalanche criterion for a Boolean function has been motivated by the claim that a certain cryptographically useful property will be true of any function satisfying the criterion. In the letter it is observed that not only is this claim untrue, but that possession of the property in question is in fact precluded by satisfaction of the strict avalanche criterion." ] }
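Lloyd's formulation of the SAC can be checked exhaustively for tiny Boolean functions. The two-input AND below is an illustrative SAC-compliant function (complementing either input flips the output with probability exactly one half over all inputs), while XOR fails the criterion because any single-bit complement always flips the output; the function choices are examples for this sketch, not taken from the cited works:

```python
# Exhaustive SAC check for a single-output Boolean function on n-bit inputs:
# for each input bit i, compute P[f(x) != f(x ^ e_i)] over all 2**n inputs x.
# The SAC is satisfied when every probability equals exactly 0.5.

def sac_probabilities(f, n):
    """Flip probability of the output for each single complemented input bit."""
    probs = []
    for i in range(n):
        flips = sum(f(x) != f(x ^ (1 << i)) for x in range(2 ** n))
        probs.append(flips / 2 ** n)
    return probs

and2 = lambda x: (x >> 1) & x & 1        # f(x1, x2) = x1 AND x2: meets the SAC
xor2 = lambda x: ((x >> 1) ^ x) & 1      # f(x1, x2) = x1 XOR x2: fails (prob 1.0)
```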
1609.00616
2516216068
The Strict Avalanche Criterion (SAC) is a measure of both confusion and diffusion, which are key properties of a cryptographic hash function. This work provides a working definition of the SAC, describes an experimental methodology that can be used to statistically evaluate whether a cryptographic hash meets the SAC, and uses this to investigate the degree to which compression function of the SHA-1 hash meets the SAC. The results ( @math ) are heartening: SHA-1 closely tracks the SAC after the first 24 rounds, and demonstrates excellent properties of confusion and diffusion throughout.
It is worth noting that the original definition, as per Webster & Tavares @cite_21 , is slightly ambiguous. They state that the probability that each bit in @math is equal to 1 should ``average one half over the set of all possible plaintext vectors @math and @math ''; however, they also state that to satisfy the strict avalanche criterion, every element must have a value ``close to one half'' (emphasis mine). Under Lloyd's interpretation, the SAC is only satisfied when an element changes with a probability of precisely 0.5. This is an unnecessarily binary criterion, as it seems to be more useful (and more in line with the original definition) to understand how far a particular sample deviates from the SAC. Therefore, this paper regards the SAC as a continuum but takes Lloyd's formulation as the definition of what it means to ``meet'' the SAC.
{ "cite_N": [ "@cite_21" ], "mid": [ "1670558497" ], "abstract": [ "The ideas of completeness and the avalanche effect were first introduced by Kam and Davida [1] and Feistel [2], respectively. If a cryptographic transformation is complete, then each ciphertext bit must depend on all of the plaintext bits. Thus, if it were possible to find the simplest Boolean expression for each ciphertext bit in terms of the plaintext bits, each of those expressions would have to contain all of the plaintext bits if the function was complete. Alternatively, if there is at least one pair of n-bit plaintext vectors X and Xi that differ only in bit i, and f(X) and f(Xi) differ at least in bit j for all @math then the function f must be complete." ] }
1609.00616
2516216068
The Strict Avalanche Criterion (SAC) is a measure of both confusion and diffusion, which are key properties of a cryptographic hash function. This work provides a working definition of the SAC, describes an experimental methodology that can be used to statistically evaluate whether a cryptographic hash meets the SAC, and uses this to investigate the degree to which the compression function of the SHA-1 hash meets the SAC. The results ( @math ) are heartening: SHA-1 closely tracks the SAC after the first 24 rounds, and demonstrates excellent properties of confusion and diffusion throughout.
It can be seen that the SAC is equivalent to @math . The same work defines an extension which regards the SAC as a continuum. Much of the subsequent work ( @cite_17 @cite_23 @cite_2 @cite_7 @cite_13 @cite_12 ) in this area has more closely examined the relationship between PC and nonlinearity characteristics. Many of these extend the PC in interesting ways and examine ways of constructing functions which satisfy @math , but experimental research that targets existing algorithms is scarce.
{ "cite_N": [ "@cite_7", "@cite_23", "@cite_2", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "1983760747", "2035999274", "1566579740", "1583499453", "", "1505106584" ], "abstract": [ "The GAC (Global Avalanche Characteristics) were introduced by Zhang and Zheng (1995) as a measure of cryptographic strength of Boolean functions. Two indicators σƒ and Δƒ related to GAC are introduced. (1998) gave a lower bound on σƒ for a balanced Boolean function. In this paper, we provide an improved lower bound. Moreover, we provide bounds on nonlinearity for a balanced Boolean function satisfying the propagation criterion with respect to t vectors.", "Many practical information authentication techniques are based on such cryptographic means as data encryption algorithms and one-way hash functions. A core component of such algorithms and functions are nonlinear functions. In this paper, we reveal a relationship between nonlinearity and propagation characteristic, two critical indicators of the cryptographic strength of a Boolean function. We also investigate the structures of functions that satisfy the propagation criterion with respect to all but six or less vectors. We show that these functions have close relationships with bent functions, and can be easily constructed from the latter.", "We determine those Boolean functions on GF(2)n which satisfy the propagation criterion of degree l and order k ≥ n − l - 2. All of these functions are quadratic. We design nonquadratic Boolean functions satisfying the criterion PC(l) of order k by using the Maiorana-McFarland construction involving nonlinear mappings derived from non-linear codes.", "We investigate the link between the nonlinearity of a Boolean function and its propagation characteristics. We prove that highly nonlinear functions usually have good propagation properties regarding different criteria. 
Conversely, any Boolean function satisfying the propagation criterion with respect to a linear subspace of codimension 1 or 2 has a high nonlinearity. We also point out that most highly nonlinear functions with a three-valued Walsh spectrum can be transformed into 1-resilient functions.", "", "" ] }
1609.00616
2516216068
The Strict Avalanche Criterion (SAC) is a measure of both confusion and diffusion, which are key properties of a cryptographic hash function. This work provides a working definition of the SAC, describes an experimental methodology that can be used to statistically evaluate whether a cryptographic hash meets the SAC, and uses this to investigate the degree to which the compression function of the SHA-1 hash meets the SAC. The results ( @math ) are heartening: SHA-1 closely tracks the SAC after the first 24 rounds, and demonstrates excellent properties of confusion and diffusion throughout.
Although there are proven theoretical ways to construct a function which satisfies the SAC @cite_16 , there is no way (apart from exhaustive testing) to verify that an existing function satisfies the SAC. By contrast, useful cryptographic properties such as non-degeneracy @cite_4 or bentness @cite_3 are verifiable without having to resort to exhaustive testing. However, the SAC metric is no worse in this regard than the correlation immunity and balance @cite_9 metrics which also require exhaustive testing.
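Exhaustive SAC testing, as mentioned above, is tractable when the number of input bits is small. A minimal sketch (the four-variable bent function and the parity counterexample are assumed examples, not drawn from the cited works):

```python
from itertools import product

def f(x):
    """A bent function on 4 bits: x0*x1 XOR x2*x3. Bent functions satisfy
    the propagation criterion for every nonzero difference, hence the SAC."""
    return (x[0] & x[1]) ^ (x[2] & x[3])

def satisfies_sac(func, n):
    """Exhaustive SAC check: flipping any single input bit must change the
    output for exactly half of all 2**n inputs (Lloyd's strict reading)."""
    for i in range(n):
        flips = 0
        for x in product((0, 1), repeat=n):
            y = list(x)
            y[i] ^= 1
            flips += func(x) != func(tuple(y))
        if flips * 2 != 2 ** n:
            return False
    return True

parity = lambda x: x[0] ^ x[1] ^ x[2] ^ x[3]
# f meets the SAC; parity fails it, since flipping any one input bit
# always (probability 1, not 1/2) flips the output.
```

For an n-bit-input function this costs n * 2**n evaluations, which is exactly why exhaustive verification is infeasible for real hash functions.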
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_4", "@cite_3" ], "mid": [ "2154271831", "2787662955", "1605416950", "2090432705" ], "abstract": [ "Pseudonoise generators for cryptographic applications consisting of several linear feedback shift registers with a nonlinear combining function have been proposed as running key generators in stream ciphers. These running key generators can sometimes be broken by (ciphertext-only) correlation attacks on individual subsequences. A new class of combining functions is presented, which provides better security against such attacks. The security is quantified by the smallest number m + 1 of subsequences that must be simultaneously considered in a correlation attack. A necessary condition for such m th-order correlation-immunity is proved. A recursive construction is given that permits the construction of an m th-order immune combining function for n subsequences for any m and n with 1 ≤ m < n. Finally, the trade-off between the length of the linear equivalent of the nonlinear generator and the order m of its immunity against correlation attacks is considered.", "", "We study the notion of linear structure of a function defined from ^m to ^n, and in particular of a Boolean function. We characterize the existence of linear structures by means of the Fourier transform of the function. For Boolean functions, this characterization can be stated in a simpler way. Finally, we give some constructions of resilient Boolean functions which have no linear structure.", "Abstract Let P ( x ) be a function from GF (2 n ) to GF (2). P ( x ) is called “bent” if all Fourier coefficients of (−1) P(x) are ±1. The polynomial degree of a bent function P ( x ) is studied, as are the properties of the Fourier transform of (−1) P(x) , and a connection with Hadamard matrices." ] }
1609.00638
2291969311
In this article we present Miuz, a robustness index for complex networks. Miuz measures the impact of disconnecting a node from the network while comparing the sizes of the remaining connected components. Strictly speaking, Miuz for a node is defined as the inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered Miuz and other well-known centrality measures such as betweenness, degree, and harmonic centrality. All of these metrics were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the internet backbone, the attacks were performed in complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting a small number of nodes selected by Miuz are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that Miuz, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.
Over the last decade, there has been huge interest in the analysis of complex networks and their connectivity properties @cite_13 . In recent years, networks, and in particular social networks, have gained significant popularity. An in-depth understanding of the graph structure is key to converting data into information. To this end, complex network tools have emerged @cite_9 to classify networks @cite_0 , detect communities @cite_3 , and determine important features and measure them @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_3", "@cite_0", "@cite_13" ], "mid": [ "22549352", "161230605", "2131717044", "2112090702", "2065769502" ], "abstract": [ "Social networks can be modeled and analyzed in terms of graph theory. This chapter provides an overview of the mathematical modeling of social networks with an overview of the metrics used to characterize them and the models used to artificially mimic the formation of such networks. We discuss metrics based on distances, degrees, and neighborhoods as well as the use of such metrics to detect change in the network structure. We also discuss the kind of structural differences that distinguish social networks from other types of natural networks together with the implications of these differences about the way in which these networks function.", "", "A large body of work has been devoted to identifying community structure in networks. A community is often thought of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. We define the network community profile plot, which characterizes the \"best\" possible community - according to the conductance measure - over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually \"blend in\" with the rest of the network and thus become less \"community-like.\" This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. 
Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. We have found, however, that a generative model, in which new edges are added via an iterative \"forest fire\" burning process, is able to produce graphs exhibiting a network community structure similar to our observations.", "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.", "Many complex systems display a surprising degree of tolerance against errors. 
For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks." ] }
1609.00638
2291969311
In this article we present Miuz, a robustness index for complex networks. Miuz measures the impact of disconnecting a node from the network while comparing the sizes of the remaining connected components. Strictly speaking, Miuz for a node is defined as the inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered Miuz and other well-known centrality measures such as betweenness, degree, and harmonic centrality. All of these metrics were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the internet backbone, the attacks were performed in complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting a small number of nodes selected by Miuz are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that Miuz, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.
The idea of planning a network attack'' using centrality measures has recently attracted the attention of researchers and practitioners. For instance, @cite_5 used betweenness centrality for planning a network attack, calculating the value for all nodes, ordering nodes from higher to lower centrality, and then attacking (discarding) those nodes in that order. They showed that by disconnecting only two of the top-ranked nodes, the packet-delivery ratio is reduced to @math 10 @math 0 . Concerning centrality measures, betweenness centrality deserves special attention. Betweenness has been studied as a resilience metric for the routing layer @cite_6 and also as a robustness metric for complex networks @cite_11 and for internet autonomous systems networks @cite_4 among others.
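The Miuz ranking can be sketched directly from the definition in the abstract (sum of the sizes of the non-largest components over the size of the largest, after removing a node). The toy barbell graph and the plain-dict adjacency representation are assumptions for illustration; a real attack simulation would use a scale-free graph and would typically also recompute centralities such as betweenness after each removal:

```python
from collections import deque

def components(adj, removed=frozenset()):
    """Connected-component sizes of a graph given as {node: set(neighbors)},
    ignoring any nodes in `removed`. Plain BFS, largest size first."""
    seen, sizes = set(removed), []
    for s in adj:
        if s in seen:
            continue
        size, q = 1, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    size += 1
                    q.append(v)
        sizes.append(size)
    return sorted(sizes, reverse=True)

def miuz(adj, node):
    """Miuz per the abstract above: inverse of (largest remaining component
    size / sum of the other remaining component sizes) after removing node."""
    sizes = components(adj, removed={node})
    return sum(sizes[1:]) / sizes[0]  # 0 when the graph stays connected

# Toy "barbell": two triangles {0,1,2} and {4,5,6} joined through node 3.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1},
       3: {0, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
ranked = sorted(adj, key=lambda n: miuz(adj, n), reverse=True)
# The bridge node 3 ranks first: removing it splits the network evenly.
```

Attacking nodes in decreasing order of this score mirrors the betweenness-based strategy described above, with Miuz in place of betweenness.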
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_6", "@cite_11" ], "mid": [ "2014770087", "2139905147", "2142843434", "2024982571" ], "abstract": [ "As the Internet becomes increasingly important to all aspects of society, the consequences of disruption become increasingly severe. Thus it is critical to increase the resilience and survivability of the future network. We define resilience as the ability of the network to provide desired service even when challenged by attacks, large-scale disasters, and other failures. This paper describes a comprehensive methodology to evaluate network resilience using a combination of analytical and simulation techniques with the goal of improving the resilience and survivability of the Future Internet.", "We calculate an extensive set of characteristics for Internet AS topologies extracted from the three data sources most frequently used by the research community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP topologies are similar to one another but differ substantially from the WHOIS topology. Among the widely considered metrics, we find that the joint degree distribution appears to fundamentally characterize Internet AS topologies as well as narrowly define values for other important metrics. We discuss the interplay between the specifics of the three data collection mechanisms and the resulting topology views. In particular, we show how the data collection peculiarities explain differences in the resulting joint degree distributions of the respective topologies. Finally, we release to the community the input topology datasets, along with the scripts and output of our calculations. This supplement should enable researchers to validate their models against real data and to make more informed selection of topology data sources for their specific needs", "The cost of failures within communication networks is significant and will only increase as their reach further extends into the way our society functions. 
Some aspects of network resilience, such as the application of fault-tolerant systems techniques to optical switching, have been studied and applied to great effect. However, networks - and the Internet in particular - are still vulnerable to malicious attacks, human mistakes such as misconfigurations, and a range of environmental challenges. We argue that this is, in part, due to a lack of a holistic view of the resilience problem, leading to inappropriate and difficult-to-manage solutions. In this article, we present a systematic approach to building resilient networked systems. We first study fundamental elements at the framework level such as metrics, policies, and information sensing mechanisms. Their understanding drives the design of a distributed multilevel architecture that lets the network defend itself against, detect, and dynamically respond to challenges. We then use a concrete case study to show how the framework and mechanisms we have developed can be applied to enhance resilience.", "Many complex systems can be described by networks, in which the constituent components are represented by vertices and the connections between the components are represented by edges between the corresponding vertices. A fundamental issue concerning complex networked systems is the robustness of the overall system to the failure of its constituent parts. Since the degree to which a networked system continues to function, as its component parts are degraded, typically depends on the integrity of the underlying network, the question of system robustness can be addressed by analyzing how the network structure changes as vertices are removed. Previous work has considered how the structure of complex networks change as vertices are removed uniformly at random, in decreasing order of their degree, or in decreasing order of their betweenness centrality. 
Here we extend these studies by investigating the effect on network structure of targeting vertices for removal based on a wider range of non-local measures of potential importance than simply degree or betweenness. We consider the effect of such targeted vertex removal on model networks with different degree distributions, clustering coefficients and assortativity coefficients, and for a variety of empirical networks." ] }
1609.00451
2514278201
In most classification tasks, there are observations that are ambiguous and therefore difficult to correctly label. Set-valued classifiers output sets of plausible labels rather than a single label, thereby giving a more appropriate and informative treatment to the labeling of ambiguous instances. We introduce a framework for multiclass set-valued classification, where the classifiers guarantee user-defined levels of coverage or confidence (the probability that the true label is contained in the set) while minimizing the ambiguity (the expected size of the output). We first derive oracle classifiers assuming the true distribution to be known. We show that the oracle classifiers are obtained from level sets of the functions that define the conditional probability of each class. Then we develop estimators with good asymptotic and finite sample properties. The proposed estimators build on existing single-label classifiers. The optimal classifier can sometimes output the empty set, but we provide two ...
Classifiers that output possibly more than one label are known as set-valued or nondeterministic classifiers. In another related framework, called classification with a reject option, a classifier may refuse to output a definitive class label if the uncertainty is high. Set-valued classification contains this framework as a special case, as one can view the reject to classify'' option as outputting the entire set of possible labels. These methods for set-valued classification generally follow the idea of minimizing a modified loss function. For example, @cite_9 assigns a constant loss @math to the output reject'', while @cite_6 defines the loss function as a weighted combination of precision and recall in an information retrieval framework. Certain components of such modified loss functions, such as the loss assigned to the output reject'' and the weight used to combine precision and recall, lack direct practical meaning and may be hard for practitioners to choose.
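The constant-loss formulation in @cite_9 admits a simple closed-form decision rule (Chow's rule). The sketch below is a binary illustration under the assumption that the true conditional probability eta = P(Y=1|X=x) is available; in practice a plug-in estimate would be used:

```python
def chow_rule(eta: float, d: float):
    """Chow's rule for binary classification with reject cost d (0 <= d <= 1/2):
    the risk P(f(X) != Y, f(X) != R) + d*P(f(X) = R) is minimized by rejecting
    whenever eta lies strictly between d and 1 - d, i.e. whenever
    max(eta, 1 - eta) < 1 - d; otherwise predict the likelier label."""
    if max(eta, 1 - eta) < 1 - d:
        return "R"  # the reject option
    return 1 if eta >= 0.5 else 0
```

As d grows toward 1/2 the reject region shrinks to nothing, recovering the ordinary Bayes classifier; this is the sense in which the choice of d directly encodes how costly a reject'' output is.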
{ "cite_N": [ "@cite_9", "@cite_6" ], "mid": [ "2126022166", "2124232193" ], "abstract": [ "This paper studies two-class (or binary) classification of elements X in R^k that allows for a reject option. Based on n independent copies of the pair of random variables (X,Y) with X ∈ R^k and Y ∈ {0,1}, we consider classifiers f(X) that render three possible outputs: 0, 1 and R. The option R expresses doubt and is to be used for few observations that are hard to classify in an automatic way. Chow (1970) derived the optimal rule minimizing the risk P(f(X) ≠ Y, f(X) ≠ R) + d·P(f(X) = R). This risk function subsumes that the cost of making a wrong decision equals 1 and that of utilizing the reject option is d. We show that the classification problem hinges on the behavior of the regression function η(x) = E(Y|X = x) near d and 1 − d. (Here d ∈ [0, 1/2] as the other cases turn out to be trivial.) Classification rules can be categorized into plug-in estimators and empirical risk minimizers. Both types are considered here and we prove that the rates of convergence of the risk of any estimate depends on P(|η(X) − d| ≤ ε) + P(|η(X) − (1 − d)| ≤ ε) and on the quality of the estimate for η or an appropriate measure of the size of the class of classifiers, in case of plug-in rules and empirical risk minimizers, respectively. We extend the mathematical framework even further by differentiating between costs associated with the two possible errors: predicting f(X) = 0 whilst Y = 1 and predicting f(X) = 1 whilst Y = 0. Such situations are common in, for instance, medical studies where misclassifying a sick patient as healthy is worse than the opposite.", "Nondeterministic classifiers are defined as those allowed to predict more than one class for some entries from an input space. Given that the true class should be included in predictions and the number of classes predicted should be as small as possible, these kinds of classifiers can be considered as Information Retrieval (IR) procedures. 
In this paper, we propose a family of IR loss functions to measure the performance of nondeterministic learners. After discussing such measures, we derive an algorithm for learning optimal nondeterministic hypotheses. Given an entry from the input space, the algorithm requires the posterior probabilities to compute the subset of classes with the lowest expected loss. From a general point of view, nondeterministic classifiers provide an improvement in the proportion of predictions that include the true class compared to their deterministic counterparts; the price to be paid for this increase is usually a tiny proportion of predictions with more than one class. The paper includes an extensive experimental study using three deterministic learners to estimate posterior probabilities: a multiclass Support Vector Machine (SVM), a Logistic Regression, and a Naive Bayes. The data sets considered comprise both UCI multi-class learning tasks and microarray expressions of different kinds of cancer. We successfully compare nondeterministic classifiers with other alternative approaches. Additionally, we shall see how the quality of posterior probabilities (measured by the Brier score) determines the goodness of nondeterministic predictions." ] }
1609.00451
2514278201
In most classification tasks, there are observations that are ambiguous and therefore difficult to correctly label. Set-valued classifiers output sets of plausible labels rather than a single label, thereby giving a more appropriate and informative treatment to the labeling of ambiguous instances. We introduce a framework for multiclass set-valued classification, where the classifiers guarantee user-defined levels of coverage or confidence (the probability that the true label is contained in the set) while minimizing the ambiguity (the expected size of the output). We first derive oracle classifiers assuming the true distribution to be known. We show that the oracle classifiers are obtained from level sets of the functions that define the conditional probability of each class. Then we develop estimators with good asymptotic and finite sample properties. The proposed estimators build on existing single-label classifiers. The optimal classifier can sometimes output the empty set, but we provide two ...
Another line of related work is @cite_5 and @cite_2 , who introduced a method called conformal prediction'' that yields set-valued classifiers with finite sample confidence guarantees. @cite_1 @cite_3 , @cite_0 , and @cite_7 studied the conformal approach from the point of view of statistical optimality in the unsupervised, regression and binary classification cases, respectively. We make use of conformal ideas in Sections and . Recently, @cite_8 used asymptotic plug-in methods to derive classification confidence sets in the binary case. They control a different quantity, namely, the coverage conditional on @math having a single element. Finally, we notice that although it would seem appealing to aim at controlling the conditional coverage @math , for all @math , which @cite_4 calls object validity,'' Lemma 1 of @cite_0 unfortunately implies that if @math is continuous and @math has distribution-free conditional validity, then @math is trivial, meaning that @math .
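The split (inductive) variant of conformal prediction referenced above can be sketched in a few lines. The nonconformity score 1 - p_hat(y|x) and the toy numbers are assumptions for illustration; any score function yields the same marginal coverage guarantee under exchangeability:

```python
import math

def split_conformal_set(cal_probs, test_probs, alpha=0.1):
    """Split/inductive conformal prediction set.
    cal_probs: estimated probability p_hat(y_i | x_i) of the TRUE label on
               each held-out calibration example.
    test_probs: dict label -> p_hat(label | x_new) for a new point.
    Uses nonconformity score 1 - p_hat; returns every label whose score is
    below the conformal quantile, giving >= 1 - alpha marginal coverage."""
    n = len(cal_probs)
    scores = sorted(1 - p for p in cal_probs)
    k = math.ceil((n + 1) * (1 - alpha))        # conformal quantile index
    q = scores[k - 1] if k <= n else float("inf")
    return {y for y, p in test_probs.items() if 1 - p <= q}

cal = [0.9] * 9 + [0.2]   # true-label probabilities on 10 calibration points
pred = split_conformal_set(cal, {"a": 0.5, "b": 0.1, "c": 0.4}, alpha=0.1)
# -> {"a", "c"}: "b" is too nonconforming to enter the set
```

Note the guarantee is marginal, over the joint draw of calibration and test data; as the discussion of object validity'' above indicates, the stronger conditional guarantee P(Y ∈ H(x) | X = x) ≥ 1 − α cannot be achieved distribution-free without trivial sets.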
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_5" ], "mid": [ "2166693382", "", "2207876304", "1963600315", "2142246398", "1990621008", "2171585602", "1553101044" ], "abstract": [ "Conformal predictors are set predictors that are automatically valid in the sense of having coverage probability equal to or exceeding a given confidence level. Inductive conformal predictors are a computationally efficient version of conformal predictors satisfying the same property of validity. However, inductive conformal predictors have only been known to control unconditional coverage probability. This paper explores various versions of conditional validity and various ways to achieve them using inductive conformal predictors and their modifications. In particular, it discusses a convenient expression of one of the modifications in terms of ROC curves.", "", "Confident prediction is highly relevant in machine learning; for example, in applications such as medical diagnoses, wrong prediction can be fatal. For classification, there already exist procedures that allow to not classify data when the confidence in their prediction is weak. This approach is known as classification with reject option. In the present paper, we provide new methodology for this approach. Predicting a new instance via a confidence set, we ensure an exact control of the probability of classification. Moreover, we show that this methodology is easily implementable and entails attractive theoretical and numerical properties.", "This paper applies conformal prediction techniques to compute simultaneous prediction bands and clustering trees for functional data. These tools can be used to detect outliers and clusters. Both our prediction bands and clustering trees provide prediction sets for the underlying stochastic process with a guaranteed finite sample behavior, under no distributional assumptions. 
The prediction sets are also informative in that they correspond to the high density region of the underlying process. While ordinary conformal prediction has high computational cost for functional data, we use the inductive conformal predictor, together with several novel choices of conformity scores, to simplify the computation. Our methods are illustrated on some real data examples.", "We consider high-dimensional generalized linear models with Lipschitz loss functions, and prove a nonasymptotic oracle inequality for the empirical risk minimizer with Lasso penalty. The penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm. The examples include logistic regression, density estimation and classification with hinge loss. Least squares regression is also discussed.", "We study distribution-free, non-parametric prediction bands with a focus on their finite sample behaviour. First we investigate and develop different notions of finite sample coverage guarantees. Then we give a new prediction band by combining the idea of ‘conformal prediction’ with non-parametric conditional density estimation. The proposed estimator, called COPS (conformal optimized prediction set), always has a finite sample guarantee. Under regularity conditions the estimator converges to an oracle band at a minimax optimal rate. A fast approximation algorithm and a data-driven method for selecting the bandwidth are developed. The method is illustrated in simulated and real data examples.", "Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability e, together with a method that makes a prediction ŷ of a label y, it produces a set of labels, typically containing ŷ, that also contains y with probability 1 – e. 
Conformal prediction can be applied to any method for producing ŷ: a nearest-neighbor method, a support-vector machine, ridge regression, etc. Conformal prediction is designed for an on-line setting in which labels are predicted successively, each one being revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if the successive examples are sampled independently from the same distribution, then the successive predictions will be right 1 – e of the time, even though they are based on an accumulating data set rather than on independent data sets. In addition to the model under which successive examples are sampled independently, other on-line compression models can also use conformal prediction. The widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the topic is provided in Algorithmic Learning in a Random World, by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).", "Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms have been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (assumption of randomness). Another aim of this unique monograph is to outline some limits of predictions: The approach based on algorithmic theory of randomness allows for the proof of impossibility of prediction in certain situations. 
The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness." ] }
1609.00543
2513222658
The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagements, and quality of services. Past works on profiling bots had been focused largely on malicious bots, with the assumption that these bots should be removed. In this work, however, we find many bots that are benign, and propose a new, broader categorization of bots based on their behaviors. This includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and classifier bank. We conduct extensive experiments to evaluate the performances of different classifiers under varying time windows, identify the key features of bots, and infer about bots in a larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts in Twitter. The results provide interesting insights on the behavioral traits of both benign and malicious bots.
A number of studies have been conducted to identify and profile bots in social media. To detect spam bots, Wang @cite_17 utilized content- and graph-based features, derived from the tweet posts and follow-network connectivity respectively. Chu @cite_4 investigated whether a Twitter account is a human, bot, or cyborg. Here a bot was defined as an aggressive or spammy automated account, while a cyborg refers to a bot-assisted human or human-assisted bot. Different from our work, the bots defined in @cite_4 are more of a malicious nature, and the study did not provide further categorization analysis of benign and malicious bots in Twitter.
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "2072715695", "1517046895" ], "abstract": [ "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "As online social networking sites become more and more popular, they have also attracted the attentions of the spammers. In this paper, Twitter, a popular micro-blogging service, is studied as an example of spam bots detection in online social networking sites. A machine learning approach is proposed to distinguish the spam bots from normal ones. 
To facilitate the spam bots detection, three graph-based features, such as the number of friends and the number of followers, are extracted to explore the unique follower and friend relationships among users on Twitter. Three content-based features are also extracted from user's most recent 20 tweets. A real data set is collected from Twitter's public available information using two different methods. Evaluation experiments show that the detection system is efficient and accurate to identify spam bots in Twitter." ] }
1609.00543
2513222658
The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagements, and quality of services. Past works on profiling bots had been focused largely on malicious bots, with the assumption that these bots should be removed. In this work, however, we find many bots that are benign, and propose a new, broader categorization of bots based on their behaviors. This includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and classifier bank. We conduct extensive experiments to evaluate the performances of different classifiers under varying time windows, identify the key features of bots, and infer about bots in a larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts in Twitter. The results provide interesting insights on the behavioral traits of both benign and malicious bots.
To investigate spam bots, Stringhini @cite_21 created honey profiles on Facebook, Twitter and MySpace. By analyzing the collected data, they identified anomalous accounts who contacted the honey profiles and devised features for detecting spam bots. Going further, Lee @cite_20 conducted a 7-month study on Twitter by creating 60 social honeypots to lure "content polluters" (a.k.a. spam bots). Users who follow or message two or more honeypot accounts are automatically assumed to be content polluters. There is also related work on spam bot detection based on social proximity @cite_0 or both social and content proximities @cite_15 . Tavares and Faisal @cite_19 distinguished between personal, managed, and bot accounts in Twitter, according to their tweet time intervals.
{ "cite_N": [ "@cite_21", "@cite_0", "@cite_19", "@cite_15", "@cite_20" ], "mid": [ "1986678144", "2005556331", "1996359281", "146417747", "176212337" ], "abstract": [ "Social networking has become a popular way for users to meet and interact online. Users spend a significant amount of time on popular social network platforms (such as Facebook, MySpace, or Twitter), storing and sharing a wealth of personal information. This information, as well as the possibility of contacting thousands of users, also attracts the interest of cybercriminals. For example, cybercriminals might exploit the implicit trust relationships between users in order to lure victims to malicious websites. As another example, cybercriminals might find personal information valuable for identity theft or to drive targeted spam campaigns. In this paper, we analyze to which extent spam has entered social networks. More precisely, we analyze how spammers who target social networking sites operate. To collect the data about spamming activity, we created a large and diverse set of \"honey-profiles\" on three large social networking sites, and logged the kind of contacts and messages that they received. We then analyzed the collected data and identified anomalous behavior of users who contacted our profiles. Based on the analysis of this behavior, we developed techniques to detect spammers in social networks, and we aggregated their messages in large spam campaigns. Our results show that it is possible to automatically identify the accounts used by spammers, and our analysis was used for take-down efforts in a real-world social network. More precisely, during this study, we collaborated with Twitter and correctly detected and deleted 15,857 spam profiles.", "Recently, Twitter has emerged as a popular platform for discovering real-time information on the Web, such as news stories and people's reaction to them. 
Like the Web, Twitter has become a target for link farming, where users, especially spammers, try to acquire large numbers of follower links in the social network. Acquiring followers not only increases the size of a user's direct audience, but also contributes to the perceived influence of the user, which in turn impacts the ranking of the user's tweets by search engines. In this paper, we first investigate link farming in the Twitter network and then explore mechanisms to discourage the activity. To this end, we conducted a detailed analysis of links acquired by over 40,000 spammer accounts suspended by Twitter. We find that link farming is wide spread and that a majority of spammers' links are farmed from a small fraction of Twitter users, the social capitalists, who are themselves seeking to amass social capital and links by following back anyone who follows them. Our findings shed light on the social dynamics that are at the root of the link farming problem in Twitter network and they have important implications for future designs of link spam defenses. In particular, we show that a simple user ranking scheme that penalizes users for connecting to spammers can effectively address the problem by disincentivizing users from linking with other users simply to gain influence.", "Human behaviour is highly individual by nature, yet statistical structures are emerging which seem to govern the actions of human beings collectively. Here we search for universal statistical laws dictating the timing of human actions in communication decisions. We focus on the distribution of the time interval between messages in human broadcast communication, as documented in Twitter, and study a collection of over 160,000 tweets for three user categories: personal (controlled by one person), managed (typically PR agency controlled) and bot-controlled (automated system). 
To test our hypothesis, we investigate whether it is possible to differentiate between user types based on tweet timing behaviour, independently of the content in messages. For this purpose, we developed a system to process a large amount of tweets for reality mining and implemented two simple probabilistic inference algorithms: 1. a naive Bayes classifier, which distinguishes between two and three account categories with classification performance of 84.6 and 75.8 , respectively and 2. a prediction algorithm to estimate the time of a user's next tweet with an . Our results show that we can reliably distinguish between the three user categories as well as predict the distribution of a user's inter-message time with reasonable accuracy. More importantly, we identify a characteristic power-law decrease in the tail of inter-message time distribution by human users which is different from that obtained for managed and automated accounts. This result is evidence of a universal law that permeates the timing of human decisions in broadcast communication and extends the findings of several previous studies of peer-to-peer communication.", "The availability of microblogging, like Twitter and Sina Weibo, makes it a popular platform for spammers to unfairly overpower normal users with unwanted content via social networks, known as social spamming. The rise of social spamming can significantly hinder the use of microblogging systems for effective information dissemination and sharing. Distinct features of microblogging systems present new challenges for social spammer detection. First, unlike traditional social networks, microblogging allows to establish some connections between two parties without mutual consent, which makes it easier for spammers to imitate normal users by quickly accumulating a large number of \"human\" friends. Second, microblogging messages are short, noisy, and unstructured. 
Traditional social spammer detection methods are not directly applicable to microblogging. In this paper, we investigate how to collectively use network and content information to perform effective social spammer detection in microblogging. In particular, we present an optimization formulation that models the social network and content information in a unified framework. Experiments on a real-world Twitter dataset demonstrate that our proposed method can effectively utilize both kinds of information for social spammer detection.", "The rise in popularity of social networking sites such as Twitter and Facebook has been paralleled by the rise of unwanted, disruptive entities on these networks- — including spammers, malware disseminators, and other content polluters. Inspired by sociologists working to ensure the success of commons and criminologists focused on deterring vandalism and preventing crime, we present the first long-term study of social honeypots for tempting, profiling, and filtering content polluters in social media. Concretely, we report on our experiences via a seven-month deployment of 60 honeypots on Twitter that resulted in the harvesting of 36,000 candidate content polluters. As part of our study, we (1) examine the harvested Twitter users, including an analysis of link payloads, user behavior over time, and followers following network dynamics and (2) evaluate a wide range of features to investigate the effectiveness of automatic content polluter identification." ] }
1609.00543
2513222658
The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagements, and quality of services. Past works on profiling bots had been focused largely on malicious bots, with the assumption that these bots should be removed. In this work, however, we find many bots that are benign, and propose a new, broader categorization of bots based on their behaviors. This includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and classifier bank. We conduct extensive experiments to evaluate the performances of different classifiers under varying time windows, identify the key features of bots, and infer about bots in a larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts in Twitter. The results provide interesting insights on the behavioral traits of both benign and malicious bots.
Ferrara @cite_6 built a web application to test whether a Twitter account behaves like a bot or a human. They used the list of bot and human accounts identified by @cite_20 , and collected their tweets and follow network information. This study, however, covers only malicious bots. Dickerson @cite_9 used network, linguistic, and application-oriented features to distinguish between bots and humans in the 2014 Indian election. Abokhodair @cite_12 studied a network of bots that collectively tweeted about the 2012 Syrian civil war. This study covers both malicious (e.g., phishing) and benign (e.g., testimonial) bots. In contrast to our work, however, their findings are tailored to a specific event (i.e., the civil war) and may not be applicable to other bot types in a larger Twitter population.
{ "cite_N": [ "@cite_9", "@cite_20", "@cite_12", "@cite_6" ], "mid": [ "2020264290", "176212337", "1993784310", "1837843568" ], "abstract": [ "In many Twitter applications, developers collect only a limited sample of tweets and a local portion of the Twitter network. Given such Twitter applications with limited data, how can we classify Twitter users as either bots or humans? We develop a collection of network-, linguistic-, and application-oriented variables that could be used as possible features, and identify specific features that distinguish well between humans and bots. In particular, by analyzing a large dataset relating to the 2014 Indian election, we show that a number of sentiment-related factors are key to the identification of bots, significantly increasing the Area under the ROC Curve (AUROC). The same method may be used for other applications as well.", "The rise in popularity of social networking sites such as Twitter and Facebook has been paralleled by the rise of unwanted, disruptive entities on these networks- — including spammers, malware disseminators, and other content polluters. Inspired by sociologists working to ensure the success of commons and criminologists focused on deterring vandalism and preventing crime, we present the first long-term study of social honeypots for tempting, profiling, and filtering content polluters in social media. Concretely, we report on our experiences via a seven-month deployment of 60 honeypots on Twitter that resulted in the harvesting of 36,000 candidate content polluters. As part of our study, we (1) examine the harvested Twitter users, including an analysis of link payloads, user behavior over time, and followers following network dynamics and (2) evaluate a wide range of features to investigate the effectiveness of automatic content polluter identification.", "Social botnets have become an important phenomenon on social media. 
There are many ways in which social bots can disrupt or influence online discourse, such as, spam hashtags, scam twitter users, and astroturfing. In this paper we considered one specific social botnet in Twitter to understand how it grows over time, how the content of tweets by the social botnet differ from regular users in the same dataset, and lastly, how the social botnet may have influenced the relevant discussions. Our analysis is based on a qualitative coding for approximately 3000 tweets in Arabic and English from the Syrian social bot that was active for 35 weeks on Twitter before it was shutdown. We find that the growth, behavior and content of this particular botnet did not specifically align with common conceptions of botnets. Further we identify interesting aspects of the botnet that distinguish it from regular users.", "Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society." ] }
1609.00543
2513222658
The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagements, and quality of services. Past works on profiling bots had been focused largely on malicious bots, with the assumption that these bots should be removed. In this work, however, we find many bots that are benign, and propose a new, broader categorization of bots based on their behaviors. This includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and classifier bank. We conduct extensive experiments to evaluate the performances of different classifiers under varying time windows, identify the key features of bots, and infer about bots in a larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts in Twitter. The results provide interesting insights on the behavioral traits of both benign and malicious bots.
There are also studies aiming to quantify the susceptibility of social media users to the influence of bots @cite_7 @cite_11 @cite_10 . By embedding their bots into the Facebook network, Boshmaf @cite_10 demonstrated that users are vulnerable to phishing (e.g., exposing their phone number or address). The susceptibility of users is also evident in Twitter @cite_7 @cite_11 . Freitas @cite_1 tried to reverse-engineer the infiltration strategies of malicious Twitter bots in order to understand their functioning. Most recently, Subrahmanian @cite_13 reported the winning solutions of the DARPA Twitter Bot Detection Challenge. Again, however, all these studies deal mainly with malicious bots and ignore benign bots.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_1", "@cite_10", "@cite_11" ], "mid": [ "2278635123", "2165591544", "2164441058", "2157826538", "2290561533" ], "abstract": [ "From politicians and nation states to terrorist groups, numerous organizations reportedly conduct explicit campaigns to influence opinions on social media, posing a risk to freedom of expression. Thus, there is a need to identify and eliminate \"influence bots\"--realistic, automated identities that illicitly shape discussions on sites like Twitter and Facebook--before they get too influential.", "The Social Mediator forum was created to bridge the gaps between the theory and practice of social media research and development. The articles are intended to promote greater awareness of new insights and experiences in the rapidly evolving domain of social media, some of which may influence perspectives and approaches in the more established areas of human-computer interaction. Each article in the forum is made up of several short contributions from people representing different perspectives on a particular topic. Previous installments of this forum have woven together diverse perspectives on the ways that social media is transforming relationships among different stakeholders in the realms of healthcare and government. The current article highlights some of the ways social robots (socialbots)---programs that operate autonomously on social networking sites---are transforming relationships within those sites, and how these transformations may more broadly influence relationships among people and organizations in the future. A recent article in Communications of the ACM called \"The Social Life of Robots\" reported that \"researchers have started to explore the possibilities of 'social' machines capable of working together with minimal human supervision\" [1]. 
That article illuminates recent developments involving interactions between humans and robots in the physical world; this article focuses on the interactions between humans and robots in the virtual world. Our authors are exploring and expanding the frontiers of designing, deploying, and analyzing the behavior and impact of robots operating in online social networks, and they have invited a number of other frontierspeople to share some of their insights, experiences, and future expectations for social robotics.", "Online Social Networks (OSNs) such as Twitter and Facebook have become a significant testing ground for Artificial Intelligence developers who build programs, known as socialbots, that imitate actual users by automating their social-network activities such as forming social links and posting content. Particularly, Twitter users have shown difficulties in distinguishing these socialbots from the human users in their social graphs. Frequently, legitimate users engage in conversations with socialbots. More impressively, socialbots are effective in acquiring human users as followers and exercising influence within them. While the success of socialbots is certainly a remarkable achievement for AI practitioners, their proliferation in the Twitter-sphere opens many possibilities for cybercrime. The proliferation of socialbots in the Twitter-sphere motivates us to assess the characteristics or strategies that make socialbots most likely to succeed. In this direction, we created 120 socialbot accounts in Twitter, which have a profile, follow other users, and generate tweets either by reposting messages that others have posted or by creating their own synthetic tweets. Then, we employ a 2k factorial design experiment in order to quantify the infiltration effectiveness of different socialbot strategies. 
Our analysis is the first of a kind, and reveals what strategies make socialbots successful in the Twitter-sphere.", "Online Social Networks (OSNs) have attracted millions of active users and have become an integral part of today's web ecosystem. Unfortunately, in the wrong hands, OSNs can be used to harvest private user data, distribute malware, control botnets, perform surveillance, spread misinformation, and even influence algorithmic trading. Usually, an adversary starts off by running an infiltration campaign using hijacked or adversary-owned OSN accounts, with an objective to connect with a large number of users in the targeted OSN. In this article, we evaluate how vulnerable OSNs are to a large-scale infiltration campaign run by socialbots: bots that control OSN accounts and mimic the actions of real users. We adopted the design of a traditional web-based botnet and built a prototype of a Socialbot Network (SbN): a group of coordinated programmable socialbots. We operated our prototype on Facebook for 8weeks, and collected data about user behavior in response to a large-scale infiltration campaign. 
Our results show that (1) by exploiting known social behaviors of users, OSNs such as Facebook can be infiltrated with a success rate of up to 80 , (2) subject to user profile privacy settings, a successful infiltration can result in privacy breaches where even more private user data are exposed, (3) given the economics of today's underground markets, running a large-scale infiltration campaign might be profitable but is still not particularly attractive as a sustainable and independent business, (4) the security of socially-aware systems that use or integrate OSN platforms can be at risk, given the infiltration capability of an adversary in OSNs, and (5) defending against malicious socialbots raises a set of challenges that relate to web automation, online-offline identity binding, and usable security.", "Social bots are automatic or semi-automatic computer programs that mimic humans and or human behavior in online social networks. Social bots can attack users (targets) in online social networks to pursue a variety of latent goals, such as to spread information or to influence targets. Without a deep understanding of the nature of such attacks or the susceptibility of users, the potential of social media as an instrument for facilitating discourse or democratic processes is in jeopardy. In this paper, we study data from the Social Bot Challenge 2011 - an experiment conducted by the WebEcologyProject during 2011 - in which three teams implemented a number of social bots that aimed to influence user behavior on Twitter. Using this data, we aim to develop models to (i) identify susceptible users among a set of targets and (ii) predict users’ level of susceptibility. We explore the predictiveness of three different groups of features (network, behavioral and linguistic features) for these tasks. 
Our results suggest that susceptible users tend to use Twitter for a conversational purpose and tend to be more open and social since they communicate with many different users, use more social words and show more affection than non-susceptible users." ] }
1609.00512
2513587678
The goal of a hub-based distance labeling scheme for a network G = (V, E) is to assign a small subset S(u) @math V to each node u @math V, in such a way that for any pair of nodes u, v, the intersection of hub sets S(u) @math S(v) contains a node on the shortest uv-path. The existence of small hub sets, and consequently efficient shortest path processing algorithms, for road networks is an empirical observation. A theoretical explanation for this phenomenon was proposed by (SODA 2010) through a network parameter they called highway dimension, which captures the size of a hitting set for a collection of shortest paths of length at least r intersecting a given ball of radius 2r. In this work, we revisit this explanation, introducing a more tractable (and directly comparable) parameter based solely on the structure of shortest-path spanning trees, which we call skeleton dimension. We show that skeleton dimension admits an intuitive definition for both directed and undirected graphs, provides a way of computing labels more efficiently than by using highway dimension, and leads to comparable or stronger theoretical bounds on hub set size.
The notion of @math -preserving distance labeling, first introduced by Bollobás @cite_3 , describes a labeling scheme correctly encoding every distance that is at least @math . @cite_3 presents such a @math -preserving scheme of size @math . This was recently improved by @cite_8 to a @math -preserving scheme of size @math . Together with an observation that all distances smaller than @math can be stored directly, this results in a labeling scheme of size @math , where @math . For sparse graphs, this is @math .
{ "cite_N": [ "@cite_3", "@cite_8" ], "mid": [ "2082352769", "756611012" ], "abstract": [ "For an unweighted graph @math , @math is a subgraph if @math , and @math is a Steiner graph if @math , and for any pair of vertices @math , the distance between them in @math (denoted @math ) is at least the distance between them in @math (denoted @math ). In this paper we introduce the notion of distance preserver. A subgraph (resp., Steiner graph) @math of a graph @math is a subgraph (resp., Steiner) @math -preserver of @math if for every pair of vertices @math with @math , @math . We show that any graph (resp., digraph) has a subgraph @math -preserver with at most @math edges (resp., arcs), and there are graphs and digraphs for which any undirected Steiner @math -preserver contains @math edges. However, we show that if one allows a directed Steiner (diSteiner) @math -preserver, then these bounds can be improved. Specifically, we show that for any graph or digraph there exists a diSteiner @math -preserver with @math arcs, and that this result is tight up to a constant factor. We also study @math -preserving distance labeling schemes, that are labeling schemes that guarantee precise calculation of distances between pairs of vertices that are at a distance of at least @math one from another. We show that there exists a @math -preserving labeling scheme with labels of size @math , and that labels of size @math are required for any @math -preserving labeling scheme.", "A distance labeling scheme labels the n nodes of a graph with binary strings such that, given the labels of any two nodes, one can determine the distance in the graph between the two nodes by looking only at the labels. A D-preserving distance labeling scheme only returns precise distances between pairs of nodes that are at distance at least D from each other. In this paper we consider distance labeling schemes for the classical case of unweighted and undirected graphs. 
We present the first distance labeling scheme of size o(n) for sparse graphs (and hence bounded degree graphs). This addresses an open problem by Gavoille et al. [J. Algo. 2004], hereby separating the complexity from general graphs which require Θ(n) size Moon [Proc. of Glasgow Math. Association 1965]. As an intermediate result we give a O(n/D log" ] }
1609.00512
2513587678
The goal of a hub-based distance labeling scheme for a network G = (V, E) is to assign a small subset S(u) @math V to each node u @math V, in such a way that for any pair of nodes u, v, the intersection of hub sets S(u) @math S(v) contains a node on the shortest uv-path. The existence of small hub sets, and consequently efficient shortest path processing algorithms, for road networks is an empirical observation. A theoretical explanation for this phenomenon was proposed by (SODA 2010) through a network parameter they called highway dimension, which captures the size of a hitting set for a collection of shortest paths of length at least r intersecting a given ball of radius 2r. In this work, we revisit this explanation, introducing a more tractable (and directly comparable) parameter based solely on the structure of shortest-path spanning trees, which we call skeleton dimension. We show that skeleton dimension admits an intuitive definition for both directed and undirected graphs, provides a way of computing labels more efficiently than by using highway dimension, and leads to comparable or stronger theoretical bounds on hub set size.
Highway dimension @math guarantees the existence of distance labels of size @math where @math is the weighted diameter of the graph @cite_14 . However, when restricting to polynomial-time algorithms, such labels can only be approximated within a @math factor using shortest path cover algorithms @cite_14 or a @math factor with a more involved procedure based on VC-dimension @cite_12 . In any case, this requires an all-pairs shortest-path computation. For large networks, labels can be computed in practice when classical heuristics such as contraction hierarchies (CH) can be performed @cite_14 @cite_19 @cite_22 . Low highway dimension guarantees that there exists an elimination ordering for CH such that the graph produced has bounded size @cite_14 . However, it does not ensure running time faster than an all-pairs shortest-path computation.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_22", "@cite_12" ], "mid": [ "2112513979", "", "2146583842", "85521454" ], "abstract": [ "[SODA 2010] have recently presented a theoretical analysis of several practical point-to-point shortest path algorithms based on modeling road networks as graphs with low highway dimension. They also analyze a labeling algorithm. While no practical implementation of this algorithm existed, it has the best time bounds. This paper describes an implementation of the labeling algorithm that is faster than any existing method on continental road networks.", "", "We study hierarchical hub labelings for computing shortest paths. Our new theoretical insights into the structure of hierarchical labels lead to faster preprocessing algorithms, making the labeling approach practical for a wider class of graphs. We also find smaller labels for road networks, improving the query speed.", "We explore the relationship between VC-dimension and graph algorithm design. In particular, we show that set systems induced by sets of vertices on shortest paths have VC-dimension at most two. This allows us to use a result from learning theory to improve time bounds on query algorithms for the point-to-point shortest path problem in networks of low highway dimension, such as road networks. We also refine the definitions of highway dimension and related concepts, making them more general and potentially more relevant to practice. In particular, we define highway dimension in terms of set systems induced by shortest paths, and give cardinality-based and average case definitions." ] }
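The hub-labeling query that the record above describes (the intersection S(u) ∩ S(v) must contain a node on a shortest u-v path) can be sketched as follows; the hub sets and distances are a hypothetical toy example, not the labeling produced by any of the cited algorithms.

```python
# Minimal sketch of a hub-labeling distance query: each node stores a
# label mapping hub -> distance to that hub; the u-v distance is the
# minimum of dist(u, h) + dist(h, v) over hubs h common to both labels.
# The toy labels below are a hypothetical example on the path a-b-c-d.

def hub_distance(label_u, label_v):
    common = label_u.keys() & label_v.keys()
    if not common:
        return float("inf")  # cover property violated, or disconnected
    return min(label_u[h] + label_v[h] for h in common)

labels = {
    "a": {"a": 0, "b": 1},
    "b": {"b": 0},
    "c": {"b": 1, "c": 0},
    "d": {"b": 2, "d": 0},
}

print(hub_distance(labels["a"], labels["d"]))  # via common hub b: 1 + 2 = 3
```

The query cost is linear in the label sizes, which is why bounds on hub set size (via highway or skeleton dimension) translate directly into query time.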
1609.00161
2510678786
Graph clustering is widely used in many data analysis applications. In this paper we propose several parallel graph clustering algorithms based on Monte Carlo simulations and expectation maximization in the context of stochastic block models. We apply those algorithms to the specific problems of recommender systems and social network anonymization. We compare the experimental results to previous propositions.
) is closely related to the (originally defined in @cite_4 ) and more precisely to the algorithm of @cite_6 : see .
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2107107106", "2339289145" ], "abstract": [ "Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.", "With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. 
The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets." ] }
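The expectation-maximization idea in the records above can be illustrated by its simplest ingredient: given fixed hard group assignments, the maximum-likelihood estimate of a stochastic block model's connection probabilities (the M-step). This is a conceptual sketch with toy data, not the mixed-membership or scalable inference algorithms of the cited papers.

```python
# Sketch of the M-step for a stochastic block model with hard assignments:
# the ML estimate of the connection probability between blocks r and s is
# (#observed edges between r and s) / (#possible node pairs between r and s).
# Nodes, edges, and assignments below are a hypothetical toy example.
from itertools import combinations
from collections import defaultdict

def estimate_block_probs(nodes, edges, assign):
    edge_set = {frozenset(e) for e in edges}
    count, total = defaultdict(int), defaultdict(int)
    for u, v in combinations(nodes, 2):
        key = tuple(sorted((assign[u], assign[v])))  # unordered block pair
        total[key] += 1
        if frozenset((u, v)) in edge_set:
            count[key] += 1
    return {k: count[k] / total[k] for k in total}

probs = estimate_block_probs(
    nodes=[0, 1, 2, 3],
    edges=[(0, 1), (2, 3), (0, 2)],
    assign={0: "A", 1: "A", 2: "B", 3: "B"},
)
print(probs[("A", "B")])  # 1 edge out of 4 cross pairs -> 0.25
```

A full EM loop would alternate this step with re-estimating (soft) assignments from the current block probabilities.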
1609.00129
2949232997
Detection of partially occluded objects is a challenging computer vision problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.
Since there is a multitude of work in the area of face detection, a complete discussion of all papers is out of scope of this work. Hence, we focus our discussion only on seminal work and closely related approaches in the field and refer to @cite_26 for a more complete survey.
{ "cite_N": [ "@cite_26" ], "mid": [ "2160532515" ], "abstract": [ "We present a comprehensive and survey for face detection 'in-the-wild'.We critically describe the advances in the three main families of algorithms.We comment on the performance of the state-of-the-art in the current benchmarks.We outline future research avenues on the topic and beyond. Face detection is one of the most studied topics in computer vision literature, not only because of the challenging nature of face as an object, but also due to the countless applications that require the application of face detection as a first step. During the past 15years, tremendous progress has been made due to the availability of data in unconstrained capture conditions (so-called 'in-the-wild') through the Internet, the effort made by the community to develop publicly available benchmarks, as well as the progress in the development of robust computer vision algorithms. In this paper, we survey the recent advances in real-world face detection techniques, beginning with the seminal Viola-Jones face detector methodology. These techniques are roughly categorized into two general schemes: rigid templates, learned mainly via boosting based methods or by the application of deep neural networks, and deformable models that describe the face by its parts. Representative methods will be described in detail, along with a few additional successful methods that we briefly go through at the end. Finally, we survey the main databases used for the evaluation of face detection algorithms and recent benchmarking efforts, and discuss the future of face detection." ] }
1609.00129
2949232997
Detection of partially occluded objects is a challenging computer vision problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.
A seminal work is the method of Viola and Jones @cite_30 . They propose a realtime detector using a cascade of simple decision stumps. These classifiers are based on area-difference features computed over differently sized rectangles. To accelerate feature computation, they employ integral images for computing rectangular areas in constant time, independent of the rectangle size.
{ "cite_N": [ "@cite_30" ], "mid": [ "2137401668" ], "abstract": [ "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second." ] }
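The constant-time rectangle sums behind the Haar-like features in the record above come from the integral image; a minimal sketch of the construction and the four-lookup query (toy data, not the Viola-Jones implementation):

```python
# Minimal sketch of an integral (summed-area) image: ii[y][x] holds the sum
# of all pixels above and to the left of (x, y), so any rectangle sum takes
# four lookups regardless of rectangle size -- the trick that makes
# Haar-like area-difference features cheap to evaluate.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]  # zero-padded border
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    # Sum over the rectangle with inclusive corners (x0, y0)..(x1, y1).
    return ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1] - ii[y1 + 1][x0] + ii[y0][x0]

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 1, 1))  # whole image: 1+2+3+4 = 10
```

A Haar-like feature is then just the difference of two or three such rectangle sums.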
1609.00129
2949232997
Detection of partially occluded objects is a challenging computer vision problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.
Modern boosting based detectors use linear classifiers on SURF based features @cite_27 , exemplars @cite_4 , and leverage landmark information with shape-indexed features for classification @cite_49 . Other boosting based detectors compute integral images on oriented gradient features as well as LUV channels and use shallow boosted decision trees @cite_46 or constrain the features on the feature channels to be block sized @cite_43 . Additionally, @cite_6 proposes CNN features for the boosting framework.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_43", "@cite_27", "@cite_49", "@cite_46" ], "mid": [ "1966822758", "", "2041497292", "2100807570", "204612701", "1849007038" ], "abstract": [ "Despite the fact that face detection has been studied intensively over the past several decades, the problem is still not completely solved. Challenging conditions, such as extreme pose, lighting, and occlusion, have historically hampered traditional, model-based methods. In contrast, exemplar-based face detection has been shown to be effective, even under these challenging conditions, primarily because a large exemplar database is leveraged to cover all possible visual variations. However, relying heavily on a large exemplar database to deal with the face appearance variations makes the detector impractical due to the high space and time complexity. We construct an efficient boosted exemplar-based face detector which overcomes the defect of the previous work by being faster, more memory efficient, and more accurate. In our method, exemplars as weak detectors are discriminatively trained and selectively assembled in the boosting framework which largely reduces the number of required exemplars. Notably, we propose to include non-face images as negative exemplars to actively suppress false detections to further improve the detection accuracy. We verify our approach over two public face detection benchmarks and one personal photo album, and achieve significant improvement over the state-of-the-art algorithms in terms of both accuracy and efficiency.", "", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. 
To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in Viola-Jones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS", "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. 
Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.", "Face detection is a mature problem in computer vision. While diverse high performing face detectors have been proposed in the past, we present two surprising new top performance results. First, we show that a properly trained vanilla DPM reaches top performance, improving over commercial and research systems. Second, we show that a detector based on rigid templates - similar in structure to the Viola&Jones detector - can reach similar top performance on this task. Importantly, we discuss issues with existing evaluation benchmark and propose an improved procedure." ] }
1609.00129
2949232997
Detection of partially occluded objects is a challenging computer vision problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.
Another family of detectors are DPM-based detectors @cite_15 , which learn root and part templates. The responses of these templates are combined with a deformation model to compute a confidence score. Extensions to DPM have been proposed which handle occlusions @cite_17 , improve runtime speed @cite_50 , and leverage manually annotated part positions in a tree structure @cite_40 .
{ "cite_N": [ "@cite_40", "@cite_15", "@cite_50", "@cite_17" ], "mid": [ "", "2168356304", "2056025798", "2005264304" ], "abstract": [ "", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "This paper solves the speed bottleneck of deformable part model (DPM), while maintaining the accuracy in detection on challenging datasets. Three prohibitive steps in cascade version of DPM are accelerated, including 2D correlation between root filter and feature map, cascade part pruning and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be calculated by more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low rank filter in a discriminative manner. For cascade part pruning, neighborhood aware cascade is proposed to capture the dependence in neighborhood regions for aggressive pruning. Instead of explicit computation of part scores, hypotheses can be pruned by scores of neighborhoods under the first order approximation. 
For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. Extensive experiments show that (a) the proposed method is 4 times faster than the current fastest DPM method with similar accuracy on Pascal VOC, (b) the proposed method achieves state-of-the-art accuracy on pedestrian and face detection task with frame-rate speed.", "The presence of occluders significantly impacts performance of systems for object recognition. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and keypoint localization that explicitly models occlusions of parts. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for keypoint localization including challenging sets featuring significant occlusion. We find that the addition of an explicit model of occlusion yields a system that outperforms existing approaches in keypoint localization accuracy." ] }
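The DPM scoring rule summarized above (template responses combined with a deformation model) can be sketched as the root response plus, for each part, the best part response minus a deformation penalty around its anchor; the responses and cost below are hypothetical toy values, not learned filters.

```python
# Conceptual sketch of a DPM-style detection score: root filter response
# plus, per part, max over candidate placements of (part response minus a
# quadratic deformation cost from the part's anchor position).
# part_maps / anchors / deform are hypothetical stand-ins for learned models.

def dpm_score(root_response, part_maps, anchors, deform=0.5):
    score = root_response
    for part_map, (ax, ay) in zip(part_maps, anchors):
        best = max(
            resp - deform * ((x - ax) ** 2 + (y - ay) ** 2)
            for (x, y), resp in part_map.items()
        )
        score += best
    return score

# One part with responses at two candidate placements; anchor at (0, 0).
part_map = {(0, 0): 1.0, (1, 0): 2.0}
print(dpm_score(3.0, [part_map], [(0, 0)]))  # max(1.0, 2.0 - 0.5) = 1.5 -> 4.5
```

In real DPMs the inner maximization is computed for all root positions at once with a generalized distance transform, which is what makes the model tractable.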
1609.00129
2949232997
Detection of partially occluded objects is a challenging computer vision problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.
Further, there are complementary approaches improving existing detectors by domain adaptation techniques @cite_47 , and exemplar-based methods using retrieval techniques to detect and align faces @cite_2 @cite_24 .
{ "cite_N": [ "@cite_24", "@cite_47", "@cite_2" ], "mid": [ "", "2169066052", "2015268479" ], "abstract": [ "", "We propose an unsupervised detector adaptation algorithm to adapt any offline trained face detector to a specific collection of images, and hence achieve better accuracy. The core of our detector adaptation algorithm is a probabilistic elastic part (PEP) model, which is offline trained with a set of face examples. It produces a statistically aligned part based face representation, namely the PEP representation. To adapt a general face detector to a collection of images, we compute the PEP representations of the candidate detections from the general face detector, and then train a discriminative classifier with the top positives and negatives. Then we re-rank all the candidate detections with this classifier. This way, a face detector tailored to the statistics of the specific image collection is adapted from the original detector. We present extensive results on three datasets with two state-of-the-art face detectors. The significant improvement of detection accuracy over these state of-the-art face detectors strongly demonstrates the efficacy of the proposed face detector adaptation algorithm.", "Detecting faces in uncontrolled environments continues to be a challenge to traditional face detection methods due to the large variation in facial appearances, as well as occlusion and clutter. In order to overcome these challenges, we present a novel and robust exemplar-based face detector that integrates image retrieval and discriminative learning. A large database of faces with bounding rectangles and facial landmark locations is collected, and simple discriminative classifiers are learned from each of them. A voting-based method is then proposed to let these classifiers cast votes on the test image through an efficient image retrieval technique. 
As a result, faces can be very efficiently detected by selecting the modes from the voting maps, without resorting to exhaustive sliding window-style scanning. Moreover, due to the exemplar-based framework, our approach can detect faces under challenging conditions without explicitly modeling their variations. Evaluation on two public benchmark datasets shows that our new face detection approach is accurate and efficient, and achieves the state-of-the-art performance. We further propose to use image retrieval for face validation (in order to remove false positives) and for face alignment landmark localization. The same methodology can also be easily generalized to other face-related tasks, such as attribute recognition, as well as general object detection." ] }
1609.00129
2949232997
Detection of partially occluded objects is a challenging computer vision problem. Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.
Recently, CNNs became increasingly popular due to their success in recognition and detection problems, e.g. @cite_35 @cite_7 . They successively apply convolution filters followed by non-linear activation functions. Early work in this area applies a small number of convolution filters followed by sum or average pooling on the image @cite_51 @cite_45 @cite_55 . More recent work leverages a larger number of filters which are pre-trained on large datasets, e.g. ILSVRC @cite_12 , and fine-tuned on face datasets. These approaches are capable of detecting faces in multiple orientations and poses, e.g. @cite_42 . Furthermore, @cite_13 uses a coarse-to-fine neural network cascade to efficiently detect faces in realtime. Successive networks in the cascade have a larger number of parameters and use previous features of the cascade as inputs. @cite_20 propose a large dataset with attribute-annotated faces to learn five face attribute CNNs for predicting hair, eye, nose, mouth and beard attributes (e.g. black hair vs. blond hair vs. bald hair). Classifier responses are used to re-rank object proposals, which are then classified by a CNN as face vs. non-face.
{ "cite_N": [ "@cite_13", "@cite_35", "@cite_7", "@cite_55", "@cite_42", "@cite_45", "@cite_20", "@cite_51", "@cite_12" ], "mid": [ "1934410531", "2102605133", "", "", "1970456555", "", "2950557924", "2120284346", "2117539524" ], "abstract": [ "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. 
Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "", "", "In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose/landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. 
The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks.", "", "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.", "In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to spl plusmn 20 degrees in image plane and turned up to spl plusmn 60 degrees, in complex real world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. 
The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
1609.00062
2516715920
Future Internet-of-Things (IoT) is expected to wirelessly connect billions of low-complexity devices. For wireless information transfer (WIT) in IoT, high density of IoT devices and their ad hoc communication result in strong interference which acts as a bottleneck on WIT. Furthermore, battery replacement for the massive number of IoT devices is difficult if not infeasible, making wireless energy transfer (WET) desirable. This motivates: (i) the design of full-duplex WIT to reduce latency and enable efficient spectrum utilization, and (ii) the implementation of passive IoT devices using backscatter antennas that enable WET from one device (reader) to another (tag). However, the resultant increase in the density of simultaneous links exacerbates the interference issue. This issue is addressed in this paper by proposing the design of full-duplex backscatter communication (BackCom) networks, where a novel multiple-access scheme based on time-hopping spread-spectrum (TH-SS) is designed to enable both one-way WET and two-way WIT in coexisting backscatter reader-tag links. Comprehensive performance analysis of BackCom networks is presented in this paper, including forward backward bit-error rates and WET efficiency and outage probabilities, which accounts for energy harvesting at tags, non-coherent and coherent detection at tags and readers, respectively, and the effects of asynchronous transmissions.
For this reason, active research has been conducted on designing techniques for various types of BackCom systems and networks that are more complex than traditional RFID systems @cite_5 @cite_11 @cite_7 @cite_8 @cite_16 . One focus of this research is the design of multiple-access BackCom networks where a single reader serves multiple tags. As proposed in @cite_5 , collisions can be avoided by directional beamforming at the reader and by decoupling tags covered by the same beam using frequency-shift keying modulation. Subsequently, alternative multiple-access schemes were proposed in @cite_11 and @cite_7 based on time-division multiple access and collision-detection carrier-sensing based random access, respectively. A novel approach to collision avoidance was presented in @cite_8 , which treats backscatter transmissions by tags as a sparse code and decodes multi-tag data using a compressive-sensing algorithm.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2062920601", "", "2142095519", "2595612820", "1975147464" ], "abstract": [ "Reliable and energy-efficient reading of Radio Frequency IDentification (RFID) tags is of utmost importance, especially in mobile and dense tag settings. We identify tag collisions as a main source of inefficiency in terms of wasting both medium access control (MAC) frame slots and reader’s energy. We propose modulation silencing (MS), a reader-tag interaction framework to limit the effect of tag collisions. Utilizing relatively simple circuitry at the tag, MS enhances the performance of existing anti-collision protocols by allowing readers to terminate collision slots once a decoding violation is detected. With shorter collision slots, we revisit the performance metrics and introduce a new generalized time efficiency metric and an optimal frame selection formula that takes into consideration the MS effects. Through analytical solutions and extensive simulations, we show that the use of MS results in significant performance gains under various scenarios.", "", "Sensor collision (interference) is studied in a large network of low bit-rate sensors that communicate via backscatter, i.e. modulate the reflection of a common carrier transmitted by a central reader. Closed-form analysis is provided, quantifying sensor collision (interference) in high-density, backscatter sensor networks (BSN), as a function of number of tags and aggregate bandwidth. Analysis is applicable to a broad class of sensor subcarrier modulations, propagation environments and reader antenna directivity patterns. It is discovered that anti-collision performance in high-density backscatter sensor networks is feasible provided that appropriate modulation is used at each sensor. 
That is due to the round-trip nature of backscatter communication as well as the extended target range, which both impose stringent requirements on spectrum efficiency, not easily met by all modulations. Furthermore, aggregate bandwidth savings for given anti-collision performance are quantified, when simple division techniques on subcarrier (modulating) frequency and space (via moderately directive hub antenna) are combined.", "Future Internet-of-Things (IoT) will connect billions of small computing devices embedded in the environment and support their device-to-device (D2D) communication. Powering the massive number of embedded devices is a key challenge of designing IoT, since batteries increase the devices’ form factors and battery recharging replacement is difficult. To tackle this challenge, we propose a novel network architecture that enables D2D communication between passive nodes by integrating wireless power transfer and backscatter communication, which is called a wirelessly powered backscatter communication (WP-BackCom) network. In this network, standalone power beacons (PBs) are deployed for wirelessly powering nodes by beaming unmodulated carrier signals to targeted nodes. Provisioned with a backscatter antenna, a node transmits data to an intended receiver by modulating and reflecting a fraction of a carrier signal. Such transmission by backscatter consumes orders-of-magnitude less power than a traditional radio. Thereby, the dense deployment of low-complexity PBs with high transmission power can power a large-scale IoT. In this paper, a WP-BackCom network is modeled as a random Poisson cluster process in the horizontal plane where PBs are Poisson distributed and active ad hoc pairs of backscatter communication nodes with fixed separation distances form random clusters centered at PBs. The backscatter nodes can harvest energy from and backscatter carrier signals transmitted by PBs. 
Furthermore, the transmission power of each node depends on the distance from the associated PB. Applying stochastic geometry, the network coverage probability and transmission capacity are derived and optimized as functions of backscatter parameters, including backscatter duty cycle, reflection coefficient, and the PB density. The effects of the parameters on network performance are quantified.", "Scatter radio achieves communication by reflection and requires low-cost and low-power RF front-ends. However, its use in wireless sensor networks (WSNs) is limited, since commercial scatter radio (e.g. RFID) offers short ranges of a few tens of meters. This work redesigns scatter radio systems and maximizes range through non-classic bistatic architectures: the carrier emitter is detached from the reader. It is shown that conventional radio receivers may show a potential 3dB performance loss, since they do not exploit the correct signal model for scatter radio links. Receivers for on-off-keying (OOK) and frequency-shift keying (FSK) that overcome the frequency offset between the carrier emitter and the reader are presented. Additionally, non-coherent designs are also offered. This work emphasizes that sensor tag design should accompany receiver design. Impact of important parameters such as the antenna structural mode are presented through bit error rate (BER) results. Experimental measurements corroborate the long-range ability of bistatic radio; ranges of up to 130 meters with 20 milliwatts of carrier power are experimentally demonstrated, with commodity software radio and no directional antennas. Therefore, bistatic scatter radio may be viewed as a key enabling technology for large-scale, low-cost and low-power WSNs." ] }
1609.00062
2516715920
Future Internet-of-Things (IoT) is expected to wirelessly connect billions of low-complexity devices. For wireless information transfer (WIT) in IoT, high density of IoT devices and their ad hoc communication result in strong interference which acts as a bottleneck on WIT. Furthermore, battery replacement for the massive number of IoT devices is difficult if not infeasible, making wireless energy transfer (WET) desirable. This motivates: (i) the design of full-duplex WIT to reduce latency and enable efficient spectrum utilization, and (ii) the implementation of passive IoT devices using backscatter antennas that enable WET from one device (reader) to another (tag). However, the resultant increase in the density of simultaneous links exacerbates the interference issue. This issue is addressed in this paper by proposing the design of full-duplex backscatter communication (BackCom) networks, where a novel multiple-access scheme based on time-hopping spread-spectrum (TH-SS) is designed to enable both one-way WET and two-way WIT in coexisting backscatter reader-tag links. Comprehensive performance analysis of BackCom networks is presented in this paper, including forward backward bit-error rates and WET efficiency and outage probabilities, which accounts for energy harvesting at tags, non-coherent and coherent detection at tags and readers, respectively, and the effects of asynchronous transmissions.
IoT devices with sensing and computing capabilities consume more power than simple RFID tags and also require much longer IT/ET ranges (RFID ranges are limited to only several meters). This calls for techniques that enhance the ET efficiency of BackCom systems by leveraging the rich results from the popular area of wireless power transfer (e.g., see the surveys in @cite_4 @cite_0 ). In @cite_14 , it was proposed that a reader be provisioned with multiple antennas to beam energy to multiple tags. An algorithm was also provided therein for the reader to estimate the forward-link channel, which is required for energy beamforming, using the backscattered pilot signal also transmitted by the reader.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_4" ], "mid": [ "2118588134", "2964056649", "1964974987" ], "abstract": [ "The performance of wireless communication is fundamentally constrained by the limited battery life of wireless devices, the operations of which are frequently disrupted due to the need of manual battery replacement recharging. The recent advance in RF-enabled wireless energy transfer (WET) technology provides an attractive solution named wireless powered communication (WPC), where the wireless devices are powered by dedicated wireless power transmitters to provide continuous and stable microwave energy over the air. As a key enabling technology for truly perpetual communications, WPC opens up the potential to build a network with larger throughput, higher robustness, and increased flexibility compared to its battery-powered counterpart. However, the combination of wireless energy and information transmissions also raises many new research problems and implementation issues that need to be addressed. In this article, we provide an overview of stateof- the-art RF-enabled WET technologies and their applications to wireless communications, highlighting the key design challenges, solutions, and opportunities ahead.", "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. 
Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10 , compared to that obtained using the perfect F-CSI.", "The advancements in microwave power transfer (MPT) over recent decades have enabled wireless power transfer over long distances. The latest breakthroughs in wireless communication - massive MIMO, small cells, and millimeterwave communication - make wireless networks suitable platforms for implementing MPT. This can lead to the elimination of the “last wires” connecting mobile devices to the grid for recharging, thereby tackling a huge long-standing ICT challenge. Furthermore, the seamless integration between MPT and wireless communication opens up a new area called wirelessly powered communications (WPC) where many new research directions arise, such as simultaneous information and power transfer, WPC network architectures, and techniques for safe and efficient WPC. This article provides an introduction to WPC by describing the key features of WPC, shedding light on a set of frequently asked questions, and identifying the key design issues and discussing possible solutions." ] }
1609.00062
2516715920
Future Internet-of-Things (IoT) is expected to wirelessly connect billions of low-complexity devices. For wireless information transfer (WIT) in IoT, high density of IoT devices and their ad hoc communication result in strong interference which acts as a bottleneck on WIT. Furthermore, battery replacement for the massive number of IoT devices is difficult if not infeasible, making wireless energy transfer (WET) desirable. This motivates: (i) the design of full-duplex WIT to reduce latency and enable efficient spectrum utilization, and (ii) the implementation of passive IoT devices using backscatter antennas that enable WET from one device (reader) to another (tag). However, the resultant increase in the density of simultaneous links exacerbates the interference issue. This issue is addressed in this paper by proposing the design of full-duplex backscatter communication (BackCom) networks, where a novel multiple-access scheme based on time-hopping spread-spectrum (TH-SS) is designed to enable both one-way WET and two-way WIT in coexisting backscatter reader-tag links. Comprehensive performance analysis of BackCom networks is presented in this paper, including forward backward bit-error rates and WET efficiency and outage probabilities, which accounts for energy harvesting at tags, non-coherent and coherent detection at tags and readers, respectively, and the effects of asynchronous transmissions.
The wireless ET efficiency can also be enhanced by reader cooperation. For example, multiple readers are coordinated to perform ET (and IT) to multiple tags as proposed in @cite_1 . The implementation of such designs requires BackCom network architectures with centralized control. However, IoT relies primarily on distributed (D2D) communication. Large-scale distributed D2D BackCom networks are modeled and analyzed in @cite_16 using stochastic geometry, where tags are wirelessly powered by dedicated stations (called power beacons). In particular, the network transmission capacity, which measures the network spatial throughput, was derived and maximized as a function of backscatter parameters including the duty cycle and the reflection coefficient. Instead of relying on peer-to-peer ET, an alternative approach to powering IoT devices is to harvest ambient RF energy from transmissions by WiFi access points or TV towers @cite_3 .
{ "cite_N": [ "@cite_16", "@cite_1", "@cite_3" ], "mid": [ "2595612820", "2006651501", "" ], "abstract": [ "Future Internet-of-Things (IoT) will connect billions of small computing devices embedded in the environment and support their device-to-device (D2D) communication. Powering the massive number of embedded devices is a key challenge of designing IoT, since batteries increase the devices’ form factors and battery recharging replacement is difficult. To tackle this challenge, we propose a novel network architecture that enables D2D communication between passive nodes by integrating wireless power transfer and backscatter communication, which is called a wirelessly powered backscatter communication (WP-BackCom) network. In this network, standalone power beacons (PBs) are deployed for wirelessly powering nodes by beaming unmodulated carrier signals to targeted nodes. Provisioned with a backscatter antenna, a node transmits data to an intended receiver by modulating and reflecting a fraction of a carrier signal. Such transmission by backscatter consumes orders-of-magnitude less power than a traditional radio. Thereby, the dense deployment of low-complexity PBs with high transmission power can power a large-scale IoT. In this paper, a WP-BackCom network is modeled as a random Poisson cluster process in the horizontal plane where PBs are Poisson distributed and active ad hoc pairs of backscatter communication nodes with fixed separation distances form random clusters centered at PBs. The backscatter nodes can harvest energy from and backscatter carrier signals transmitted by PBs. Furthermore, the transmission power of each node depends on the distance from the associated PB. Applying stochastic geometry, the network coverage probability and transmission capacity are derived and optimized as functions of backscatter parameters, including backscatter duty cycle, reflection coefficient, and the PB density. 
The effects of the parameters on network performance are quantified.", "This paper studies the simultaneous wireless information and power transfer (SWIPT) in a multiuser wireless system, in which distributed transmitters send independent messages to their respective receivers, and at the same time cooperatively transmit wireless power to the receivers via energy beamforming. Accordingly, from the wireless information transmission (WIT) perspective, the system of interest can be modeled as the classic interference channel, while it also can be regarded as a distributed multiple-input multiple-output (MIMO) system for collaborative wireless energy transmission (WET). To enable both information decoding (ID) and energy harvesting (EH) in SWIPT, we adopt the low-complexity time switching operation at each receiver to switch between the ID and EH modes over scheduled time. For the hybrid system, we aim to characterize the achievable rate-energy (R–E) trade-offs by various transmitter-side collaboration schemes. Specifically, to facilitate the collaborative energy beamforming, we propose a new signal splitting scheme at the transmitters, where each transmit signal is generally split into an information signal and an energy signal for WIT and WET, respectively. With this new scheme, first, we study the two-user SWIPT system over the fading channel and derive the optimal mode switching rule at the receivers as well as the corresponding transmit signal optimization to achieve various R-E trade-offs. We also compare the R-E performance of our proposed scheme with transmit energy beamforming and signal splitting against two existing schemes with partial or no cooperation of the transmitters. 
Next, the general case of SWIPT systems with more than two users is studied, for which we propose a practical transmit collaboration scheme by extending the result for the two-user case: we group users into different pairs and apply the cooperation schemes obtained in the two-user case to each paired group. Furthermore, we present a benchmarking scheme based on joint cooperation of all the transmitters inspired by the principle of interference alignment , against which the performance of the proposed scheme is compared.", "" ] }
1609.00221
2951511174
In this paper we present a simple yet effective approach to extend without supervision any object proposal from static images to videos. Unlike previous methods, these spatio-temporal proposals, to which we refer as tracks, are generated relying on little or no visual content by only exploiting bounding boxes spatial correlations through time. The tracks that we obtain are likely to represent objects and are a general-purpose tool to represent meaningful video content for a wide variety of tasks. For unannotated videos, tracks can be used to discover content without any supervision. As further contribution we also propose a novel and dataset-independent method to evaluate a generic object proposal based on the entropy of a classifier output response. We experiment on two competitive datasets, namely YouTube Objects and ILSVRC-2015 VID.
Object proposals @cite_3 provide a relatively small set of bounding boxes likely to contain salient regions in images, according to some objectness measure. Different proposals, such as EdgeBoxes @cite_10 , are commonly used in image-related tasks to reduce the number of candidate regions to evaluate. Recently, there have been some attempts to adapt the paradigm of object proposals to videos to solve specific tasks, by generating consistent spatio-temporal volumes. In @cite_4 motion segmentation is exploited to extract a single spatio-temporal tube per video, in order to perform video classification. The task of object discovery is tackled in @cite_7 by generating a set of boxes using a foreground estimation method and matching them across frames using both geometric and appearance terms. Kwak et al. @cite_1 combine a discovery step matching similar regions in different frames and a tracking step to obtain temporal proposals. In @cite_11 a classifier is learnt to guide a super-voxel merging process for obtaining object proposals. Temporal proposals have been exploited to segment objects in videos in @cite_9 by discovering easy instances and propagating the tube to adjacent frames. Other methods to generate salient tubes have been proposed for action localization in @cite_0 using human and motion detection.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_10", "@cite_11" ], "mid": [ "1973054923", "2329995605", "2467181293", "2949999282", "1958328135", "1945129080", "7746136", "" ], "abstract": [ "Object detectors are typically trained on a large set of still images annotated by bounding-boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images, i.e., Pascal VOC 2007. We observe that frames extracted from web videos can differ significantly in terms of quality to still images taken by a good camera. Thus, we formulate the learning from videos as a domain adaptation task. We show that training from a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained from still images alone.", "Automatic discovery of foreground objects in video sequences is important in computer vision, with applications to object tracking, video segmentation and weakly supervised learning. This task is related to cosegmentation [4, 5] and weakly supervised localization [2, 6]. We propose an efficient method for the simultaneous discovery of foreground objects in video and their segmentation masks across multiple frames. We offer a graph matching formulation for bounding box selection and refinement using second and higher order terms. It is based on an Integer Quadratic Programming formulation and related to graph matching and MAP inference [3]. 
We take into consideration local frame-based information as well as spatiotemporal and appearance consistency over multiple frames. Our approach consists of three stages. First, we find an initial pool of candidate boxes using a novel and fast foreground estimation method in video (VideoPCA) based on Principal Component Analysis of the video content. The output of VideoPCA combined with Edge Boxes [8] is then used to produce high quality bounding box proposals. Second, we efficiently match bounding boxes across multiple frames, using the IPFP algorithm [3] with pairwise geometric and appearance terms. Third, we optimize the higher order terms using the Mean-Shift algorithm [1] to refine the box locations and establish appearance regularity over multiple frames. We make the following contributions:", "We present an unsupervised approach that generates a diverse, ranked set of bounding box and segmentation video object proposals—spatio-temporal tubes that localize the foreground objects—in an unannotated video. In contrast to previous unsupervised methods that either track regions initialized in an arbitrary frame or train a fixed model over a cluster of regions, we instead discover a set of easy-togroup instances of an object and then iteratively update its appearance model to gradually detect harder instances in temporally-adjacent frames. Our method first generates a set of spatio-temporal bounding box proposals, and then refines them to obtain pixel-wise segmentation proposals. We demonstrate state-of-the-art segmentation results on the SegTrack v2 dataset, and bounding box tracking results that perform competitively to state-of-the-art supervised tracking methods.", "This paper addresses the problem of automatically localizing dominant objects as spatio-temporal tubes in a noisy collection of videos with minimal or even no supervision. We formulate the problem as a combination of two complementary processes: discovery and tracking. 
The first one establishes correspondences between prominent regions across videos, and the second one associates successive similar object regions within the same video. Interestingly, our algorithm also discovers the implicit topology of frames associated with instances of the same object class across different videos, a role normally left to supervisory information in the form of class labels in conventional image and video understanding methods. Indeed, as demonstrated by our experiments, our method can handle video collections featuring multiple object classes, and substantially outperforms the state of the art in colocalization, even though it tackles a broader problem with much less supervision.", "Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.", "In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. 
Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the actionness of the video tubes. After picking those spatiotemporal paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search.", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "" ] }
1609.00290
2507638320
Consider the set of all digraphs on @math with @math edges, whose minimum in-degree and minimum out-degree are at least @math and @math respectively. For @math and @math , @math , we show that, among those digraphs, the fraction of @math -strongly connected digraphs is @math . Earlier with Dan Poole we identified a sharp edge-density threshold @math for birth of a giant @math -core in the random digraph @math . Combining the claims, for @math with probability @math the giant @math -core exists and is @math -strongly connected.
Let us reproduce the definition of the sequence model from @cite_15 , since it will be instrumental in our proofs in this paper as well. Given a sequence @math , @math , we define a multi-digraph @math with vertex set @math and (directed) edge set @math ; thus @math , the number of directed edges @math , is @math . The in-degree sequence @math and the out-degree sequence @math of @math are given by @math , @math , so that @math . If @math is distributed uniformly on the set @math then @math and @math are equi-distributed. Consequently @math and @math are mutually independent, each distributed multinomially, with @math trials and @math equally likely outcomes in each trial.
{ "cite_N": [ "@cite_15" ], "mid": [ "2517839992" ], "abstract": [ "The @math -core of a digraph is the largest sub-digraph with minimum in-degree and minimum out-degree at least @math and @math respectively. For @math , we establish existence of the threshold edge-density @math , such that the random digraph @math , on the vertex set @math with @math edges, asymptotically almost surely has a giant @math -core if @math , and has no @math -core if @math . Specifically, denoting @math by @math , we prove that @math ." ] }
1609.00290
2507638320
Consider the set of all digraphs on @math with @math edges, whose minimum in-degree and minimum out-degree are at least @math and @math respectively. For @math and @math , @math , we show that, among those digraphs, the fraction of @math -strongly connected digraphs is @math . Earlier with Dan Poole we identified a sharp edge-density threshold @math for birth of a giant @math -core in the random digraph @math . Combining the claims, for @math with probability @math the giant @math -core exists and is @math -strongly connected.
The deletion algorithm delivers a sequence @math where @math , and each @math , where for all @math , @math if and only if @math . The @math pairs mark the locations @math in the original @math whose vertex occupants have been deleted after @math steps. The process @math is obviously Markov, though the complexity of its sample space makes it intractable. Let @math be a @math -tuple whose components are the counts of vertices that are in-light out-light, in-light out-heavy, in-heavy out-light, in-heavy out-heavy, and the total count of all edges in @math . We need that many components since, to preserve the Markov property, we have to classify the in-light degrees and the out-light degrees according to their possible @math and @math values. Fortunately no similar classification is needed for the in-heavy degrees and the out-heavy degrees. It was proved in @cite_15 that the process @math is indeed Markov, and that, conditioned on @math , the sequence @math is uniform.
{ "cite_N": [ "@cite_15" ], "mid": [ "2517839992" ], "abstract": [ "The @math -core of a digraph is the largest sub-digraph with minimum in-degree and minimum out-degree at least @math and @math respectively. For @math , we establish existence of the threshold edge-density @math , such that the random digraph @math , on the vertex set @math with @math edges, asymptotically almost surely has a giant @math -core if @math , and has no @math -core if @math . Specifically, denoting @math by @math , we prove that @math ." ] }
1609.00290
2507638320
Consider the set of all digraphs on @math with @math edges, whose minimum in-degree and minimum out-degree are at least @math and @math respectively. For @math and @math , @math , we show that, among those digraphs, the fraction of @math -strongly connected digraphs is @math . Earlier with Dan Poole we identified a sharp edge-density threshold @math for birth of a giant @math -core in the random digraph @math . Combining the claims, for @math with probability @math the giant @math -core exists and is @math -strongly connected.
The upshot of this discussion is that, conditioned on the terminal vertex set and the terminal number of edges, the terminal sequence @math is distributed uniformly. So Theorem in combination with Theorem from @cite_15 yield
{ "cite_N": [ "@cite_15" ], "mid": [ "2517839992" ], "abstract": [ "The @math -core of a digraph is the largest sub-digraph with minimum in-degree and minimum out-degree at least @math and @math respectively. For @math , we establish existence of the threshold edge-density @math , such that the random digraph @math , on the vertex set @math with @math edges, asymptotically almost surely has a giant @math -core if @math , and has no @math -core if @math . Specifically, denoting @math by @math , we prove that @math ." ] }
1608.08736
2513722160
Software developers use Application Programming Interfaces (APIs) of libraries and frameworks extensively while writing programs. In this context, the recommendations provided in code completion pop-ups help developers choose the desired methods. The candidate lists recommended by these tools, however, tend to be large, ordered alphabetically and sometimes even incomplete. A fair amount of work has been done recently to improve the relevance of these code completion results, especially for statically typed languages like Java. However, these proposed techniques rely on the static type of the object and are therefore inapplicable for a dynamically typed language like Python. In this paper, we present PyReco, an intelligent code completion system for Python which uses the mined API usages from open source repositories to order the results based on relevance rather than the conventional alphabetic order. To recommend suggestions that are relevant for a working context, a nearest neighbor classifier is used to identify the best matching usage among all the extracted usage patterns. To evaluate the effectiveness of our system, the code completion queries are automatically extracted from projects and tested quantitatively using a ten-fold cross validation technique. The evaluation shows that our approach outperforms the alphabetically ordered API recommendation systems in recommending APIs for standard, as well as, third-party libraries.
Robbes and Lanza @cite_21 propose a code completion tool that uses temporal information, such as the program history, to provide more relevant completions. Along similar lines, @cite_5 add a temporal dimension that captures evolutionary information about the code. They propose that, in a collaborative work environment, such information could make development tasks easier.
{ "cite_N": [ "@cite_5", "@cite_21" ], "mid": [ "1989357464", "2166597811" ], "abstract": [ "Modern IDEs make many software engineering tasks easier by automating functionality such as code completion and navigation. However, this functionality operates on one version of the code at a time. We envision a new approach that makes code completion and navigation aware of code evolution and enables them to operate on multiple versions at a time, without having to manually switch across these versions. We illustrate our approach on several example scenarios. We also describe a prototype Eclipse plugin that embodies our approach for code completion and navigation for Java code. We believe our approach opens a new line of research that adds a novel, temporal dimension for treating code in IDEs in the context of tasks that previously required manual switching across different code versions.", "Code completion is a widely used productivity tool. It takes away the burden of remembering and typing the exact names of methods or classes: As a developer starts typing a name, it provides a progressively refined list of candidates matching the name. However, the candidate list always comes in alphabetic order, i.e., the environment is only second-guessing the name based on pattern matching. Finding the correct candidate can be cumbersome or slower than typing the full name. We present an approach to improve code completion with program history. We define a benchmark measuring the accuracy and usefulness of a code completion engine. Further, we use the change history data to also improve the results offered by code completion tools. Finally, we propose an alternative interface for completion tools." ] }
1608.08736
2513722160
Software developers use Application Programming Interfaces (APIs) of libraries and frameworks extensively while writing programs. In this context, the recommendations provided in code completion pop-ups help developers choose the desired methods. The candidate lists recommended by these tools, however, tend to be large, ordered alphabetically and sometimes even incomplete. A fair amount of work has been done recently to improve the relevance of these code completion results, especially for statically typed languages like Java. However, these proposed techniques rely on the static type of the object and are therefore inapplicable for a dynamically typed language like Python. In this paper, we present PyReco, an intelligent code completion system for Python which uses the mined API usages from open source repositories to order the results based on relevance rather than the conventional alphabetic order. To recommend suggestions that are relevant for a working context, a nearest neighbor classifier is used to identify the best matching usage among all the extracted usage patterns. To evaluate the effectiveness of our system, the code completion queries are automatically extracted from projects and tested quantitatively using a ten-fold cross validation technique. The evaluation shows that our approach outperforms the alphabetically ordered API recommendation systems in recommending APIs for standard, as well as, third-party libraries.
@cite_25 feed the extracted method call sequences into statistical language models such as n-gram models and recurrent neural networks to predict recommendations. This approach has been shown to be fast and efficient in determining the likelihood of the next method call for Android APIs.
{ "cite_N": [ "@cite_25" ], "mid": [ "2143861926" ], "abstract": [ "We address the problem of synthesizing code completions for programs using APIs. Given a program with holes, we synthesize completions for holes with the most likely sequences of method calls. Our main idea is to reduce the problem of code completion to a natural-language processing problem of predicting probabilities of sentences. We design a simple and scalable static analysis that extracts sequences of method calls from a large codebase, and index these into a statistical language model. We then employ the language model to find the highest ranked sentences, and use them to synthesize a code completion. Our approach is able to synthesize sequences of calls across multiple objects together with their arguments. Experiments show that our approach is fast and effective. Virtually all computed completions typecheck, and the desired completion appears in the top 3 results in 90 of the cases." ] }
1608.08736
2513722160
Software developers use Application Programming Interfaces (APIs) of libraries and frameworks extensively while writing programs. In this context, the recommendations provided in code completion pop-ups help developers choose the desired methods. The candidate lists recommended by these tools, however, tend to be large, ordered alphabetically and sometimes even incomplete. A fair amount of work has been done recently to improve the relevance of these code completion results, especially for statically typed languages like Java. However, these proposed techniques rely on the static type of the object and are therefore inapplicable for a dynamically typed language like Python. In this paper, we present PyReco, an intelligent code completion system for Python which uses the mined API usages from open source repositories to order the results based on relevance rather than the conventional alphabetic order. To recommend suggestions that are relevant for a working context, a nearest neighbor classifier is used to identify the best matching usage among all the extracted usage patterns. To evaluate the effectiveness of our system, the code completion queries are automatically extracted from projects and tested quantitatively using a ten-fold cross validation technique. The evaluation shows that our approach outperforms the alphabetically ordered API recommendation systems in recommending APIs for standard, as well as, third-party libraries.
@cite_12 propose CACHECA, which captures the localized regularities in a program by using its recent token usage frequency. This, however, could lead to false positives in the code suggestions for a dynamic language that is not backed by types.
{ "cite_N": [ "@cite_12" ], "mid": [ "1974020522" ], "abstract": [ "Nearly every Integrated Development Environment includes a form of code completion. The suggested completions (\"suggestions\") are typically based on information available at compile time, such as type signatures and variables in scope. A statistical approach, based on estimated models of code patterns in large code corpora, has been demonstrated to be effective at predicting tokens given a context. In this demo, we present CACHECA, an Eclipse plug in that combines the native suggestions with a statistical suggestion regime. We demonstrate that a combination of the two approaches more than doubles Eclipse's suggestion accuracy. A video demonstration is available at https: www.youtube.com watch?v=3INk0N3JNtc." ] }
1608.08736
2513722160
Software developers use Application Programming Interfaces (APIs) of libraries and frameworks extensively while writing programs. In this context, the recommendations provided in code completion pop-ups help developers choose the desired methods. The candidate lists recommended by these tools, however, tend to be large, ordered alphabetically and sometimes even incomplete. A fair amount of work has been done recently to improve the relevance of these code completion results, especially for statically typed languages like Java. However, these proposed techniques rely on the static type of the object and are therefore inapplicable for a dynamically typed language like Python. In this paper, we present PyReco, an intelligent code completion system for Python which uses the mined API usages from open source repositories to order the results based on relevance rather than the conventional alphabetic order. To recommend suggestions that are relevant for a working context, a nearest neighbor classifier is used to identify the best matching usage among all the extracted usage patterns. To evaluate the effectiveness of our system, the code completion queries are automatically extracted from projects and tested quantitatively using a ten-fold cross validation technique. The evaluation shows that our approach outperforms the alphabetically ordered API recommendation systems in recommending APIs for standard, as well as, third-party libraries.
Our approach for ranking the recommendations is based on the BMN algorithm, since it outperforms both techniques that incorporate association-rule mining @cite_1 and statistical techniques based on usage frequency.
{ "cite_N": [ "@cite_1" ], "mid": [ "2043791485" ], "abstract": [ "Frameworks provide means to reuse existing design and functionality, but first require developers to understand how to use them. Learning the correct usage of a framework can be difficult due to the large number of rules to obey and the complex collaborations between the classes. We propose the use of data mining techniques to extract reuse patterns from existing framework instantiations. Based on these patterns, suggestions about other relevant parts of the framework are presented to novice users in a context-dependent manner. We have built FrUiT, an Eclipse plug-in that implements this approach and present a first assessment by mining parts of the Eclipse framework." ] }
1608.08368
2518228494
Persistent Identifiers (PID) are the foundation referencing digital assets in scientific publications, books, and digital repositories. In its realization, PIDs contain metadata and resolving targets in form of URLs that point to data sets located on the network. In contrast to PIDs, the target URLs are typically changing over time; thus, PIDs need continuous maintenance - an effort that is increasing tremendously with the advancement of e-Science and the advent of the Internet-of-Things (IoT). Nowadays, billions of sensors and data sets are subject of PID assignment. This paper presents a new approach of embedding location independent targets into PIDs that allows the creation of maintenance-free PIDs using content-centric network technology and overlay networks. For proving the validity of the presented approach, the Handle PID System is used in conjunction with Magnet Link access information encoding, state-of-the-art decentralized data distribution with BitTorrent, and Named Data Networking (NDN) as location-independent data access technology for networks. Contrasting existing approaches, no green-field implementation of PID or major modifications of the Handle System is required to enable location-independent data dissemination with maintenance-free PIDs.
The concept of bridging different content-centric network systems through a centralized URN system was initially drafted by Sollins in 2012 @cite_17 . Her concept utilizes foundations of PID principles to create an identification system for different ICN families and their related data objects that meets the requirements of scalability, longevity, evolvability, and security. Sollins's identification system abstracts different object naming schemes from ICN families such as DONA @cite_31 , NETINF @cite_34 and PURSUIT @cite_12 . Although PID principles are used for location-independent data access, Sollins's publication does not suggest access through an existing, well-established PID system, as provided in our work, but rather takes a green-field approach to location-independent data access.
{ "cite_N": [ "@cite_31", "@cite_34", "@cite_12", "@cite_17" ], "mid": [ "2168903090", "2005243340", "2014702203", "2170075839" ], "abstract": [ "The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.", "Information-centric networking (ICN) has evolved as important option for a future Internet architecture. Prototyping such an architecture is important to gain valuable insights about its feasibility. We have developed a prototype of the Network of Information (NetInf) architecture called OpenNetInf and have tested the architecture with a wide variety of applications. We have also performed traffic measurements to evaluate the influence of caching and our Multi-Level Distributed Hash Table (MDHT) name resolution service on inter-domain traffic. The measurements show a decrease in inter-domain traffic by a factor of up to 4 in our test scenario. The prototyping experience has validated the general feasibility of the NetInf architecture. The gained insights have had and will have significant impact on future NetInf architecture iterations.", "The Publish-Subscribe Internet Routing Paradigm (PSIRP) project aims at developing and evaluating an information-centric architecture for the future Internet. 
The ambition is to provide a new form of internetworking which will offer the desired functionality, flexibility, and performance, but will also support availability, security, and mobility, as well as innovative applications and new market opportunities. This paper illustrates the high level architecture developed in the PSIRP project, revealing its principles, core components, and basic operations through example usage scenarios. While the focus of this paper is specifically on the operations within the architecture, the revelation of the workings through our use cases can also be considered relevant more generally for publish-subscribe architectures.", "Identification is central to information or content centric networking, in order to enable referencing and access to the information objects. In this work we focus on identifiers and the identification system as a target of a design process, because without careful attention to the identifiers themselves and the approaches to selecting, assigning and using them, they may not meet their design goals. The paper begins with an examination of key issues central to the design of an identification system. With those in mind, we discuss the objectives of pervasiveness and persistence as requirements for identification in an information centric networking (ICN) approach. These lead to a set of design four goals: longevity, scalability, evolvability and security. We apply two key design principles, layering and modularity, to derive our design for the Pervasive Persistent Identification System or PPInS for information centric networking. The contributions of this work include (1) the design issues for identification systems, (2) analysis of goals and key design criteria for identification in an ICN approach, and (3) a principled design of PPInS." ] }
1608.08368
2518228494
Persistent Identifiers (PID) are the foundation referencing digital assets in scientific publications, books, and digital repositories. In its realization, PIDs contain metadata and resolving targets in form of URLs that point to data sets located on the network. In contrast to PIDs, the target URLs are typically changing over time; thus, PIDs need continuous maintenance - an effort that is increasing tremendously with the advancement of e-Science and the advent of the Internet-of-Things (IoT). Nowadays, billions of sensors and data sets are subject of PID assignment. This paper presents a new approach of embedding location independent targets into PIDs that allows the creation of maintenance-free PIDs using content-centric network technology and overlay networks. For proving the validity of the presented approach, the Handle PID System is used in conjunction with Magnet Link access information encoding, state-of-the-art decentralized data distribution with BitTorrent, and Named Data Networking (NDN) as location-independent data access technology for networks. Contrasting existing approaches, no green-field implementation of PID or major modifications of the Handle System is required to enable location-independent data dissemination with maintenance-free PIDs.
The realization of complex secure naming schemes for content-centric data was covered in 2010 @cite_35 . The authors demand name persistency without incorporating the concept of PIDs. In their publication, they clarify that basic security functionality must be attached directly to the data and its naming scheme, because the identity of network locations cannot be used as a trust base for data authenticity. Our approach follows this principle for secure location-independent data access and facilitates directly attached PID security mechanisms. Thereby, location-independent access through PIDs is brought in line with the requirements they formulated, and our approach enables authentic data access through PIDs.
{ "cite_N": [ "@cite_35" ], "mid": [ "2092433893" ], "abstract": [ "Several projects propose an information-centric approach to the network of the future. Such an approach makes efficient content distribution possible by making information retrieval host-independent and integrating into the network storage for caching information. Requests for particular content can, thus, be satisfied by any host or server holding a copy. The current security model based on host authentication is not applicable in this context. Basic security functionality must instead be attached directly to the data and its naming scheme. A naming scheme to name content and other objects that enables verification of data integrity as well as owner authentication and identification is here presented. The naming scheme is designed for flexibility and extensibility, e.g., to integrate other security properties like access control. At the same time, the naming scheme offers persistent IDs even though the content, content owner and or owner's organizational structure, or location change. The requirements for the naming scheme and an analysis showing how the proposed scheme fulfills them are presented. Experience with prototyping the naming scheme is also discussed. The naming scheme builds the foundation for a secure information-centric network infrastructure that can also solve some of the main security problems of today's Internet." ] }
1608.08368
2518228494
Persistent Identifiers (PID) are the foundation referencing digital assets in scientific publications, books, and digital repositories. In its realization, PIDs contain metadata and resolving targets in form of URLs that point to data sets located on the network. In contrast to PIDs, the target URLs are typically changing over time; thus, PIDs need continuous maintenance - an effort that is increasing tremendously with the advancement of e-Science and the advent of the Internet-of-Things (IoT). Nowadays, billions of sensors and data sets are subject of PID assignment. This paper presents a new approach of embedding location independent targets into PIDs that allows the creation of maintenance-free PIDs using content-centric network technology and overlay networks. For proving the validity of the presented approach, the Handle PID System is used in conjunction with Magnet Link access information encoding, state-of-the-art decentralized data distribution with BitTorrent, and Named Data Networking (NDN) as location-independent data access technology for networks. Contrasting existing approaches, no green-field implementation of PID or major modifications of the Handle System is required to enable location-independent data dissemination with maintenance-free PIDs.
In the context of semantic digital archives for archiving data of PIM applications, Haun and Nürnberger proposed a PID schema for accessing objects in file systems using a URN-like Magnet Link scheme @cite_36 . They map the congruent attributes of the Magnet Link scheme to the attributes provided by some PID systems, such as global uniqueness, persistence and scalability, for application in offline data archives serving data from archive media such as file systems on WORM media. In contrast, our approach relies on currently employed PID systems and incorporates location-independent data access in a distributed online environment using the full-featured Handle PID system.
{ "cite_N": [ "@cite_36" ], "mid": [ "2403322466" ], "abstract": [ "Persistent identification is necessary for recognition, dissemination and (external) cross-references to digital objects. Uniform Resource Identifiers (URIs) provide an established scheme for this task, but do not guarantee stable and persistent identification. In the context of (personal) archives, stability is needed when references are be stored on a medium where later changes to identifiers cannot be corrected at all or only with a very large overhead, such as WORM media or tape archives. Additionally, resources like contacts or appointments do not have a URI, while other URIs, such as file system paths or the IMAP URI, are unstable by design and cannot represent the dynamic aspects of Personal Information Management (PIM). This paper discusses problems of archiving that arise with entity identification in PIM, especially on the example of the personal file system." ] }
1608.08454
2516591164
The simultaneous orthogonal matching pursuit (SOMP) algorithm aims to find the joint support of a set of sparse signals acquired under a multiple measurement vector model. Critically, the analysis of SOMP depends on the maximal inner product of any atom of a suitable dictionary and the current signal residual, which is formed by the subtraction of previously selected atoms. This inner product, or correlation, is a key metric to determine the best atom to pick at each iteration. This letter provides, for each iteration of SOMP, a novel lower bound of the aforementioned metric for the atoms belonging to the correct and common joint support of the multiple signals. Although the bound is obtained for the noiseless case, its main purpose is to intervene in noisy analyses of SOMP. Finally, it is shown for specific signal patterns that the proposed bound outperforms state-of-the-art results for SOMP and orthogonal matching pursuit (OMP) as a special case.
explains the contribution in detail. states and comments on an alternative bound from the literature. Finally, compares our contribution against the alternative lower bound. It is shown that, under several common sensing scenarios, our result outperforms its counterpart, even for @math , i.e., for OMP. Our contribution can be used in several theoretical analyses @cite_12 @cite_13 of OMP and SOMP in the noisy case by replacing the older bound of (Theorem ) by the one obtained in this paper, i.e., Theorem . For example, Lemma 4.1 of dan2014robustness can be partially replaced by Theorem . The cases under which this replacement leads to less stringent conditions on whether an iteration is successful are hence discussed in . Other related algorithms include CoSaMP @cite_1 , Subspace pursuit @cite_4 , and orthogonal matching pursuit with replacement (OMPR) @cite_8 . Since the decisions of these algorithms also rely on the highest inner products between a residual and the atoms, our methodology might provide relevant insights for them as well.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_13", "@cite_12" ], "mid": [ "2160979406", "2169380424", "2289917018", "", "2140856955" ], "abstract": [ "We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.", "In this paper, we consider orthogonal matching pursuit (OMP) algorithm for multiple measurement vectors (MMV) problem. The robustness of OMPMMV is studied under general perturbations—when the measurement vectors as well as the sensing matrix are incorporated with additive noise. The main result shows that although exact recovery of the sparse solutions is unrealistic in noisy scenario, recovery of the support set of the solutions is guaranteed under suitable conditions. Specifically, a sufficient condition is derived that guarantees exact recovery of the sparse solutions in noiseless scenario.", "Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. 
This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O ( N log 2 N ) , where N is the length of the signal.", "", "We consider the orthogonal matching pursuit (OMP) algorithm for the recovery of a high-dimensional sparse signal based on a small number of noisy linear measurements. OMP is an iterative greedy algorithm that selects at each step the column, which is most correlated with the current residuals. In this paper, we present a fully data driven OMP algorithm with explicit stopping rules. It is shown that under conditions on the mutual incoherence and the minimum magnitude of the nonzero components of the signal, the support of the signal can be recovered exactly by the OMP algorithm with high probability. In addition, we also consider the problem of identifying significant components in the case where some of the nonzero components are possibly small. It is shown that in this case the OMP algorithm will still select all the significant components before possibly selecting incorrect ones. Moreover, with modified stopping rules, the OMP algorithm can ensure that no zero components are selected." ] }
1608.08395
2949805464
Time-differentiation information is an extremely important cue for motion representation. We have applied first-order differential velocity computed from positional information; moreover, we believe that second-order differential acceleration is also a significant feature for motion representation. However, an acceleration image based on a typical optical flow contains motion noise. We had not employed the acceleration image because the noise is too strong to capture an effective motion feature in an image sequence. On the other hand, recent convolutional neural networks (CNNs) are robust against input noise. In this paper, we employ an acceleration stream in addition to the spatial and temporal streams of the two-stream CNN. We clearly show the effectiveness of adding the acceleration stream to the two-stream CNN.
Space-time interest points (STIP) have been a primary focus in action recognition @cite_1 . In STIP, the temporal @math dimension is added to the @math spatial domain. Improvements of STIP have been reported in several papers, such as @cite_7 , @cite_12 , @cite_24 . However, the most significant approach is arguably the dense trajectories (DT) approach @cite_20 . DT describes trajectories that track densely sampled feature points. Descriptors are applied to the densely captured trajectories using histograms of oriented gradients (HOG) @cite_28 , histograms of optical flow (HOF) @cite_7 , and motion boundary histograms (MBH) @cite_16 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_1", "@cite_24", "@cite_16", "@cite_12", "@cite_20" ], "mid": [ "", "2161969291", "2020163092", "2000590230", "", "2163292664", "2126574503" ], "abstract": [ "", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. 
To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition.", "", "This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. 
The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies.", "Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. 
We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports." ] }
1608.08469
2515492080
User-perceived quality-of-experience (QoE) is critical in internet video delivery systems. Extensive prior work has studied the design of client-side bitrate adaptation algorithms to maximize single-player QoE. However, multiplayer QoE fairness becomes critical as the growth of video traffic makes it more likely that multiple players share a bottleneck in the network. Despite several recent proposals, there is still a series of open questions. In this paper, we bring the problem space to light from a control theory perspective by formalizing the multiplayer QoE fairness problem and addressing two key questions in the broader problem space. First, we derive the sufficient conditions of convergence to steady state QoE fairness under a TCP-based bandwidth sharing scheme. Based on the insight from this analysis that in-network active bandwidth allocation is needed, we propose a non-linear MPC-based, router-assisted bandwidth allocation algorithm that regards each player as a closed-loop system. We use trace-driven simulation to show the improvement over existing approaches. We identify several research directions enabled by the control theoretic modeling and envision that control theory can play an important role in guiding real system design in adaptive video streaming.
Player-side solutions, such as FESTIVE @cite_5 and PANDA @cite_2 , entail designing better bitrate adaptation algorithms for multiplayer QoE fairness. While only requiring player algorithm changes and thus easy to deploy, player-side solutions do not alter bandwidth allocation in the network and can suffer from suboptimal bandwidth allocation schemes, such as non-ideal TCP effects @cite_3 and interaction with uncooperative players and cross traffic @cite_17 .
{ "cite_N": [ "@cite_5", "@cite_17", "@cite_3", "@cite_2" ], "mid": [ "2150453038", "2130954326", "2115363289", "2017146017" ], "abstract": [ "Many commercial video players rely on bitrate adaptation logic to adapt the bitrate in response to changing network conditions. Past measurement studies have identified issues with today's commercial players with respect to three key metrics---efficiency, fairness, and stability---when multiple bitrate-adaptive players share a bottleneck link. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited. In this paper, we present a principled understanding of bitrate adaptation and analyze several commercial players through the lens of an abstract player model. Through this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying the video bitrate adaptation over HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness and efficiency and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.", "With an increasing demand for high-quality video content over the Internet, it is becoming more likely that two or more adaptive streaming players share the same network bottleneck and compete for available bandwidth. This competition can lead to three performance problems: player instability, unfairness between players, and bandwidth underutilization. However, the dynamics of such competition and the root cause for the previous three problems are not yet well understood. 
In this paper, we focus on the problem of competing video players and describe how the typical behavior of an adaptive streaming player in its Steady-State, which includes periods of activity followed by periods of inactivity (ON-OFF periods), is the main root cause behind the problems listed above. We use two adaptive players to experimentally showcase these issues. Then, focusing on the issue of player instability, we test how several factors (the ON-OFF durations, the available bandwidth and its relation to available bitrates, and the number of competing players) affect stability.", "Today's commercial video streaming services use dynamic rate selection to provide a high-quality user experience. Most services host content on standard HTTP servers in CDNs, so rate selection must occur at the client. We measure three popular video streaming services -- Hulu, Netflix, and Vudu -- and find that accurate client-side bandwidth estimation above the HTTP layer is hard. As a result, rate selection based on inaccurate estimates can trigger a feedback loop, leading to undesirably variable and low-quality video. We call this phenomenon the \"downward spiral effect\", and we measure it on all three services, present insights into its root causes, and validate initial solutions to prevent it.", "Today, the technology for video streaming over the Internet is converging towards a paradigm named HTTP-based adaptive streaming (HAS), which brings two new features. First, by using HTTP TCP, it leverages network-friendly TCP to achieve both firewall NAT traversal and bandwidth sharing. Second, by pre-encoding and storing the video in a number of discrete rate levels, it introduces video bitrate adaptivity in a scalable way so that the video encoding is excluded from the closed-loop adaptation. 
A conventional wisdom in HAS design is that since the TCP throughput observed by a client would indicate the available network bandwidth, it could be used as a reliable reference for video bitrate selection. We argue that this is no longer true when HAS becomes a substantial fraction of the total network traffic. We show that when multiple HAS clients compete at a network bottleneck, the discrete nature of the video bitrates results in difficulty for a client to correctly perceive its fair-share bandwidth. Through analysis and test bed experiments, we demonstrate that this fundamental limitation leads to video bitrate oscillation and other undesirable behaviors that negatively impact the video viewing experience. We therefore argue that it is necessary to design at the application layer using a \"probe and adapt\" principle for video bitrate adaptation (where \"probe\" refers to trial increment of the data rate, instead of sending auxiliary piggybacking traffic), which is akin, but also orthogonal to the transport-layer TCP congestion control. We present PANDA - a client-side rate adaptation algorithm for HAS - as a practical embodiment of this principle. Our test bed results show that compared to conventional algorithms, PANDA is able to reduce the instability of video bitrate selection by over 75 without increasing the risk of buffer underrun." ] }
1608.08469
2515492080
User-perceived quality-of-experience (QoE) is critical in internet video delivery systems. Extensive prior work has studied the design of client-side bitrate adaptation algorithms to maximize single-player QoE. However, multiplayer QoE fairness becomes critical as the growth of video traffic makes it more likely that multiple players share a bottleneck in the network. Despite several recent proposals, there is still a series of open questions. In this paper, we bring the problem space to light from a control theory perspective by formalizing the multiplayer QoE fairness problem and addressing two key questions in the broader problem space. First, we derive the sufficient conditions of convergence to steady state QoE fairness under a TCP-based bandwidth sharing scheme. Based on the insight from this analysis that in-network active bandwidth allocation is needed, we propose a non-linear MPC-based, router-assisted bandwidth allocation algorithm that regards each player as a closed-loop system. We use trace-driven simulation to show the improvement over existing approaches. We identify several research directions enabled by the control theoretic modeling and envision that control theory can play an important role in guiding real system design in adaptive video streaming.
Alternatively, server-side solutions regard the server as a single point of control and allocate bandwidth to players @cite_22 . However, the actual bandwidth bottleneck can occur in the network instead of at the server, and the computation cost is high when the number of players is large.
{ "cite_N": [ "@cite_22" ], "mid": [ "1488576317" ], "abstract": [ "Recent studies observe that competing adaptive video streaming applications generate flows that lead to instability, under-utilization, and unfairness in bottleneck link sharing within the network. Additional measurements suggest there may also be a negative impact on users' perceived quality of service as a consequence. While it may be intuitive to resolve application-generated issues at the application layer, in this paper we explore the merits of a network layer solution. We are motivated by the observation that traditional network-layer metrics associated with throughput, loss, and delay are inadequate to the task. To bridge this gap we present a network-layer QoS framework for adaptive streaming video fairness that reflect the video user's quality of experience (QoE). We begin first by deriving a new measure to describe user-level fairness among competing flows, one that reflects the dynamics between the video encoding and its mapping to a screen with a given size and resolution. We then design and implement our framework in VHS (Video-Home-Shaper) to evaluate performance in the home's last access hop where this problem is known to exist. Experiments using a variety of devices, O S platforms, and viewing screens demonstrate the merits of using video QoE as a basis for fair bandwidth sharing." ] }
1608.08614
2510153535
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
A number of papers have studied transfer learning in CNNs, including the various factors that affect pre-training and fine-tuning. For example, the question of whether pre-training should be terminated early to prevent over-fitting and which layers should be used for transfer learning was studied by @cite_15 @cite_4 . A thorough investigation of good architectural choices for transfer learning was conducted by @cite_39 , while @cite_32 propose an approach to fine-tuning for new tasks without ``forgetting'' the old ones. In contrast to these works, we use a fixed fine-tuning procedure.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_32", "@cite_39" ], "mid": [ "2160921898", "2949667497", "2949808626", "" ], "abstract": [ "In the last two years, convolutional neural networks (CNNs) have achieved an impressive suite of results on standard recognition datasets and tasks. CNN-based features seem poised to quickly replace engineered representations, such as SIFT and HOG. However, compared to SIFT and HOG, we understand much less about the nature of the features learned by large CNNs. In this paper, we experimentally probe several aspects of CNN feature learning in an attempt to help practitioners gain useful, evidence-backed intuitions about how to apply CNNs to computer vision problems.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. 
We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.", "" ] }
1608.08614
2510153535
The tremendous success of ImageNet-trained deep features on a wide range of transfer tasks begs the question: what are the properties of the ImageNet dataset that are critical for learning good, general-purpose features? This work provides an empirical investigation of various facets of this question: Is more pre-training data always better? How does feature quality depend on the number of training examples per class? Does adding more object classes improve performance? For the same data budget, how should the data be split into classes? Is fine-grained recognition necessary for learning good features? Given the same number of training classes, is it better to have coarse classes or fine-grained classes? Which is better: more classes or more examples per class? To answer these and related questions, we pre-trained CNN features on various subsets of the ImageNet dataset and evaluated transfer performance on PASCAL detection, PASCAL action classification, and SUN scene classification tasks. Our overall findings suggest that most changes in the choice of pre-training data long thought to be critical do not significantly affect transfer performance.
Numerous methods have explored learning features by optimizing some auxiliary criterion of the data itself, such as image reconstruction @cite_28 @cite_9 @cite_41 @cite_29 @cite_27 @cite_45 (see @cite_26 for a comprehensive overview) and feature slowness @cite_34 @cite_30 . Unfortunately, none of these unsupervised methods turned out to be competitive with those obtained from supervised ImageNet pre-training. In an attempt to make the CNNs ``work harder'', more recent ``self-supervised'' methods have been proposed for more difficult auxiliary data prediction tasks, such as ego-motion @cite_12 @cite_0 , spatial context @cite_18 @cite_40 @cite_35 , temporal context @cite_22 , and even color @cite_5 @cite_1 and sound @cite_17 . Again, these numbers were unable to beat those obtained from ImageNet. Additionally, @cite_36 delve into a ``weakly supervised'' method, the middle ground between supervised and unsupervised learning, by pre-training on the YFCC100M dataset of 100 million Flickr images labeled with noisy user tags instead of ImageNet. Yet again, despite YFCC100M being two orders of magnitude larger than ImageNet, the results either came close to or fell short of those pre-trained on ImageNet.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_22", "@cite_41", "@cite_36", "@cite_29", "@cite_5", "@cite_18", "@cite_17", "@cite_26", "@cite_28", "@cite_27", "@cite_40", "@cite_34", "@cite_12", "@cite_9", "@cite_1", "@cite_0", "@cite_45" ], "mid": [ "1699156674", "2321533354", "219040644", "2145889472", "2100031962", "2145038566", "2326925005", "2950187998", "2196697617", "60493759", "2017257315", "2139427956", "2342877626", "2146444479", "2951590555", "189596042", "2950064337", "2198618282", "" ], "abstract": [ "Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric.", "We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. 
We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively).", "Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.", "THE receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented1–4 and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms5,6. 
One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding7–12. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties13–18, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal8,12 that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs.", "Convolutional networks trained on large supervised datasets produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weakly-labeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and comments, and show that these networks produce features that perform well in a range of vision problems. 
We also show that the networks appropriately capture word similarity and learn correspondences between different languages.", "This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.", "Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.", "This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. 
Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.", "Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a \"real or fake\" psychophysical experiment, and that they convey significant information about material properties and physical interactions.", "", "The multilayer perceptron, when working in auto-association mode, is sometimes considered as an interesting candidate to perform data compression or dimensionality reduction of the feature space in information processing applications. 
The present paper shows that, for auto-association, the nonlinearities of the hidden units are useless and that the optimal parameter values can be derived directly by purely linear techniques relying on singular value decomposition and low rank matrix approximation, similar in spirit to the well-known Karhunen-Loeve transform. This approach appears thus as an efficient alternative to the general error back-propagation algorithm commonly used for training multilayer perceptrons. Moreover, it also gives a clear interpretation of the role of the different parameters.", "We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). 
When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. 
The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.", "The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.", "We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and dataindependent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottomup pass. 
We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.", "We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.", "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.", "" ] }
1608.08128
2508191294
This thesis explores different approaches using Convolutional and Recurrent Neural Networks to classify and temporally localize activities in videos, and an implementation to achieve this has been proposed. As the first step, features have been extracted from video frames using a state-of-the-art 3D Convolutional Neural Network. These features are fed into a recurrent neural network that solves the activity classification and temporal localization tasks in a simple and flexible way. Different architectures and configurations have been tested in order to achieve the best performance and learning on the video dataset provided. In addition, different kinds of post-processing of the trained network's output have been studied to achieve better results on the temporal localization of activities in the videos. The results provided by the neural network developed in this thesis have been submitted to the ActivityNet Challenge 2016 at CVPR, achieving competitive results with a simple and flexible architecture.
Several works in the literature have used 2D-CNNs to exploit the spatial correlations between frames of a video by combining their outputs using different strategies @cite_6 @cite_9 @cite_4 . Others have tried using the optical flow as an additional input to the 2D-CNN @cite_5 , which provides information about the temporal correlations. Later on, 3D-CNNs were proposed in @cite_11 (known as C3D); they are able to exploit short temporal correlations between frames and have been demonstrated to work remarkably well for video classification @cite_11 @cite_3 . C3D has also been used for temporal detection in @cite_0 , where a multi-stage C3D architecture is used to classify video segment proposals.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_6", "@cite_3", "@cite_0", "@cite_5", "@cite_11" ], "mid": [ "2180092181", "2179401333", "2950209802", "2272842615", "2394849137", "787785461", "2952633803" ], "abstract": [ "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call \"percepts\" using Gated-Recurrent-Unit Recurrent Networks (GRUs).Our method relies on percepts that are extracted from all level of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts can leads to high-dimensionality video representations. To mitigate this effect and control the model number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features.", "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. 
Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.", "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.", "Over the last few years deep learning methods have emerged as one of the most prominent approaches for video analysis. However, so far their most successful applications have been in the area of video classification and detection, i.e., problems involving the prediction of a single class label or a handful of output variables per video. 
Furthermore, while deep networks are commonly recognized as the best models to use in these domains, there is a widespread perception that in order to yield successful results they often require time-consuming architecture search, manual tweaking of parameters and computationally intensive pre-processing or post-processing methods. In this paper we challenge these views by presenting a deep 3D convolutional architecture trained end to end to perform voxel-level prediction, i.e., to output a variable at every voxel of the video. Most importantly, we show that the same exact architecture can be used to achieve competitive results on three widely different voxel-prediction tasks: video semantic segmentation, optical flow estimation, and video coloring. The three networks learned on these problems are trained from raw video without any form of preprocessing and their outputs do not require post-processing to achieve outstanding performance. Thus, they offer an efficient alternative to traditional and much more computationally expensive methods in these video domains.", "We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes on the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. 
Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and increases from 15.0% to 19.0% on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.", "Deep convolutional networks have achieved great success for object recognition in still images. However, for action recognition in videos, the improvement of deep convolutional networks is not so evident. We argue that there are two reasons that could probably explain this result. First the current network architectures (e.g. Two-stream ConvNets) are relatively shallow compared with those very deep models in image domain (e.g. VGGNet, GoogLeNet), and therefore their modeling capacity is constrained by their depth. Second, probably more importantly, the training dataset of action recognition is extremely small compared with the ImageNet dataset, and thus it will be easy to over-fit on the training dataset. To address these issues, this report presents very deep two-stream ConvNets for action recognition, by adapting recent very deep architectures into video domain. However, this extension is not easy as the size of action recognition is quite small. We design several good practices for the training of very deep two-stream ConvNets, namely (i) pre-training for both spatial and temporal nets, (ii) smaller learning rates, (iii) more data augmentation techniques, (iv) high drop out ratio. Meanwhile, we extend the Caffe toolbox into Multi-GPU implementation with high computational efficiency and low memory consumption. 
We verify the performance of very deep two-stream ConvNets on the dataset of UCF101 and it achieves the recognition accuracy of @math .", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
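To make the 3D-CNN idea in this record concrete: a 3D convolution slides a small spatiotemporal kernel (the 3x3x3 size the C3D abstract highlights) over the video volume, coupling information across both space and time. The following is only an illustrative pure-Python sketch under simplifying assumptions (single channel, no padding, "valid" output); the function name is ours, not C3D's implementation.

```python
def conv3d_valid(video, kernel):
    """Naive 'valid' 3D convolution (really cross-correlation) of a video
    volume video[t][y][x] with a small spatiotemporal kernel, e.g. 3x3x3.
    Each output voxel mixes kt consecutive frames, which is what lets a
    3D-CNN capture short temporal correlations that a 2D-CNN cannot."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):
        plane = []
        for i in range(H - kh + 1):
            row = []
            for j in range(W - kw + 1):
                s = 0.0
                for dt in range(kt):
                    for di in range(kh):
                        for dj in range(kw):
                            s += video[t + dt][i + di][j + dj] * kernel[dt][di][dj]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

Real networks stack many such learned kernels per layer with padding and striding; the sketch only shows why a 3x3x3 kernel ties together three consecutive frames.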
1608.08128
2508191294
This thesis explores different approaches using Convolutional and Recurrent Neural Networks to classify and temporally localize activities in videos, and an implementation to achieve this has been proposed. As the first step, features have been extracted from video frames using a state-of-the-art 3D Convolutional Neural Network. These features are fed into a recurrent neural network that solves the activity classification and temporal localization tasks in a simple and flexible way. Different architectures and configurations have been tested in order to achieve the best performance and learning on the video dataset provided. In addition, different kinds of post-processing of the trained network's output have been studied to achieve better results on the temporal localization of activities in the videos. The results provided by the neural network developed in this thesis have been submitted to the ActivityNet Challenge 2016 at CVPR, achieving competitive results with a simple and flexible architecture.
For temporal activity detection, recent works have proposed the use of Long Short-Term Memory units (LSTM) @cite_8 . LSTMs are a type of RNN that can better exploit long and short temporal correlations in sequences, which makes them suitable for video applications. LSTMs have been used alongside CNNs for video classification @cite_10 and activity localization in videos @cite_2 .
{ "cite_N": [ "@cite_2", "@cite_10", "@cite_8" ], "mid": [ "2952835694", "2950307714", "" ], "abstract": [ "Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory (LSTM) deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.", "Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. 
We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.", "" ] }
1608.08128
2508191294
This thesis explores different approaches using Convolutional and Recurrent Neural Networks to classify and temporally localize activities in videos, and an implementation to achieve this has been proposed. As the first step, features have been extracted from video frames using a state-of-the-art 3D Convolutional Neural Network. These features are fed into a recurrent neural network that solves the activity classification and temporal localization tasks in a simple and flexible way. Different architectures and configurations have been tested in order to achieve the best performance and learning on the video dataset provided. In addition, different kinds of post-processing of the trained network's output have been studied to achieve better results on the temporal localization of activities in the videos. The results provided by the neural network developed in this thesis have been submitted to the ActivityNet Challenge 2016 at CVPR, achieving competitive results with a simple and flexible architecture.
In this paper, we combine the capabilities of both 3D-CNNs and RNNs into a single framework. This way, we design a simple network that takes a sequence of video features from the C3D model @cite_11 as input to an RNN and is able to classify each one of them into an activity category.
{ "cite_N": [ "@cite_11" ], "mid": [ "2952633803" ], "abstract": [ "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
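The framework this record describes — per-clip C3D features fed into an RNN that labels every clip with an activity class — can be sketched at toy scale. Everything below (a plain tanh RNN instead of LSTM units, the tiny dimensions, the function names) is an illustrative assumption, not the thesis's actual architecture:

```python
import math

def tanh_rnn_step(x, h, Wx, Wh, b):
    """One step of a simple tanh RNN: h' = tanh(Wx @ x + Wh @ h + b)."""
    return [
        math.tanh(
            sum(Wx[i][j] * x[j] for j in range(len(x)))
            + sum(Wh[i][j] * h[j] for j in range(len(h)))
            + b[i]
        )
        for i in range(len(h))
    ]

def classify_clip_sequence(clip_features, params, n_classes):
    """Run the RNN over a sequence of per-clip feature vectors (standing in
    for C3D activations, one vector per 16-frame clip) and emit an activity
    class index for every clip."""
    Wx, Wh, b, Wo = params
    h = [0.0] * len(Wh)          # hidden state carried across clips
    labels = []
    for x in clip_features:
        h = tanh_rnn_step(x, h, Wx, Wh, b)
        # Linear read-out per clip; argmax gives the predicted activity.
        scores = [sum(Wo[c][i] * h[i] for i in range(len(h)))
                  for c in range(n_classes)]
        labels.append(max(range(n_classes), key=lambda c: scores[c]))
    return labels
```

In the real pipeline the per-clip labels would then be post-processed (e.g. smoothed and merged into contiguous segments) to produce temporal activity detections.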
1608.08168
2513908807
Choice decisions made by users of online applications can suffer from biases due to the users' level of engagement. For instance, low engagement users may make random choices with no concern for the quality of items offered. This biased choice data can corrupt estimates of user preferences for items. However, one can correct for these biases if additional behavioral data is utilized. To do this we construct a new choice engagement time model which captures the impact of user engagement on choice decisions and response times associated with these choice decisions. Response times are the behavioral data we choose because they are easily measured by online applications and reveal information about user engagement. To test our model we conduct online polls with subject populations that have different levels of engagement and measure their choice decisions and response times. We have two main empirical findings. First, choice decisions and response times are correlated, with strong preferences having faster response times than weak preferences. Second, low user engagement is manifested through more random choice data and faster response times. Both of these phenomena are captured by our choice engagement time model and we find that this model fits the data better than traditional choice models. Our work has direct implications for online applications. It lets these applications remove the bias of low engagement users when estimating preferences for items. It also allows for the segmentation of users according to their level of engagement, which can be useful for targeted advertising or marketing campaigns.
Models were also developed which viewed the information accumulation process as a random walk . The random walk modeled the relative information accumulation of each item, whereas previous models had focused on independent information accumulation processes for each item. The drift of the walk depends on the item utilities and there are decision thresholds above and below the starting point of the process. An item is chosen when one of these thresholds is hit by the process, and the item chosen depends on whether the upper or lower threshold is hit first. A continuous time version of these models known as the drift diffusion model was proposed by @cite_0 , where the information accumulation process is modeled as a one dimensional Brownian motion. The drift diffusion model has been successful at modeling phenomena seen in empirical data . However, a challenge posed by the drift diffusion model is the lack of a simple closed form expression for the likelihood function of the response times. To overcome this, approximations have been developed to allow for easier model estimation . Recent work has extended the drift diffusion model to multi-item choice decisions .
{ "cite_N": [ "@cite_0" ], "mid": [ "2150648562" ], "abstract": [ "A theory of discrimination which assumes that subjects compare psychological values evoked by a stimulus to a subjective referent is proposed. Momentary differences between psychological values for the stimulus and the referent are accumulated over time until one or the other of two response thresholds is first exceeded. The theory is analyzed as a random walk bounded between two absorbing barriers. A general solution to response conditioned expected response times is computed and the important role played by the moment generating function (mgf) for increments to the random walk is examined. From considerations of the mgf it is shown that unlike other random walk models [Stone, 1960; Laming, 1968] the present theory does not imply that response conditioned mean correct and error times must be equal. For two fixed stimuli and a fixed referent it is shown that by controlling values of response thresholds, subjects can produce Receiver Operating Characteristics similar or identical to those predicted by Signal Detection Theory, High Threshold Theory, or Low Threshold Theory." ] }
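The drift diffusion model summarized in this record has a direct simulation form: evidence follows a one-dimensional Brownian motion with drift, and the first of the two thresholds crossed determines both the choice and the response time. A minimal Euler-scheme sketch (function name, parameter names, and values are illustrative, not taken from @cite_0):

```python
import random

def drift_diffusion_trial(drift, threshold, dt=0.001, noise=1.0, rng=random):
    """Simulate one drift-diffusion trial: relative evidence starts at 0 and
    accumulates as Brownian motion with drift until it hits +threshold
    (choose item "A") or -threshold (choose item "B").
    Returns (choice, response_time), the response time being the first
    hitting time of either absorbing barrier."""
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # per-step noise scale for the Euler scheme
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return ("A" if x >= threshold else "B"), t
```

Larger drift (a stronger preference) reaches the matching threshold sooner and more reliably, which is the strong-preference/fast-response pattern the surrounding text describes.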
1608.08020
2950975278
An election is a process through which citizens in liberal democracies select their governing bodies, usually through voting. For elections to be truly honest, people must be able to vote freely without being subject to coercion; that is why voting is usually done in a private manner. In this paper we analyze the security offered by a paper-ballot voting system that is used in Israel, as well as in several other countries around the world. We provide an algorithm which, based on publicly available information, breaks the privacy of the voters participating in such elections. Simulations based on real data collected in Israel show that our algorithm performs well, and can correctly recover the vote of up to 96% of the voters.
Mixes are widely used to model private communications. Proposed by Chaum in 1981 @cite_17 , a mix is a means for delivering messages anonymously between senders and receivers. Communication in a mix is split into rounds, such that in each round @math senders send messages which are then sent to @math receivers in an arbitrary or random order.
{ "cite_N": [ "@cite_17" ], "mid": [ "2103647628" ], "abstract": [ "A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients." ] }
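The round structure described in this record is easy to state in code: in each round a timed mix buffers the senders' messages and then forwards them, stripped of sender identity, in a random order. This is only a toy sketch of the abstract model, not of any deployed mix; the (sender, recipient, payload) tuple layout is an assumption:

```python
import random

def timed_mix_round(messages, rng=random):
    """One round of a timed mix: buffer the round's (sender, recipient,
    payload) messages, shuffle them, and forward only (recipient, payload)
    pairs, severing the observable link between senders and outputs."""
    batch = list(messages)
    rng.shuffle(batch)
    return [(recipient, payload) for _sender, recipient, payload in batch]
```

An observer of a single round learns only the set of senders and the multiset of delivered (recipient, payload) pairs — roughly the per-round information that intersection-style attacks on mixes accumulate across many rounds.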
1608.08020
2950975278
An election is a process through which citizens in liberal democracies select their governing bodies, usually through voting. For elections to be truly honest, people must be able to vote freely without being subject to coercion; that is why voting is usually done in a private manner. In this paper we analyze the security offered by a paper-ballot voting system that is used in Israel, as well as in several other countries around the world. We provide an algorithm which, based on publicly available information, breaks the privacy of the voters participating in such elections. Simulations based on real data collected in Israel show that our algorithm performs well, and can correctly recover the vote of up to 96% of the voters.
Each ballot box in the Israeli voting system can be modeled as a certain kind of a mix, namely a timed-mix. In such a mix, a buffer of messages is mixed once in each time period. The set of voters in each polling station corresponds to the set of senders, while the set of parties contesting in an election corresponds to the set of receivers. There are various known attacks on mixes @cite_9 @cite_20 @cite_21 @cite_18 @cite_3 and we refer the interested reader to a recent survey @cite_15 .
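As a toy illustration of the timed-mix model described above (a sketch with our own naming, not the paper's algorithm): all messages arriving during a time period are buffered and then released in a random order, severing the link between sender and delivered message.

```python
import random

def timed_mix_round(messages, rng=random):
    """One round of a timed mix: buffer the messages received during a
    time period, then release them in a random order, hiding which
    sender produced which delivered message."""
    batch = list(messages)  # buffer the round's input
    rng.shuffle(batch)      # mix: apply a random permutation
    return batch

# Modeling a ballot box: voters are the senders, parties the receivers.
votes = {"voter_%d" % i: party
         for i, party in enumerate(["A", "B", "A", "C", "B"])}
mixed = timed_mix_round(votes.values())
# The multiset of ballots is preserved; the voter-to-ballot link is not.
assert sorted(mixed) == sorted(votes.values())
```

Attacks on such mixes exploit repeated observations of which senders were active in rounds with particular outputs, which is exactly the structure of per-station election tallies.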
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_21", "@cite_3", "@cite_15", "@cite_20" ], "mid": [ "2464505152", "2102714612", "1764405480", "", "", "1494538302" ], "abstract": [ "A passive attacker can compromise a generic anonymity protocol by applying the so called disclosure attack, i.e. a special traffic analysis attack. In this work we present a more efficient way to accomplish this goal, i.e. we need less observations by looking for unique minimal hitting sets. We call this the hitting set attack or just HS-attack. In general, solving the minimal hitting set problem is NP-hard. Therefore, we use frequency analysis to enhance the applicability of our attack. It is possible to apply highly efficient backtracking search algorithms. We call this approach the statistical hitting set attack or SHS-attack. However, the statistical hitting set attack is prone to wrong solutions with a given small probability. We use here duality checking algorithms to resolve this problem. We call this final exact attack the HS*-attack.", "Anonymity services hide user identity at the network or address level but are vulnerable to attacks involving repeated observations of the user. Quantifying the number of observations required for an attack is a useful measure of anonymity.", "A user is only anonymous within a set of other users. Hence, the core functionality of an anonymity providing technique is to establish an anonymity set. In open environments, such as the Internet, the established anonymity sets in the whole are observable and change with every anonymous communication. We use this fact of changing anonymity sets and present a model where we can determine the protection limit of an anonymity technique, i.e. the number of observations required for an attacker to \"break\" uniquely a given anonymity technique. In this paper, we use the popular MIX method to demonstrate our attack. The MIX method forms the basis of most of the today's deployments of anonymity services (e.g. 
Freedom, Onion Routing, Webmix). We note that our approach is general and can be applied equally well to other anonymity providing techiques.", "", "", "An improvement over the previously known disclosure attack is presented that allows, using statistical methods, to effectively deanonymize users of a mix system. Furthermore the statistical disclosure attack is computationally efficient, and the conditions for it to be possible and accurate are much better understood. The new attack can be generalized easily to a variety of anonymity systems beyond mix networks." ] }
1608.08039
2514652784
In this note we construct minimax observers for linear stationary DAEs with bounded uncertain inputs, given noisy measurements. We prove a new duality principle and show that a finite (infinite) horizon minimax observer exists if and only if the DAE is @math -impulse observable ( @math -detectable) . Remarkably, the regularity of the DAE is not required.
To the best of our knowledge, the results of this paper are new. A preliminary version appeared in @cite_15 . With respect to @cite_15 , the main differences are: (i) new (necessary and sufficient) conditions for the existence of minimax observers (Theorem ), and (ii) detailed proofs. The duality principle for non-stationary DAEs was introduced in @cite_6 , provided @math . It was then used to derive a sub-optimal observer. In contrast, our duality theorems hold for DAEs with uncertain @math , and the constructed observers are optimal in that the worst-case estimation errors associated with the observers are minimal (see Definitions -). The algorithm of @cite_20 , which constructs a finite horizon minimax observer using projectors @math , is a special case of the one in this paper. Minimax observers for discrete-time DAEs were considered in @cite_0 .
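In generic terms (the notation here is ours, for illustration only, and not the paper's), the minimax criterion behind such observers can be stated as follows: given measurements of the state subject to bounded uncertainty, the observer minimizes the worst-case estimation error.

```latex
% Illustrative statement of the minimax criterion (our notation):
% measurements y = Cx + \eta, with uncertain input f and noise \eta
% ranging over a bounded set \mathcal{E}; the observer u estimates
% the linear functional \ell(x(T)) of the state.
\sigma(u) = \sup_{(f,\,\eta)\in\mathcal{E}} \bigl|\ell(x(T)) - u(y)\bigr|,
\qquad
\hat{u} = \arg\min_{u}\, \sigma(u).
```

This matches the sense in which the paper's observers are optimal: the worst-case error @math associated with @math is minimal among all observers.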
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_20", "@cite_6" ], "mid": [ "2072850088", "2963300528", "", "1998828465" ], "abstract": [ "Abstract This paper presents a generalization of the minimax state estimation approach for singular linear Differential-Algebraic Equations (DAE) with uncertain but bounded input and observation's noise. We apply generalized Kalman Duality principle to DAE in order to represent the minimax estimate as a solution of a dual control problem for adjoint DAE. The latter is then solved converting the adjoint DAE into ODE by means of a projection algorithm. Finally, we represent the minimax estimate in the form of a linear recursive filter.", "In this paper we construct an infinite horizon minimax state observer for a linear stationary differential-algebraic equation (DAE) with uncertain but bounded input and noisy output. We do not assume regularity or existence of a (unique) solution for any initial state of the DAE. Our approach is based on a generalization of Kalman's duality principle. In addition, we obtain a solution of infinite-horizon linear quadratic optimal control problem for DAE.", "", "In this paper we present Kalman duality principle for a class of linear Differential-Algebraic Equations (DAE) with arbitrary index and time-varying coefficients. We apply it to an ill-posed minimax control problem with DAE constraint and derive a corresponding dual control problem. It turns out that the dual problem is ill-posed as well and so classical optimality conditions are not applicable in the general case. We construct a minimizing sequence ( u _ ) for the dual problem applying Tikhonov method. Finally we represent ( u _ ) in the feedback form using Riccati equation on a subspace which corresponds to the differential part of the DAE." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet ) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to RegionNet to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimal performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
Salient object detection was first exploited by Itti @cite_22 , and later attracted wide attention in the computer vision community. Traditional methods mostly rely on prior assumptions and are largely unsupervised. Center-surround difference, which assumes that salient regions differ from their surrounding regions, was an important prior in early research. Itti @cite_22 first proposed center-surround difference at different scales to compute saliency. Liu @cite_25 propose the center-surround histogram, which defines saliency as the difference between a center region and its surrounding region. Li @cite_44 propose a cost-sensitive SVM to learn and discover salient regions that differ from their surroundings. These methods cannot provide sharp boundaries for salient regions because they operate on rectangular regions, which only yield coarse and blurry boundaries.
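As a minimal sketch of the center-surround idea (our own toy implementation, not any cited method): score a location by how much the intensity histogram of a center window differs from that of its surrounding ring, here with a chi-square distance.

```python
import numpy as np

def center_surround_saliency(img, cy, cx, r_in, r_out, bins=16):
    """Toy center-surround difference: chi-square distance between the
    intensity histograms of a center window and its surrounding ring.
    A large distance means the center stands out from its surround."""
    def hist(patch):
        h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)

    center = img[cy - r_in:cy + r_in, cx - r_in:cx + r_in]
    outer = img[cy - r_out:cy + r_out, cx - r_out:cx + r_out].copy()
    # Mask out the center so only the surrounding ring remains.
    outer[r_out - r_in:r_out + r_in, r_out - r_in:r_out + r_in] = np.nan
    surround = outer[~np.isnan(outer)]
    hc, hs = hist(center), hist(surround)
    return 0.5 * np.sum((hc - hs) ** 2 / (hc + hs + 1e-12))

# A bright square on a dark background scores higher than a flat patch.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
assert center_surround_saliency(img, 32, 32, 4, 8) > \
       center_surround_saliency(img, 10, 10, 4, 8)
```

Because the windows are rectangles, the score is inherently coarse at object boundaries, which is exactly the limitation the text attributes to these early methods.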
{ "cite_N": [ "@cite_44", "@cite_25", "@cite_22" ], "mid": [ "2137110664", "2157554677", "2128272608" ], "abstract": [ "Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hyper graph that utilizes a set of hyper edges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyper edges in the hyper graph. The main advantage of hyper graph modeling is that it takes into account each pixel's (or region's) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on center-versus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the state-of-the-art approaches to salient object detection.", "We study visual attention by detecting a salient object in an input image. We formulate salient object detection as an image segmentation problem, where we separate the salient object from the image background. We propose a set of novel features including multi-scale contrast, center-surround histogram, and color spatial distribution to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. We also constructed a large image database containing tens of thousands of carefully labeled images by multiple users. To our knowledge, it is the first large image database for quantitative evaluation of visual attention algorithms. 
We validate our approach on this image database, which is public available with this paper.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet ) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to RegionNet to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimal performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
While center-surround difference considers local contrast, it does not take global contrast into consideration. Global-contrast-based methods were later proposed, e.g., by Cheng @cite_62 and Yan @cite_60 . In @cite_62 , the image is first segmented into superpixels, and the saliency value of each region is defined as its contrast with all other regions. The contrast is weighted by spatial distance so that nearby regions have a greater impact. To deal with objects of complex structure, Yan @cite_60 propose a hierarchical model which analyzes saliency cues at multiple scales based on local contrast and then infers the final saliency values of regions by optimizing them in a tree model. Following these, many methods utilizing bottom-up priors have been proposed; readers can find more details in a recent survey by Borji @cite_63 .
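The region-contrast scheme described above can be sketched as follows (a toy version with our own parameterization, not the cited implementation): each region's saliency sums its color contrast to all other regions, weighted by region size and by a Gaussian falloff in spatial distance so that nearby regions contribute more.

```python
import numpy as np

def region_contrast_saliency(colors, centers, sizes, sigma=0.4):
    """Toy global-contrast saliency: a region's saliency is its color
    difference to all other regions, weighted by region size and by a
    Gaussian falloff of the spatial distance between region centers."""
    colors = np.asarray(colors, float)    # (n, 3) mean region colors
    centers = np.asarray(centers, float)  # (n, 2) region centroids
    sizes = np.asarray(sizes, float)      # (n,) region areas
    n = len(colors)
    sal = np.zeros(n)
    for i in range(n):
        dc = np.linalg.norm(colors - colors[i], axis=1)    # color contrast
        ds = np.linalg.norm(centers - centers[i], axis=1)  # spatial distance
        w = sizes * np.exp(-(ds ** 2) / (2 * sigma ** 2))  # nearby, large regions weigh more
        w[i] = 0.0  # a region does not contrast with itself
        sal[i] = np.sum(w * dc)
    return sal / max(sal.max(), 1e-12)

# The one red region among greens gets the highest saliency.
sal = region_contrast_saliency(
    colors=[[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]],
    centers=[[0.2, 0.2], [0.8, 0.2], [0.2, 0.8], [0.8, 0.8]],
    sizes=[1, 1, 1, 1])
assert sal.argmax() == 0
```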
{ "cite_N": [ "@cite_62", "@cite_63", "@cite_60" ], "mid": [ "2037954058", "2160613239", "2002781701" ], "abstract": [ "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "Detecting and segmenting salient objects from natural scenes, often referred to as salient object detection, has attracted great interest in computer vision. While many models have been proposed and several applications have emerged, a deep understanding of achievements and issues remains lacking. 
We aim to provide a comprehensive review of recent progress in salient object detection and situate this field among other closely related areas such as generic scene segmentation, object proposal generation, and saliency for fixation prediction. Covering 228 publications, we survey i) roots, key concepts, and tasks, ii) core techniques and main modeling trends, and iii) datasets and evaluation metrics for salient object detection. We also discuss open problems such as evaluation metrics and dataset bias in model performance, and suggest future research directions.", "When dealing with objects with complex structures, saliency detection confronts a critical problem - namely that detection accuracy could be adversely affected if salient foreground or background in an image contains small-scale high-contrast patterns. This issue is common in natural images and forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. The final saliency map is produced in a hierarchical model. Different from varying patch sizes or downsizing images, our scale-based region handling is by finding saliency values optimally in a tree model. Our approach improves saliency detection on many images that cannot be handled well traditionally. A new dataset is also constructed." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet ) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to RegionNet to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimal performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
Wang @cite_3 propose to detect salient objects by integrating local estimation and global search with two trained networks, DNN-L and DNN-G. Zhao @cite_0 consider global and local context by placing a global window and a closer-focused superpixel-centered window to extract features of each superpixel, and then combine them to predict a saliency score. Li @cite_46 propose multi-scale deep features, extracting features of each region at three scales and then fusing them to generate its saliency score. These works are region-based and focus on extracting features of regions, fusing larger-scale regions as context to predict the saliency score of each region. The fusion is mostly applied at only one layer and does not achieve optimal performance. In addition, these networks extract the features of one region per forward pass, which is very time-consuming.
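A toy sketch of the multi-scale region-feature idea (the function names and the histogram "extractor" below are our own stand-ins, not any cited architecture): crop a region at several context scales around its box, extract a feature per crop, and concatenate.

```python
import numpy as np

def crop_scaled(img, box, scale):
    """Crop `box` enlarged by `scale` around its center, clamped to
    the image bounds."""
    y0, x0, y1, x1 = box
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    hh, hw = (y1 - y0) * scale / 2.0, (x1 - x0) * scale / 2.0
    H, W = img.shape[:2]
    ys, ye = max(0, int(cy - hh)), min(H, int(cy + hh))
    xs, xe = max(0, int(cx - hw)), min(W, int(cx + hw))
    return img[ys:ye, xs:xe]

def multiscale_features(img, box, extract, scales=(1.0, 2.0, 4.0)):
    """One feature vector per context scale, concatenated, so the
    final descriptor mixes the region with progressively more of its
    surroundings (`extract` is a stand-in feature extractor)."""
    return np.concatenate([extract(crop_scaled(img, box, s))
                           for s in scales])

# Hypothetical extractor: an 8-bin intensity histogram per crop.
extract = lambda p: np.histogram(p, bins=8, range=(0, 1))[0].astype(float)
img = np.random.rand(64, 64)
feat = multiscale_features(img, (20, 20, 40, 40), extract)
assert feat.shape == (24,)  # 3 scales x 8 bins
```

The cost issue noted in the text is visible here: each region needs its own set of crops and extractor calls, so scoring many regions means many forward passes.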
{ "cite_N": [ "@cite_0", "@cite_46", "@cite_3" ], "mid": [ "1942214758", "1894057436", "1947031653" ], "abstract": [ "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. 
Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets.", "This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited principally rather than heuristically. 
Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet ) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to RegionNet to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimal performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
Recently, CNNs have also been applied to pixels-to-pixels dense image prediction, such as semantic segmentation and saliency prediction. Long @cite_53 propose fully convolutional networks, trained end-to-end and pixels-to-pixels, by introducing fully convolutional layers and a skip architecture. Chen @cite_34 propose a coarse-to-fine approach in which a first CNN generates a coarse map from the entire image and a second CNN takes the coarse map and a local patch as input to generate a fine-grained saliency map. Li @cite_30 propose a multi-task model based on a fully convolutional network, in which the saliency detection task is trained in conjunction with an object segmentation task that helps with perceiving objects; a Laplacian-regularized regression is then applied to refine the saliency map. However, while end-to-end dense saliency prediction is efficient, the resulting saliency maps are coarse, with blurry object boundaries, due to the presence of convolutional layers with large receptive fields and pooling layers.
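A minimal numeric sketch of the skip-architecture idea (nearest-neighbor upsampling stands in for the learned deconvolution of an actual FCN): upsample the coarse, semantically strong prediction and combine it with a spatially finer one.

```python
import numpy as np

def upsample2x(m):
    """Nearest-neighbor 2x upsampling (a stand-in for learned
    deconvolution in an FCN)."""
    return np.repeat(np.repeat(m, 2, axis=0), 2, axis=1)

def skip_fuse(coarse, fine):
    """FCN-style skip fusion: upsample the coarse score map and add
    the finer one, recovering spatial detail lost to pooling."""
    return upsample2x(coarse) + fine

coarse = np.ones((4, 4))                    # e.g. a deep-layer score map
fine = np.zeros((8, 8)); fine[2, 2] = 1.0   # e.g. a shallower-layer map
fused = skip_fuse(coarse, fine)
assert fused.shape == (8, 8) and fused[2, 2] == 2.0
```

Even with skips, each upsampled cell spreads one coarse score over several output pixels, which illustrates why dense predictions tend to blur object boundaries.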
{ "cite_N": [ "@cite_30", "@cite_53", "@cite_34" ], "mid": [ "2147347517", "1903029394", "2270657321" ], "abstract": [ "A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. 
We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Salient object detection increasingly receives attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been intensively proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we provide a novel deep image saliency computing (DISC) framework for fine-grained image saliency computing. In particular, we model the image saliency from both the coarse-and fine-level observations, and utilize the deep convolutional neural network (CNN) to learn the saliency representation in a progressive manner. In particular, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as the input, roughly identifying saliency regions in the global context. Furthermore, we integrate superpixel-based local context information in the first CNN to refine the coarse-level saliency map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce fine-grained and accurate saliency map while preserving object details. 
For a testing image, the two CNNs collaboratively conduct the saliency computing in one shot. Our DISC framework is capable of uniformly highlighting the objects of interest from complex background while preserving well object details. Extensive experiments on several standard benchmarks suggest that DISC outperforms other state-of-the-art methods and it also generalizes well across data sets without additional training. The executable version of DISC is available online: http: vision.sysu.edu.cn projects DISC ." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet ) to efficiently generate saliency maps with sharp object boundaries. Later, to further improve it, multi-scale spatial context is attached to RegionNet to consider the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection by depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimal performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
RGB-D saliency is an emerging topic, and most RGB-D saliency methods fuse depth priors with RGB saliency priors. Ju @cite_16 propose an RGB-D saliency method based on anisotropic center-surround difference, in which saliency measures how much a point stands out from its surroundings. Peng @cite_49 propose depth saliency with multi-contextual contrast and then fuse it with appearance cues via a multi-stage model. Ren @cite_18 propose a normalized depth prior and a global-context surface orientation prior based on depth information, and then fuse them with RGB region-contrast priors. Depth contrast may cause false positives in background regions; to address this, Feng @cite_41 propose a local background enclosure feature based on the observation that salient objects tend to be locally in front of their surrounding regions. To the best of our knowledge, existing RGB-D salient object detection methods all use hand-crafted features, and their performance is not optimal.
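As a toy sketch in the spirit of the local-background-enclosure idea (our simplification, not the cited algorithm): count the fraction of angular directions in which the depth neighborhood lies behind the point, so objects locally in front of their surround score high while mere depth contrast does not.

```python
import numpy as np

def background_enclosure(depth, y, x, r=5, n_dirs=16, t=0.0):
    """Toy enclosure feature: the fraction of sampled directions in
    which the neighborhood at radius r is deeper (farther) than the
    point at (y, x)."""
    H, W = depth.shape
    behind = 0
    for k in range(n_dirs):
        a = 2 * np.pi * k / n_dirs
        yy = int(round(y + r * np.sin(a)))
        xx = int(round(x + r * np.cos(a)))
        if 0 <= yy < H and 0 <= xx < W and depth[yy, xx] - depth[y, x] > t:
            behind += 1
    return behind / float(n_dirs)

# A near blob on a far background is enclosed in every direction.
d = np.full((32, 32), 5.0)
d[12:20, 12:20] = 1.0
assert background_enclosure(d, 16, 16, r=10) == 1.0
```

A background point with the same absolute depth contrast to the blob is not enclosed (the surround is not consistently behind it), which is the property that suppresses the false positives mentioned above.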
{ "cite_N": [ "@cite_41", "@cite_18", "@cite_16", "@cite_49" ], "mid": [ "2461758788", "1938386764", "1976409045", "20683899" ], "abstract": [ "Recent work in salient object detection has considered the incorporation of depth cues from RGB-D images. In most cases, depth contrast is used as the main feature. However, areas of high contrast in background regions cause false positives for such methods, as the background frequently contains regions that are highly variable in depth. Here, we propose a novel RGB-D saliency feature. Local Background Enclosure (LBE) captures the spread of angular directions which are background with respect to the candidate region and the object that it is part of. We show that our feature improves over state-of-the-art RGB-D saliency approaches as well as RGB methods on the RGBD1000 and NJUDS2000 datasets.", "Inspired by the effectiveness of global priors for 2D saliency analysis, this paper aims to explore those particular to RGB-D data. To this end, we propose two priors, which are the normalized depth prior and the global-context surface orientation prior, and formulate them in the forms simple for computation. A two-stage RGB-D salient object detection framework is presented. It first integrates the region contrast, together with the background, depth, and orientation priors to achieve a saliency map. Then, a saliency restoration scheme is proposed, which integrates the PageRank algorithm for sampling high confident regions and recovers saliency for those ambiguous. The saliency map is thus reconstructed and refined globally. We conduct comparative experiments on two publicly available RGB-D datasets. Experimental results show that our approach consistently outperforms other state-of-the-art algorithms on both datasets.", "Most previous works on saliency detection are dedicated to 2D images. Recently it has been shown that 3D visual information supplies a powerful cue for saliency analysis. In this paper, we propose a novel saliency method that works on depth images based on anisotropic center-surround difference. Instead of depending on absolute depth, we measure the saliency of a point by how much it outstands from surroundings, which takes the global depth structure into consideration. Besides, two common priors based on depth and location are used for refinement. The proposed method works within a complexity of O(N) and the evaluation on a dataset of over 1000 stereo images shows that our method outperforms state-of-the-art.", "Although depth information plays an important role in the human vision system, it is not yet well-explored in existing visual saliency computational models. In this work, we first introduce a large scale RGBD image dataset to address the problem of data deficiency in current research of RGBD salient object detection. To make sure that most existing RGB saliency models can still be adequate in RGBD scenarios, we continue to provide a simple fusion framework that combines existing RGB-produced saliency with new depth-induced saliency, the former one is estimated from existing RGB models while the latter one is based on the proposed multi-contextual contrast model. Moreover, a specialized multi-stage RGBD model is also proposed which takes account of both depth and appearance cues derived from low-level feature contrast, mid-level region grouping and high-level priors enhancement. Extensive experiments show the effectiveness and superiority of our model which can accurately locate the salient objects from RGBD images, and also assign consistent saliency values for the target objects." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these observations, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet) to efficiently generate saliency maps with sharp object boundaries. To further improve it, multi-scale spatial context is attached to RegionNet to model the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection via depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimized performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
Multi-scale context has been proved to be useful for the image segmentation task @cite_4 @cite_5 @cite_43 @cite_65 . Hariharan @cite_4 proposed hypercolumns for object segmentation and fine-grained localization, defining the "hypercolumn" at a given input location as the outputs of all layers at that location. Features of different layers are combined and then used for classification. Zhao @cite_0 proposed a multi-context network which extracts features of a given superpixel at global and local scales, and then predicts the saliency value of that superpixel. Li @cite_46 proposed to extract features at three scales: the bounding box, a neighbouring rectangular region, and the entire image. Liu @cite_56 proposed to apply recurrent convolutional layers (RCLs) @cite_57 iteratively to integrate context information and refine saliency maps. At each step, the RCL takes the coarse saliency map from the last step and a feature map from a lower layer as input to predict a finer saliency map. In this way, context information is integrated iteratively, and the final saliency map is more accurate than one predicted from the global context alone.
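The hypercolumn construction of @cite_4, describing each pixel by the stacked activations of all layers above it, can be sketched as follows. The mock activations and nearest-neighbour upsampling here are simplifying assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def upsample_nearest(fmap, out_h, out_w):
    """Nearest-neighbour upsampling of a (C, h, w) feature map."""
    c, h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[:, rows][:, :, cols]

def hypercolumns(feature_maps, out_h, out_w):
    """Per-pixel descriptor = concatenation of the (upsampled) activations
    of every layer at that spatial location."""
    ups = [upsample_nearest(f, out_h, out_w) for f in feature_maps]
    return np.concatenate(ups, axis=0)  # shape (sum of channels, out_h, out_w)

# Mock CNN activations at three resolutions (channels, height, width),
# standing in for shallow, middle, and deep layers.
rng = np.random.default_rng(0)
feats = [rng.normal(size=(8, 32, 32)),
         rng.normal(size=(16, 16, 16)),
         rng.normal(size=(32, 8, 8))]
hc = hypercolumns(feats, 32, 32)
# hc[:, y, x] is the 56-dimensional hypercolumn descriptor of pixel (y, x).
```

A per-pixel classifier (e.g., for saliency or part labeling) would then be trained on these descriptors, combining the precise localization of shallow layers with the semantics of deep ones.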
{ "cite_N": [ "@cite_4", "@cite_65", "@cite_56", "@cite_0", "@cite_43", "@cite_57", "@cite_5", "@cite_46" ], "mid": [ "1948751323", "", "2461475918", "1942214758", "", "1934184906", "", "1894057436" ], "abstract": [ "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.", "", "Traditional salient object detection models often use hand-crafted features to formulate contrast and various prior knowledge, and then combine them artificially. In this work, we propose a novel end-to-end deep hierarchical saliency network (DHSNet) based on convolutional neural networks for detecting salient objects. DHSNet first makes a coarse global prediction by automatically learning various global structured saliency cues, including global contrast, objectness, compactness, and their optimal combination. Then a novel hierarchical recurrent convolutional neural network (HRCNN) is adopted to further hierarchically and progressively refine the details of saliency maps step by step via integrating local context information. The whole architecture works in a global to local and coarse to fine manner. DHSNet is directly trained using whole images and corresponding ground truth saliency masks. When testing, saliency maps can be generated by directly and efficiently feedforwarding testing images through the network, without relying on any other techniques. Evaluations on four benchmark datasets and comparisons with other 11 state-of-the-art algorithms demonstrate that DHSNet not only shows its significant superiority in terms of performance, but also achieves a real-time speed of 23 FPS on modern GPUs.", "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "", "In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.", "", "Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets." ] }
1608.08029
2517325737
In this paper, we propose a novel edge-preserving and multi-scale contextual neural network for salient object detection. The proposed framework aims to address two limitations of existing CNN-based methods. First, region-based CNN methods lack sufficient context to accurately locate salient objects since they deal with each region independently. Second, pixel-based CNN methods suffer from blurry boundaries due to the presence of convolutional and pooling layers. Motivated by these observations, we first propose an end-to-end edge-preserving neural network based on the Fast R-CNN framework (named RegionNet) to efficiently generate saliency maps with sharp object boundaries. To further improve it, multi-scale spatial context is attached to RegionNet to model the relationship between regions and the global scene. Furthermore, our method can be generally applied to RGB-D saliency detection via depth refinement. The proposed framework achieves both clear detection boundaries and multi-scale contextual robustness simultaneously for the first time, and thus achieves optimized performance. Experiments on six RGB and two RGB-D benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
Fixation prediction @cite_22 @cite_35 @cite_64 @cite_39 aims to predict the regions people may pay attention to, and semantic segmentation @cite_53 @cite_47 aims to segment objects of certain classes in images. Both topics are related to salient object detection, but they also differ from it significantly. Fixation prediction aims to predict the regions that most attract people's attention, while salient object detection focuses on segmenting the most attractive objects. Compared with semantic segmentation, saliency detection is a class-agnostic task: whether an object is salient or not largely depends on its surroundings, while semantic segmentation focuses on segmenting objects of certain predefined classes (e.g., the 20 classes in the PASCAL VOC dataset). Context information is therefore more important for saliency detection than for semantic segmentation, and this is the main motivation of our work.
{ "cite_N": [ "@cite_35", "@cite_64", "@cite_22", "@cite_53", "@cite_39", "@cite_47" ], "mid": [ "2133589685", "1996031228", "2128272608", "1903029394", "2138046011", "" ], "abstract": [ "We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model’s bottom-up saliency maps perform as well as or better than existing algorithms in predicting people’s fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.", "Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "A novel Boolean Map based Saliency (BMS) model is proposed. An image is characterized by a set of binary images, which are generated by randomly thresholding the image's color channels. Based on a Gestalt principle of figure-ground segregation, BMS computes saliency maps by analyzing the topological structure of Boolean maps. BMS is simple to implement and efficient to run. Despite its simplicity, BMS consistently achieves state-of-the-art performance compared with ten leading methods on five eye tracking datasets. Furthermore, BMS is also shown to be advantageous in salient object detection.", "" ] }
1608.08334
2523233161
Thanks to the availability and increasing popularity of egocentric cameras such as GoPro cameras and smart glasses, we have been provided with a plethora of videos captured from the first-person perspective. Surveillance cameras and unmanned aerial vehicles (also known as drones) also offer a tremendous amount of video, mostly with a top-down or oblique viewpoint. Egocentric vision and top-view surveillance videos have each been studied extensively in the computer vision community. However, the relationship between the two has yet to be explored thoroughly. In this effort, we attempt to explore this relationship by approaching two questions. First, given a set of egocentric videos and a top-view video, can we verify whether the top-view video contains all, or some, of the egocentric viewers present in the egocentric set? And second, can we identify the egocentric viewers in the content of the top-view video? In other words, can we find the cameramen in the surveillance videos? These problems become more challenging when the videos are not time-synchronized. Thus we formalize the problem in a way that handles, and also estimates, the unknown relative time delays between the egocentric videos and the top-view video. We formulate the problem as a spectral graph matching instance and jointly seek the optimal assignments and relative time delays of the videos. As a result, we spatiotemporally localize the egocentric observers in the top-view video. We model each view (egocentric or top) using a graph, and compute the assignments and time delays in an iterative-alternating fashion.
Visual analysis of egocentric videos has recently become a hot research topic in computer vision @cite_13 @cite_10 , ranging from recognizing daily activities @cite_16 @cite_23 to object detection @cite_2 , video summarization @cite_6 , and predicting gaze behavior @cite_12 @cite_37 @cite_14 . In the following, we review some previous work related to ours across these areas.
{ "cite_N": [ "@cite_13", "@cite_37", "@cite_14", "@cite_6", "@cite_23", "@cite_2", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "", "", "2129053638", "2120645068", "2149276562", "2031688197", "", "", "2136668269" ], "abstract": [ "", "", "Several visual attention models have been proposed for describing eye movements over simple stimuli and tasks such as free viewing or visual search. Yet, to date, there exists no computational framework that can reliably mimic human gaze behavior in more complex environments and tasks such as urban driving. In addition, benchmark datasets, scoring techniques, and top-down model architectures are not yet well understood. In this paper, we describe new task-dependent approaches for modeling top-down overt visual attention based on graphical models for probabilistic inference and reasoning. We describe a dynamic Bayesian network that infers probability distributions over attended objects and spatial locations directly from observed data. Probabilistic inference in our model is performed over object-related functions that are fed from manual annotations of objects in video scenes or by state-of-the-art object detection recognition algorithms. Evaluating over approximately 3 h (approximately 315 000 eye fixations and 12 000 saccades) of observers playing three video games (time-scheduling, driving, and flight combat), we show that our approach is significantly more predictive of eye fixations compared to: 1) simpler classifier-based models also developed here that map a signature of a scene (multimodal information from gist, bottom-up saliency, physical actions, and events) to eye positions; 2) 14 state-of-the-art bottom-up saliency models; and 3) brute-force algorithms such as mean eye position. Our results show that the proposed model is more effective in employing and reasoning over spatio-temporal visual data compared with the state-of-the-art.", "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.", "We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.", "This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.", "", "", "We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods." ] }
1608.08334
2523233161
Thanks to the availability and increasing popularity of egocentric cameras such as GoPro cameras and smart glasses, we have been provided with a plethora of videos captured from the first-person perspective. Surveillance cameras and unmanned aerial vehicles (also known as drones) also offer a tremendous amount of video, mostly with a top-down or oblique viewpoint. Egocentric vision and top-view surveillance videos have each been studied extensively in the computer vision community. However, the relationship between the two has yet to be explored thoroughly. In this effort, we attempt to explore this relationship by approaching two questions. First, given a set of egocentric videos and a top-view video, can we verify whether the top-view video contains all, or some, of the egocentric viewers present in the egocentric set? And second, can we identify the egocentric viewers in the content of the top-view video? In other words, can we find the cameramen in the surveillance videos? These problems become more challenging when the videos are not time-synchronized. Thus we formalize the problem in a way that handles, and also estimates, the unknown relative time delays between the egocentric videos and the top-view video. We formulate the problem as a spectral graph matching instance and jointly seek the optimal assignments and relative time delays of the videos. As a result, we spatiotemporally localize the egocentric observers in the top-view video. We model each view (egocentric or top) using a graph, and compute the assignments and time delays in an iterative-alternating fashion.
To explore the relationship among multiple egocentric viewers, @cite_17 combines several egocentric videos into a single more complete video with less quality degradation by estimating the importance of different scene regions and exploiting the consensus among the egocentric videos. Fathi @cite_38 detect and recognize the type of social interactions, such as dialogue, monologue, and discussion, by detecting human faces and estimating their body and head orientations. Yonetani @cite_25 correlate the head motion of an egocentric observer with the humans present in other egocentric videos to perform self-search. @cite_20 proposes a multi-task clustering framework which searches for coherent clusters of daily actions, using the notion that people tend to perform similar actions in certain environments such as a workplace or kitchen. @cite_30 proposes a framework that discovers static and movable objects used by a set of egocentric users. Recent work in @cite_19 identifies the person who draws the most attention within a set of egocentric viewers, given time-synchronized egocentric videos of the viewers interacting with each other.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_19", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2198667788", "", "2963006768", "", "2294654473", "2061320885" ], "abstract": [ "We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publically available .", "", "Wearable cameras, such as Google Glass and Go Pro, enable video data collection over larger areas and from different views. In this paper, we tackle a new problem of locating the co-interest person (CIP), i.e., the one who draws attention from most camera wearers, from temporally synchronized videos taken by multiple wearable cameras. Our basic idea is to exploit the motion patterns of people and use them to correlate the persons across different videos, instead of performing appearance-based matching as in traditional video co-segmentation localization. This way, we can identify CIP even if a group of people with similar appearance are present in the view. More specifically, we detect a set of persons on each frame as the candidates of the CIP and then build a Conditional Random Field (CRF) model to select the one with consistent motion patterns in different videos and high spacial-temporal consistency in each video. We collect three sets of wearable-camera videos for testing the proposed algorithm. All the involved people have similar appearances in the collected videos and the experiments demonstrate the effectiveness of the proposed algorithm.", "", "Recognizing human activities from videos is a fundamental research problem in computer vision. Recently, there has been a growing interest in analyzing human behavior from data collected with wearable cameras. First-person cameras continuously record several hours of their wearers' life. To cope with this vast amount of unlabeled and heterogeneous data, novel algorithmic solutions are required. In this paper, we propose a multitask clustering framework for activity of daily living analysis from visual data gathered from wearable cameras. Our intuition is that, even if the data are not annotated, it is possible to exploit the fact that the tasks of recognizing everyday activities of multiple individuals are related, since typically people perform the same actions in similar environments, e.g., people working in an office often read and write documents). In our framework, rather than clustering data from different users separately, we propose to look for clustering partitions which are coherent among related tasks. In particular, two novel multitask clustering algorithms, derived from a common optimization problem, are introduced. Our experimental evaluation, conducted both on synthetic data and on publicly available first-person vision data sets, shows that the proposed approach outperforms several single-task and multitask learning methods.", "Videos recorded by wearable egocentric cameras can suffer from quality degradations that cannot always be fixed by current methods. When several wearable video cameras are viewing the same scene, each having highly variable quality, it is possible to combine them into a single high-quality video. Current techniques select for each point in time the highest quality video stream, but the highest quality video may not be relevant. E.g. the best quality video can come from a person that happen to look sideways from the main attraction. We propose the curation of a single video stream from multiple egocentric videos by requiring that the selected video will also view the most interesting region in the scene. Importance of a region is determined by the \"wisdom of the crowd\", i.e. the number of cameras looking at a region. The resulting video is more interesting and of higher quality than any individual video streams can possibly obtain. Several examples are presented demonstrating the effectiveness of this technique." ] }
1608.08014
2518479084
In this paper, we propose effective channel assignment algorithms for network utility maximization in a cellular network with underlaying device-to-device (D2D) communications. A major innovation is the consideration of partial channel state information (CSI), i.e., the base station (BS) is assumed to be able to acquire 'partial' instantaneous CSI of the cellular and D2D links, as well as the interference links. In contrast to existing works, multiple D2D links are allowed to share the same channel, and the quality of service (QoS) requirements for both the cellular and D2D links are enforced. We first develop an optimal channel assignment algorithm based on dynamic programming (DP), which enjoys a much lower complexity compared to exhaustive search and will serve as a performance benchmark. To further reduce complexity, we propose a cluster-based sub-optimal channel assignment algorithm. New closed-form expressions for the expected weighted sum-rate and the successful transmission probabilities are also derived. Simulation results verify the effectiveness of the proposed algorithms. Moreover, by comparing different partial CSI scenarios, we observe that the CSI of the D2D communication links and the interference links from the D2D transmitters to the BS significantly affects the network performance, while the CSI of the interference links from the BS to the D2D receivers only has a negligible impact.
Resource allocation in D2D networks faces two main challenges. Firstly, when multiple D2D links are allowed to access the same channel, the dynamic resource allocation problem becomes an NP-hard problem @cite_13 . Secondly, in cellular networks with underlaying D2D communications, the total number of links, especially interference links, is usually very large. As a result, overwhelming overheads will be incurred for collecting the channel state information (CSI) of all these links @cite_14 . Therefore, the commonly considered full CSI scenario is not practical, and the partial CSI case, where instantaneous CSI of part of the communication and interference links is unknown at the BS, should be considered. In this paper, we will develop effective channel assignment algorithms for D2D communications with partial CSI.
{ "cite_N": [ "@cite_14", "@cite_13" ], "mid": [ "1990557017", "2074454855" ], "abstract": [ "Device-to-device communication is likely to be added to LTE in 3GPP Release 12. In principle, exploiting direct communication between nearby mobile devices will improve spectrum utilization, overall throughput, and energy consumption, while enabling new peer-to-peer and location-based applications and services. D2D-enabled LTE devices can also become competitive for fallback public safety networks, which must function when cellular networks are not available or fail. Introducing D2D poses many challenges and risks to the long-standing cellular architecture, which is centered around the base station. We provide an overview of D2D standardization activities in 3GPP, identify outstanding technical challenges, draw lessons from initial evaluation studies, and summarize \"best practices\" in the design of a D2D-enabled air interface for LTE-based cellular networks", "In this paper, we investigate Device-to-Device (D2D) communication underlaying cellular networks to provide spectrally efficient support of local services. Since in underlay mode, D2D communications share resources in the time and frequency domains with cellular system, it will introduce potentially severe interference to the cellular users and accordingly presents a challenge in radio resource management. In order to avoid generating interference to the high-priority users (cellular users) operating on the same time-frequency resources and to optimize the throughput over the shared resources under the transmit power and the quality of service (QoS) constraints, we propose an interference alignment-based resource sharing scheme for D2D communication underlaying cellular networks. The simulation results demonstrate that by using the proposed scheme, D2D communication can effectively improve the total throughput without generating harmful interference to cellular networks." ] }
1608.08014
2518479084
In this paper, we propose effective channel assignment algorithms for network utility maximization in a cellular network with underlaying device-to-device (D2D) communications. A major innovation is the consideration of partial channel state information (CSI), i.e., the base station (BS) is assumed to be able to acquire 'partial' instantaneous CSI of the cellular and D2D links, as well as the interference links. In contrast to existing works, multiple D2D links are allowed to share the same channel, and the quality of service (QoS) requirements for both the cellular and D2D links are enforced. We first develop an optimal channel assignment algorithm based on dynamic programming (DP), which enjoys a much lower complexity compared to exhaustive search and will serve as a performance benchmark. To further reduce complexity, we propose a cluster-based sub-optimal channel assignment algorithm. New closed-form expressions for the expected weighted sum-rate and the successful transmission probabilities are also derived. Simulation results verify the effectiveness of the proposed algorithms. Moreover, by comparing different partial CSI scenarios, we observe that the CSI of the D2D communication links and the interference links from the D2D transmitters to the BS significantly affects the network performance, while the CSI of the interference links from the BS to the D2D receivers only has a negligible impact.
Most previous works have focused on the full CSI scenario, but, recently, some papers have considered the partial CSI case. Considering that the BS cannot acquire CSI of the interference links between user devices, a maximum weighted bipartite matching algorithm was applied in @cite_16 to obtain the optimal resource allocation scheme in a D2D network. However, this work allowed at most one D2D link to access one channel. In @cite_25 , multiple D2D links were allowed to access the same channel with the assumption that the BS only has knowledge of the CSI of cellular links, and a centralized channel assignment algorithm and a distributed power control algorithm were developed in the high SINR region. However, the high SINR assumption does not usually hold in D2D communications. Furthermore, all the previous works considering partial CSI have only dealt with one particular partial CSI scenario, e.g., in @cite_16 , the BS was assumed to know the CSI of all the links except the interference links between user devices. Thus, the relative importance of the CSI of different links in D2D networks is still unknown, and this is the question that will be explored in this paper.
{ "cite_N": [ "@cite_16", "@cite_25" ], "mid": [ "2509136231", "2149016651" ], "abstract": [ "In device-to-device (D2D) communications, channel state information (CSI) is exploited to manage the interference between D2D users and regular cellular users (CUs) and improve system performance. However, obtaining the accurate CSI is usually difficult and causes high overhead, particularly when the links are not connected to the base station (BS), such as the links between regular CUs and D2D receivers (CU-D links). In this paper, we investigate the signaling overhead and performance tradeoff in D2D communications with channel uncertainty. To limit interference to regular CUs, we only allow the resource of a CU to be reused by, at most, one D2D pair. We also assume that only partial CSI of the CU-D links is available at the BS and develop two different strategies to deal with the channel uncertainty, namely, probabilistic and partial feedback schemes. We first derive a probability-based resource-allocation scheme by utilizing channel statistical characteristics to maximize the overall throughput of the CUs and admissible D2D pairs while guaranteeing their quality of service (QoS) in terms of signal-to-interference-plus-noise ratio (SINR) and outage probability, respectively. Then, we propose an efficient feedback scheme to reduce the overhead of CSI feedback while providing near-optimal performance. In addition, we propose a combined scheme to take advantages of both probabilistic and partial feedback schemes. It is shown by simulation that there exists an optimal threshold of the outage probability for probabilistic scheme while the partial feedback scheme is robust to the channel models. 
Furthermore, the combined scheme outperforms the probabilistic and the partial feedback schemes in terms of overall throughput.", "The basic idea of device-to-device (D2D) communication is that pairs of suitably selected wireless devices reuse the cellular spectrum to establish direct communication links, provided that the adverse effects of D2D communication on cellular users are minimized and that cellular users are given higher priority in using limited wireless resources. Despite its great potential in terms of coverage and capacity performance, implementing this new concept poses some challenges, particularly with respect to radio resource management. The main challenges arise from a strong need for distributed D2D solutions that operate in the absence of precise channel and network knowledge. To address this challenge, this paper studies a resource allocation problem in a single-cell wireless network with multiple D2D users sharing the available radio frequency channels with cellular users. We consider a realistic scenario where the base station (BS) is provided with strictly limited channel knowledge, whereas D2D and cellular users have no information. We prove a lower bound for the cellular aggregate utility in the downlink with fixed BS power, which allows for decoupling the channel allocation and D2D power control problems. An efficient graph-theoretical approach is proposed to perform channel allocation, which offers flexibility with respect to allocation criteria (aggregate utility maximization, fairness, and quality-of-service (QoS) guarantee). We model the power control problem as a multiagent learning game. We show that the game is an exact potential game with noisy rewards, which is defined on a discrete strategy set, and characterize the set of Nash equilibria. Q-learning better-reply dynamics is then used to achieve equilibrium." ] }
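The maximum weighted bipartite matching used in @cite_16 (one D2D link per channel) can be illustrated with a tiny brute-force stand-in. This is only a sketch under assumed toy gains; a real system would use a polynomial-time algorithm (e.g. the Hungarian method) rather than enumerating permutations.

```python
from itertools import permutations

def max_weight_assignment(gain):
    """Brute-force max-weight one-to-one matching of D2D links (rows)
    to cellular-user channels (columns).

    `gain[i][j]` is a hypothetical expected-rate gain of pairing D2D
    link i with channel j; only usable for very small square instances.
    """
    n = len(gain)
    best_w, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        w = sum(gain[i][perm[i]] for i in range(n))
        if w > best_w:
            best_w, best_perm = w, perm
    return best_w, best_perm

# Toy instance: link 0 prefers channel 0, link 1 prefers channel 1.
gain = [[5.0, 1.0],
        [2.0, 4.0]]
weight, assignment = max_weight_assignment(gain)
```

Here the optimal matching keeps each link on its preferred channel (total weight 5.0 + 4.0); the constraint of at most one D2D link per channel is exactly what the bitmask DP above relaxes.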
1608.07857
2517992291
Device-to-device ( @math ) communication is a promising approach to optimize the utilization of air interface resources in 5G networks, since it allows decentralized opportunistic short-range communication. For @math to be useful, mobile nodes must possess content that other mobiles want. Thus, intelligent caching techniques are essential for @math . In this paper, we use results from stochastic geometry to derive the probability of successful content delivery in the presence of interference and noise. We employ a general transmission strategy, where multiple files are cached at the users and different files can be transmitted simultaneously throughout the network. We then formulate an optimization problem, and find the caching distribution that maximizes the density of successful receptions (DSR) under a simple transmission strategy, where a single file is transmitted at a time throughout the network. We model file requests by a Zipf distribution with exponent @math , which results in an optimal caching distribution that is also a Zipf distribution with exponent @math , which is related to @math through a simple expression involving the path loss exponent. We solve the optimal content placement problem for more general demand profiles under Rayleigh, Ricean, and Nakagami small-scale fading distributions. Our results suggest that it is required to flatten the request distribution to optimize the caching performance. We also develop strategies to optimize content caching for the more general case with multiple files, and bound the DSR for that scenario.
Different aspects of @math content distribution have been studied. Scalability in ad hoc networks is considered in @cite_25 , where decentralized algorithms for message forwarding are proposed based on a Zipf product-form model for message preferences. Throughput scaling laws with caching have been widely studied @cite_24 @cite_8 @cite_30 . The optimal collaboration distance, a Zipf distribution for content reuse, and the best achievable scaling for the expected number of active @math interference-free collaboration pairs for different Zipf exponents are studied in @cite_29 . With a heuristic (Zipf) choice of caching distribution for Zipf-distributed requests, the optimal collaboration distance @cite_4 and the Zipf exponent that maximizes the number of @math links @cite_8 are determined. However, in general, the caching pmf is not necessarily the same as the request pmf. This brings us to one of the main objectives of this paper, which is to find the caching pmf that achieves the best density of successful receptions ( @math ) in @math networks.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_8", "@cite_29", "@cite_24", "@cite_25" ], "mid": [ "1606035500", "2039124714", "2054581561", "1964721119", "2106248279", "1971814938" ], "abstract": [ "We consider a wireless device-to-device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this paper, we consider a D2D infrastructureless version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery (direct file transmissions) achieves the same throughput scaling law of the infrastructure-based coded multicasting scheme, in the regime of large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is, therefore, natural to ask whether these two gains are cumulative, i.e., if a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. 
Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law). This fact can be explained by noticing that the coded delivery scheme creates messages that are useful to multiple nodes, such that it benefits from broadcasting to as many nodes as possible, while spatial reuse capitalizes on the fact that the communication is local, such that the same time slot can be reused in space across the network. Unfortunately, these two issues are in contrast with each other.", "We propose a new scheme for increasing the throughput of video files in cellular communications systems. This scheme exploits (1) the redundancy of user requests as well as (2) the considerable storage capacity of smartphones and tablets. Users cache popular video files and-after receiving requests from other users-serve these requests via device-to-device localized transmissions. The file placement is optimal when a central control knows a priori the locations of wireless devices when file requests occur. However, even a purely random caching scheme shows only a minor performance loss compared to such a “genie-aided” scheme. We then analyze the optimal collaboration distance, trading off frequency reuse with the probability of finding a requested file within the collaboration distance. We show that an improvement of spectral efficiency of one to two orders of magnitude is possible, even if there is not very high redundancy in video requests.", "Video is the main driver for the inexorable increase in wireless data traffic. In this paper we analyze a new architecture in which device-to-device (D2D) communications is used to drastically increase the capacity of cellular networks for video transmission. Users cache popular video files and - after receiving requests from other users - serve these requests via D2D localized transmissions; the short range of the D2D transmission enables frequency reuse within the cell. 
We analyze the scaling behavior of the throughput with the number of devices per cell. The user content request statistics, as well as the caching distribution, are modeled by a Zipf distribution with parameters γ r and γ c , respectively. For the practically important case γ r c > 1, we derive a closed form expression for the scaling behavior of the number of D2D links that coexist without interference. Our analysis relies on a novel Poisson approximation result for wireless networks obtained through the Chen-Stein Method.", "We analyze a novel architecture for caching popular video content to enable wireless device-to-device (D2D) collaboration. We focus on the asymptotic scaling characteristics and show how they depend on video content popularity statistics. We identify a fundamental conflict between collaboration distance and interference and show how to optimize the transmission power to maximize frequency reuse. Our main result is a closed form expression of the optimal collaboration distance as a function of the model parameters. Under the common assumption of a Zipf distribution for content reuse, we show that if the Zipf exponent is greater than 1, it is possible to have a number of D2D interference-free collaboration pairs that scales linearly in the number of nodes. If the Zipf exponent is smaller than 1, we identify the best possible scaling in the number of D2D collaborating links. Surprisingly, a very simple distributed caching policy achieves the optimal scaling behavior.", "Caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users. Conventionally, these memories are used to deliver requested content in part from a locally cached copy rather than through the network. The gain offered by this approach, which we term local caching gain, depends on the local cache size (i.e., the memory available at each individual user). 
In this paper, we introduce and exploit a second, global, caching gain not utilized by conventional caching schemes. This gain depends on the aggregate global cache size (i.e., the cumulative memory available at all users), even though there is no cooperation among the users. To evaluate and isolate these two gains, we introduce an information-theoretic formulation of the caching problem focusing on its basic structure. For this setting, we propose a novel coded caching scheme that exploits both local and global caching gains, leading to a multiplicative improvement in the peak rate compared with previously known schemes. In particular, the improvement can be on the order of the number of users in the network. In addition, we argue that the performance of the proposed scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters.", "We argue that scalability in ad-hoc networks can be achieved by re-defining the functionality for the information transport system itself, where the functionality is driven by a new type of communication paradigm inherent in information dissemination applications. In particular, among the entire population of generated messages, each user desires only that the personally “most interesting” messages are delivered to them — we call this “star-to-one” communication. In the paper we consider a “Zipf product form” model for message preferences, and propose some decentralized algorithms for message forwarding based on this model. We discuss some simulation results for these algorithms, which suggest that it is possible for the users to efficiently obtain the messages that are of most interest to them. Essentially, the amount of “work” required of each user, on the average, is proportional to the desired number of messages to be received by each user, and is independent of the number of users and the number of messages in the network." ] }
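The distinction between the request pmf and the caching pmf discussed above can be made concrete with a small numeric sketch. The file count, exponents, and cluster size below are arbitrary illustrative assumptions; the model (each of m nearby nodes independently caches one file drawn from the caching pmf) is a simplification that ignores interference and the path-loss-dependent optimum derived in the paper.

```python
def zipf_pmf(n_files, gamma):
    """Zipf popularity over n_files items with exponent gamma."""
    weights = [(f + 1) ** -gamma for f in range(n_files)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(request_pmf, caching_pmf, m):
    """Chance a request is served locally when each of m nearby nodes
    independently caches one file drawn from caching_pmf."""
    return sum(r * (1.0 - (1.0 - c) ** m)
               for r, c in zip(request_pmf, caching_pmf))

requests = zipf_pmf(100, 1.2)
# Compare caching with the request pmf itself against a flattened
# (smaller-exponent) caching pmf, echoing the flattening result above.
for gamma_c in (1.2, 0.6):
    p_hit = hit_probability(requests, zipf_pmf(100, gamma_c), m=5)
```

The hit probability is strictly increasing in the cluster size m, and sweeping the caching exponent shows why the best caching pmf need not coincide with the request pmf.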
1608.07857
2517992291
Device-to-device ( @math ) communication is a promising approach to optimize the utilization of air interface resources in 5G networks, since it allows decentralized opportunistic short-range communication. For @math to be useful, mobile nodes must possess content that other mobiles want. Thus, intelligent caching techniques are essential for @math . In this paper, we use results from stochastic geometry to derive the probability of successful content delivery in the presence of interference and noise. We employ a general transmission strategy, where multiple files are cached at the users and different files can be transmitted simultaneously throughout the network. We then formulate an optimization problem, and find the caching distribution that maximizes the density of successful receptions (DSR) under a simple transmission strategy, where a single file is transmitted at a time throughout the network. We model file requests by a Zipf distribution with exponent @math , which results in an optimal caching distribution that is also a Zipf distribution with exponent @math , which is related to @math through a simple expression involving the path loss exponent. We solve the optimal content placement problem for more general demand profiles under Rayleigh, Ricean, and Nakagami small-scale fading distributions. Our results suggest that it is required to flatten the request distribution to optimize the caching performance. We also develop strategies to optimize content caching for the more general case with multiple files, and bound the DSR for that scenario.
Under the classical protocol model of ad hoc networks @cite_9 , for a grid network model with fixed cache size @math , as the number of users @math and the number of files @math become large with @math , the order-optimal caching distribution is studied and the per-node throughput is shown to behave as @math @cite_0 @cite_10 (the order optimality in @cite_0 @cite_10 is in the sense of a throughput-outage tradeoff, due to the simple model used). The network diameter is shown to scale as @math for a multi-hop scenario @cite_23 . It is shown that local multi-hop yields per-node throughput scaling as @math @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_0", "@cite_23", "@cite_10" ], "mid": [ "2740177153", "2137775453", "2964101002", "2164549277", "1611674276" ], "abstract": [ "We consider a wireless device-to-device (D2D) network in which the nodes are uniformly distributed at random over the network area and can cache information from a library of possible messages (files). Each node requests a file in the library independently at random, according to a given popularity distribution, and downloads from other nodes having the requested file in their local cache via multihop transmission. Under the classical “protocol model” of wireless ad hoc networks, we characterize the optimal throughput scaling law by presenting a feasible scheme formed by a decentralized caching policy for the parameter regimes of interest and a local multihop transmission protocol. The scaling law optimality of the proposed strategy is shown by deriving a new throughput upper bound. Surprisingly, we show that decentralized uniform random caching yields optimal scaling in most of the system interesting regimes. We also observe that caching improves the throughput scaling law of classical ad hoc networks, and that multihop improves the previously derived scaling law of caching wireless networks under one-hop transmission.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. 
Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance.
This implies that the D2D caching network is scalable: even though the number of users increases, each user achieves constant throughput. This behavior is very different from the classical Gupta and Kumar result on ad hoc wireless networks, for which the per-user throughput vanishes as the number of users increases. Furthermore, we show that the user throughput is directly proportional to the fraction of cached information over the whole file library size. Therefore, we can conclude that D2D caching networks can turn memory into bandwidth (i.e., doubling the on-board cache memory on the user devices yields a 100 increase of the user throughout).", "We investigate the scalability of multihop wireless communications, a major concern in networking, for the case that users access content replicated across the nodes. In contrast to the standard paradigm of randomly selected communicating pairs, content replication is efficient for certain regimes of file popularity, cache, and network size. Our study begins with the detailed joint content replication and delivery problem on a 2-D square grid, a hard combinatorial optimization. This is reduced to a simpler problem based on replication density, whose performance is of the same order as the original. Assuming a Zipf popularity law, and letting the size of content and network both go to infinity, we identify the scaling laws and regimes of the required link capacity, ranging from O(√N) down to O(1) .", "As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on-demand is the asynchronous content reuse , such that a few popular files account for a large part of the traffic but are viewed by users at different times. 
Caching of content on wireless devices in conjunction with device-to-device (D2D) communications allows to exploit this property, and provide a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting , and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somehow surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to provide those rare video requests that cannot be found in local caches. We provide extensive simulation results under a variety of system settings and compare our scheme with the systems that exploit transmission from the base station only. We show that, also in realistic conditions and nonasymptotic regimes, the proposed D2D approach offers very significant throughput gains." ] }
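The contrast drawn in the abstracts above, vanishing per-node throughput in classical ad hoc networks versus per-user throughput that is constant in n for cached D2D, can be sketched numerically. This is a minimal illustration of the Gupta-Kumar scaling order only; W = 1.0 and the sample network sizes are arbitrary assumptions, and constants hidden by the Θ notation are ignored.

```python
import math

def ad_hoc_per_node_rate(n, W=1.0):
    """Gupta-Kumar scaling for random ad hoc networks:
    per-node throughput on the order of W / sqrt(n * log n)."""
    return W / math.sqrt(n * math.log(n))

# Per-node rate vanishes as the network grows, unlike the cached-D2D
# schemes above whose per-user throughput stays constant in n.
rates = [ad_hoc_per_node_rate(n) for n in (10**2, 10**4, 10**6)]
```

Evaluating the formula at growing n makes the "constriction in capacity" of the classical model visible, which is exactly the bottleneck that caching plus spatial reuse is argued to remove.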
1608.07857
2517992291
Device-to-device ( @math ) communication is a promising approach to optimize the utilization of air interface resources in 5G networks, since it allows decentralized opportunistic short-range communication. For @math to be useful, mobile nodes must possess content that other mobiles want. Thus, intelligent caching techniques are essential for @math . In this paper, we use results from stochastic geometry to derive the probability of successful content delivery in the presence of interference and noise. We employ a general transmission strategy, where multiple files are cached at the users and different files can be transmitted simultaneously throughout the network. We then formulate an optimization problem, and find the caching distribution that maximizes the density of successful receptions (DSR) under a simple transmission strategy, where a single file is transmitted at a time throughout the network. We model file requests by a Zipf distribution with exponent @math , which results in an optimal caching distribution that is also a Zipf distribution with exponent @math , which is related to @math through a simple expression involving the path loss exponent. We solve the optimal content placement problem for more general demand profiles under Rayleigh, Ricean, and Nakagami small-scale fading distributions. Our results suggest that it is required to flatten the request distribution to optimize the caching performance. We also develop strategies to optimize content caching for the more general case with multiple files, and bound the DSR for that scenario.
Spatial caching for a client requesting a large file stored at caches with limited storage is studied in @cite_27 . Using a Poisson point process ( @math ) to model the user locations, optimal geographic content placement and outage in wireless networks are studied in @cite_28 . The probability that the typical user finds the content in one of its nearby base stations ( @math )s is optimized using the distribution of the number of @math s simultaneously covering a user @cite_17 . The performance of randomized caching in @math networks from a @math maximization perspective has not been studied before; we address this gap in this paper.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_17" ], "mid": [ "1481341908", "1550774584", "2049496443" ], "abstract": [ "In this work we consider the problem of an optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes. Specifically, for the typical user requesting some particular content and whose popularity follows a given law (e.g. Zipf), we calculate the probability of finding the content cached in one of the base stations. Wireless coverage follows the usual signal-to-interference-and noise ratio (SINR) model, or some variants of it. We formulate and solve the problem of an optimal randomized content placement policy, to maximize the user's hit probability. The result dictates that it is not always optimal to follow the standard policy “cache the most popular content, everywhere”. In fact, our numerical results regarding three different coverage scenarios, show that the optimal policy significantly increases the chances of hit under high-coverage regime, i.e., when the probabilities of coverage by more than just one station are high enough.", "We consider wireless caches located in the plane according to general point process and specialize the results for the homogeneous Poisson process. A large data file is stored at the caches, which have limited storage capabilities. Hence, they can only store parts of the data. Clients can contact the caches to retrieve the data. We compare the expected cost of obtaining the complete data under uncoded as well as coded data allocation strategies. It is shown that for the general class of cost measures where the cost of retrieving data is increasing with the distance between client and caches, coded allocation outperforms uncoded allocation. The improvement offered by coding is quantified for two more specific classes of performance measures. 
Finally, our results are validated by computing the costs of the allocation strategies for the case that caches coincide with currently deployed mobile base stations.", "We give numerically tractable, explicit integral expressions for the distribution of the signal-to-interference-and-noise-ratio (SINR) experienced by a typical user in the downlink channel from the k-th strongest base stations of a cellular network modelled by Poisson point process on the plane. Our signal propagation-loss model comprises of a power-law path-loss function with arbitrarily distributed shadowing, independent across all base stations, with and without Rayleigh fading. Our results are valid in the whole domain of SINR, in particular for SINR <; 1, where one observes multiple coverage. In this latter aspect our paper complements previous studies reported in [1]." ] }
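The placement optimization described in @cite_28 , where a randomized caching distribution is chosen to maximize the typical user's hit probability, can be illustrated numerically. The sketch below is not taken from the cited paper; the library size, Zipf exponent, and number of covering caches `m` are hypothetical choices:

```python
# Sketch (illustrative assumptions, not the cited paper's model): a user is
# covered by m caches, each independently storing one file drawn from a
# caching distribution b; requests follow a Zipf popularity distribution p.
def zipf(n, gamma):
    """Zipf weights 1/k^gamma for k = 1..n, normalized to sum to 1."""
    w = [1.0 / (k ** gamma) for k in range(1, n + 1)]
    s = sum(w)
    return [x / s for x in w]

def hit_probability(p, b, m):
    """P(request found locally) = sum_f p_f * (1 - (1 - b_f)^m)."""
    return sum(pf * (1.0 - (1.0 - bf) ** m) for pf, bf in zip(p, b))

p = zipf(5, 1.0)                    # request popularity over 5 files
b_det = [1.0, 0.0, 0.0, 0.0, 0.0]   # "cache the most popular content everywhere"
b_rnd = p                           # randomized placement proportional to popularity

print(hit_probability(p, b_det, 3))  # ~0.438
print(hit_probability(p, b_rnd, 3))  # ~0.583
```

With three covering caches, the proportional randomized placement already beats the deterministic "most popular everywhere" policy, in line with the cited observation that the standard policy is not always optimal.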
1608.07857
2517992291
Device-to-device ( @math ) communication is a promising approach to optimize the utilization of air interface resources in 5G networks, since it allows decentralized opportunistic short-range communication. For @math to be useful, mobile nodes must possess content that other mobiles want. Thus, intelligent caching techniques are essential for @math . In this paper, we use results from stochastic geometry to derive the probability of successful content delivery in the presence of interference and noise. We employ a general transmission strategy, where multiple files are cached at the users and different files can be transmitted simultaneously throughout the network. We then formulate an optimization problem, and find the caching distribution that maximizes the density of successful receptions (DSR) under a simple transmission strategy, where a single file is transmitted at a time throughout the network. We model file requests by a Zipf distribution with exponent @math , which results in an optimal caching distribution that is also a Zipf distribution with exponent @math , which is related to @math through a simple expression involving the path loss exponent. We solve the optimal content placement problem for more general demand profiles under Rayleigh, Ricean, and Nakagami small-scale fading distributions. Our results suggest that it is required to flatten the request distribution to optimize the caching performance. We also develop strategies to optimize content caching for the more general case with multiple files, and bound the DSR for that scenario.
Although the work conducted in @cite_8 @cite_29 focused on the optimal caching distribution to maximize the average number of connections, the system model was overly simplistic. They assumed a cellular network where each @math serves the users in a square cell. The cell is divided into small clusters, and @math communications are allowed only within each cluster. To avoid intra-cluster interference, only one transmitter-receiver pair is allowed per cluster, and it is assumed to introduce no interference to other clusters. In this paper, we aim to overcome these serious limitations using a more realistic @math network model that captures simultaneous transmissions with no restriction on the number of @math pairs.
{ "cite_N": [ "@cite_29", "@cite_8" ], "mid": [ "1964721119", "2054581561" ], "abstract": [ "We analyze a novel architecture for caching popular video content to enable wireless device-to-device (D2D) collaboration. We focus on the asymptotic scaling characteristics and show how they depend on video content popularity statistics. We identify a fundamental conflict between collaboration distance and interference and show how to optimize the transmission power to maximize frequency reuse. Our main result is a closed form expression of the optimal collaboration distance as a function of the model parameters. Under the common assumption of a Zipf distribution for content reuse, we show that if the Zipf exponent is greater than 1, it is possible to have a number of D2D interference-free collaboration pairs that scales linearly in the number of nodes. If the Zipf exponent is smaller than 1, we identify the best possible scaling in the number of D2D collaborating links. Surprisingly, a very simple distributed caching policy achieves the optimal scaling behavior.", "Video is the main driver for the inexorable increase in wireless data traffic. In this paper we analyze a new architecture in which device-to-device (D2D) communications is used to drastically increase the capacity of cellular networks for video transmission. Users cache popular video files and - after receiving requests from other users - serve these requests via D2D localized transmissions; the short range of the D2D transmission enables frequency reuse within the cell. We analyze the scaling behavior of the throughput with the number of devices per cell. The user content request statistics, as well as the caching distribution, are modeled by a Zipf distribution with parameters γ r and γ c , respectively. For the practically important case γ r c > 1, we derive a closed form expression for the scaling behavior of the number of D2D links that coexist without interference. 
Our analysis relies on a novel Poisson approximation result for wireless networks obtained through the Chen-Stein Method." ] }
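The clustered model described in the paragraph above can be sketched with a small Monte Carlo simulation. This is an illustrative simplification, not the analysis of @cite_8 @cite_29 ; the cluster size, library size, and Zipf exponents are hypothetical parameters:

```python
import random

def zipf(n, gamma):
    """Zipf weights 1/k^gamma for k = 1..n, normalized to sum to 1."""
    w = [1.0 / (k ** gamma) for k in range(1, n + 1)]
    s = sum(w)
    return [x / s for x in w]

def cluster_active_prob(n_users, n_files, g_request, g_cache, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that a cluster of n_users can
    schedule one D2D pair, i.e. some user's requested file is cached by a
    *different* user of the same cluster."""
    rng = random.Random(seed)
    p_req = zipf(n_files, g_request)
    p_cache = zipf(n_files, g_cache)
    files = list(range(n_files))
    hits = 0
    for _ in range(trials):
        cache = [rng.choices(files, weights=p_cache)[0] for _ in range(n_users)]
        for u in range(n_users):
            want = rng.choices(files, weights=p_req)[0]
            if any(cache[v] == want for v in range(n_users) if v != u):
                hits += 1
                break  # one active pair per cluster suffices
    return hits / trials
```

Larger clusters make it more likely that at least one interference-free D2D pair exists, which is the quantity whose scaling the cited works characterize in closed form.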
1608.07710
2963251075
Abstract Label ranking aims to learn a mapping from instances to rankings over a finite number of predefined labels. Random forest is a powerful and one of the most successful general-purpose machine learning algorithms of modern times. In this paper, we present a powerful random forest label ranking method which uses random decision trees to retrieve nearest neighbors. We have developed a novel two-step rank aggregation strategy to effectively aggregate neighboring rankings discovered by the random forest into a final predicted ranking. Compared with existing methods, the new random forest method has many advantages including its intrinsically scalable tree data structure, highly parallel-able computational architecture and much superior performance. We present extensive experimental results to demonstrate that our new method achieves the highly competitive performance compared with state-of-the-art methods for datasets with complete ranking and datasets with only partial ranking information.
Due to the practical significance, label ranking has attracted increasing attention in the recent machine learning literature, and a large number of methods have been proposed or adapted for label ranking @cite_2 @cite_8 @cite_9 @cite_6 @cite_13 @cite_17 @cite_27 @cite_30 @cite_21 @cite_25 @cite_12 . An overview of label ranking algorithms can be found in @cite_14 @cite_18 . Existing label ranking methods can be mainly divided into three categories.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_8", "@cite_9", "@cite_21", "@cite_6", "@cite_27", "@cite_2", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2062051372", "2150398422", "1932717476", "2102705755", "2107853397", "2605228598", "1526445072", "2264170626", "2118353660", "2085247959", "2508745106", "2783218948", "1256219768" ], "abstract": [ "Label Ranking (LR) problems are becoming increasingly important in Machine Learning. While there has been a significant amount of work on the development of learning algorithms for LR in recent years, there are not many pre-processing methods for LR. Some methods, like Naive Bayes for LR and APRIORI-LR, cannot handle real-valued data directly. Conventional discretization methods used in classification are not suitable for LR problems, due to the different target variable. In this work, we make an extensive analysis of the existing methods using simple approaches. We also propose a new method called EDiRa (Entropy-based Discretization for Ranking) for the discretization of ranking data. We illustrate the advantages of the method using synthetic data and also on several benchmark datasets. The results clearly indicate that the discretization is performing as expected and also improves the results and efficiency of the learning algorithms.", "The problem of learning label rankings is receiving increasing attention from machine learning and data mining community. Its goal is to learn a mapping from instances to rankings over a finite number of labels. In this paper, we devote to giving an overview of the state-of-the-art in the area of label ranking, and providing a basic taxonomy of the label ranking algorithms. Specifically, we classify these label ranking algorithms into four categories, namely decomposition methods, probabilistic methods, similarity-based methods, and other methods. We pay particular attention to the latest advances in each. 
Also, we discuss their strengths and weaknesses, and highlight some interesting challenges that remain to be solved.", "Label ranking is a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes several supervised learning problems, such as multiclass prediction, multilabel classification, and hierarchical classification. Unsurprisingly, there exists a plethora of label ranking algorithms in the literature due, in part, to this versatile nature of the problem. In this paper, we survey these algorithms.", "Preference learning is an emerging topic that appears in different guises in the recent literature. This work focuses on a particular learning scenario called label ranking, where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a mapping, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data using a natural extension of pairwise classification. A ranking is then derived from the preference relation thus obtained by means of a ranking procedure, whereby different ranking methods can be used for minimizing different loss functions. In particular, we show that a simple (weighted) voting strategy minimizes risk with respect to the well-known Spearman rank correlation. We compare RPC to existing label ranking methods, which are based on scoring individual labels instead of comparing pairs of labels. Both empirically and theoretically, it is shown that RPC is superior in terms of computational efficiency, and at least competitive in terms of accuracy.", "The label ranking problem consists of learning a model that maps instances to total orders over a finite set of predefined labels. This paper introduces new methods for label ranking that complement and improve upon existing approaches. 
More specifically, we propose extensions of two methods that have been used extensively for classification and regression so far, namely instance-based learning and decision tree induction. The unifying element of the two methods is a procedure for locally estimating predictive probability models for label rankings.", "This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundacao para a Ciencia e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID EEA 50014 2013.", "This paper introduces two new methods for label ranking based on a probabilistic model of ranking data, called the Plackett-Luce model. The idea of the first method is to use the PL model to fit locally constant probability models in the context of instance-based learning. As opposed to this, the second method estimates a global model in which the PL parameters are represented as functions of the instance. Comparing our methods with previous approaches to label ranking, we find that they offer a number of advantages. Experimentally, we moreover show that they are highly competitive to state-of-the-art methods in terms of predictive accuracy, especially in the case of training data with incomplete ranking information.", "In standard supervised learning, each training instance is associated with an outcome from a corresponding output space (e.g., a class label in classification or a real number in regression). In the superset learning problem, the outcome is only characterized in terms of a superset—a subset of candidates that covers the true outcome but may also contain additional ones. Thus, superset learning can be seen as a specific type of weakly supervised learning, in which training examples are ambiguous. 
In this paper, we introduce a generic approach to superset learning, which is motivated by the idea of performing model identification and \"data disambiguation\" simultaneously. This idea is realized by means of a generalized risk minimization approach, using an extended loss function that compares precise predictions with set-valued observations. As an illustration, we instantiate our meta learning technique for the problem of label ranking, in which the output space consists of all permutations of a fixed set of items. The label ranking method thus obtained is compared to existing approaches tackling the same problem.", "We discuss the problem of learning to rank labels from a real valued feedback associated with each label. We cast the feedback as a preferences graph where the nodes of the graph are the labels and edges express preferences over labels. We tackle the learning problem by defining a loss function for comparing a predicted graph with a feedback graph. This loss is materialized by decomposing the feedback graph into bipartite sub-graphs. We then adopt the maximum-margin framework which leads to a quadratic optimization problem with linear constraints. While the size of the problem grows quadratically with the number of the nodes in the feedback graph, we derive a problem of a significantly smaller size and prove that it attains the same minimum. We then describe an efficient algorithm, called SOPOPO, for solving the reduced problem by employing a soft projection onto the polyhedron defined by a reduced set of constraints. We also describe and analyze a wrapper procedure for batch learning when multiple graphs are provided for training. We conclude with a set of experiments which show significant improvements in run time over a state of the art interior-point algorithm.", "Label ranking studies the issue of learning a model that maps instances to rankings over a finite set of predefined labels. 
In order to relieve the cost of memory and time during training and prediction, we propose a novel approach for label ranking problem based on Gaussian mixture model in this paper. The key idea of the approach is to divide the label ranking training data into multiple clusters using clustering algorithm, and each cluster is described by a Gaussian prototype. Then, a Gaussian mixture model is introduced to model the mapping from instances to rankings. Finally, a predicted ranking is obtained with maximum posterior probability. In the experiments, we compare our method with two state-of-the-art label ranking approaches. Experimental results show that our method is fully competitive in terms of predictive accuracy. Moreover, the proposed method also provides a measure of the reliability of the corresponding predicted ranking.", "Abstract Preference learning is the branch of machine learning in charge of inducing preference models from data. In this paper we focus on the task known as label ranking problem , whose goal is to predict a ranking among the different labels the class variable can take. Our contribution is twofold: (i) taking as basis the tree-based algorithm LRT described in [1], we design weaker tree-based models which can be learnt more efficiently; and (ii) we show that bagging these weak learners improves not only the LRT algorithm, but also the state-of-the-art one (IBLR [1]). Furthermore, the bagging algorithm which takes the weak LRT-based models as base classifiers is competitive in time with respect to LRT and IBLR methods. To check the goodness of our proposal, we conduct a broad experimental study over the standard benchmark used in the label ranking problem literature.", "Label ranking is a specific type of preference learning problem, namely the problem of learning a model that maps instances to rankings over a finite set of predefined alternatives. 
Like in conventional classification, these alternatives are identified by their name or label while not being characterized in terms of any properties or features that could be potentially useful for learning. In this paper, we consider a generalization of the label ranking problem that we call dyad ranking. In dyad ranking, not only the instances but also the alternatives are represented in terms of attributes. For learning in the setting of dyad ranking, we propose an extension of an existing label ranking method based on the Plackett---Luce model, a statistical model for rank data. This model is combined with a suitable feature representation of dyads. Concretely, we propose a method based on a bilinear extension, where the representation is given in terms of a Kronecker product, as well as a method based on neural networks, which allows for learning a (highly nonlinear) joint feature representation. The usefulness of the additional information provided by the feature description of alternatives is shown in several experimental studies. Finally, we propose a method for the visualization of dyad rankings, which is based on the technique of multidimensional unfolding.", "Label ranking is a specific type of preference learning problem, namely the problem of learning a model that maps instances to rankings over a finite set of predefined alternatives. These alternatives are identified by their name or label while not being characterized in terms of any properties or features that could be potentially useful for learning. In this paper, we consider a generalization of the label ranking problem that we call dyad ranking. In dyad ranking, not only the instances but also the alternatives are represented in terms of attributes. For learning in the setting of dyad ranking, we propose an extension of an existing label ranking method based on the Plackett-Luce model, a statistical model for rank data. 
Moreover, we present first experimental results confirming the usefulness of the additional information provided by the feature description of alternatives." ] }
1608.07710
2963251075
Abstract Label ranking aims to learn a mapping from instances to rankings over a finite number of predefined labels. Random forest is a powerful and one of the most successful general-purpose machine learning algorithms of modern times. In this paper, we present a powerful random forest label ranking method which uses random decision trees to retrieve nearest neighbors. We have developed a novel two-step rank aggregation strategy to effectively aggregate neighboring rankings discovered by the random forest into a final predicted ranking. Compared with existing methods, the new random forest method has many advantages including its intrinsically scalable tree data structure, highly parallel-able computational architecture and much superior performance. We present extensive experimental results to demonstrate that our new method achieves the highly competitive performance compared with state-of-the-art methods for datasets with complete ranking and datasets with only partial ranking information.
The first category consists of reduction approaches, which transform the label ranking problem into several simpler binary classification problems; the solutions of these classification problems are then combined into a predicted ranking. Learning pairwise preferences and learning utility functions are the two most widely used schemes among the reduction approaches. For example, ranking by pairwise comparison (RPC) learns a binary model for each pair of labels, and the predictions of these binary models are then aggregated into a ranking @cite_8 ; in contrast, constraint classification (CC) and log-linear models for label ranking (LL) learn a linear utility function for each individual label instead of preference relations for pairs of labels @cite_29 @cite_31 .
{ "cite_N": [ "@cite_29", "@cite_31", "@cite_8" ], "mid": [ "2121692343", "", "2102705755" ], "abstract": [ "The constraint classification framework captures many flavors of multiclass classification including winner-take-all multiclass classification, multilabel classification and ranking. We present a meta-algorithm for learning in this framework that learns via a single linear classifier in high dimension. We discuss distribution independent as well as margin-based generalization bounds and present empirical and theoretical evidence showing that constraint classification benefits over existing methods of multiclass classification.", "", "Preference learning is an emerging topic that appears in different guises in the recent literature. This work focuses on a particular learning scenario called label ranking, where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a mapping, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data using a natural extension of pairwise classification. A ranking is then derived from the preference relation thus obtained by means of a ranking procedure, whereby different ranking methods can be used for minimizing different loss functions. In particular, we show that a simple (weighted) voting strategy minimizes risk with respect to the well-known Spearman rank correlation. We compare RPC to existing label ranking methods, which are based on scoring individual labels instead of comparing pairs of labels. Both empirically and theoretically, it is shown that RPC is superior in terms of computational efficiency, and at least competitive in terms of accuracy." ] }
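The aggregation step of RPC can be sketched as follows. Assuming the pairwise binary models have already produced preference probabilities (the matrix below is hypothetical), simple weighted voting, which the RPC paper shows minimizes risk with respect to Spearman rank correlation, yields the predicted ranking:

```python
def rpc_rank(pref):
    """RPC-style weighted voting. pref[i][j] in [0, 1] is the predicted
    probability that label i is preferred over label j (pref[i][j] and
    pref[j][i] are assumed to sum to 1). Each label's score is the sum of
    its votes; labels are returned from most to least preferred."""
    n = len(pref)
    score = [sum(pref[i][j] for j in range(n) if j != i) for i in range(n)]
    return sorted(range(n), key=lambda i: -score[i])

# hypothetical pairwise predictions for 3 labels, encoding 1 > 0 > 2
pref = [
    [0.0, 0.3, 0.8],
    [0.7, 0.0, 0.9],
    [0.2, 0.1, 0.0],
]
print(rpc_rank(pref))  # [1, 0, 2]
```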
1608.07710
2963251075
Abstract Label ranking aims to learn a mapping from instances to rankings over a finite number of predefined labels. Random forest is a powerful and one of the most successful general-purpose machine learning algorithms of modern times. In this paper, we present a powerful random forest label ranking method which uses random decision trees to retrieve nearest neighbors. We have developed a novel two-step rank aggregation strategy to effectively aggregate neighboring rankings discovered by the random forest into a final predicted ranking. Compared with existing methods, the new random forest method has many advantages including its intrinsically scalable tree data structure, highly parallel-able computational architecture and much superior performance. We present extensive experimental results to demonstrate that our new method achieves the highly competitive performance compared with state-of-the-art methods for datasets with complete ranking and datasets with only partial ranking information.
The second category consists of probabilistic approaches, which represent label ranking using statistical models for ranking data, i.e., parametrized probability distributions over the class of all rankings. For example, instance-based (IB) learning algorithms built on the Mallows (M) and Plackett-Luce (PL) models have been developed @cite_9 @cite_6 , and a label ranking method based on Gaussian mixture models has been proposed @cite_13 .
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_6" ], "mid": [ "2107853397", "2085247959", "1526445072" ], "abstract": [ "The label ranking problem consists of learning a model that maps instances to total orders over a finite set of predefined labels. This paper introduces new methods for label ranking that complement and improve upon existing approaches. More specifically, we propose extensions of two methods that have been used extensively for classification and regression so far, namely instance-based learning and decision tree induction. The unifying element of the two methods is a procedure for locally estimating predictive probability models for label rankings.", "Label ranking studies the issue of learning a model that maps instances to rankings over a finite set of predefined labels. In order to relieve the cost of memory and time during training and prediction, we propose a novel approach for label ranking problem based on Gaussian mixture model in this paper. The key idea of the approach is to divide the label ranking training data into multiple clusters using clustering algorithm, and each cluster is described by a Gaussian prototype. Then, a Gaussian mixture model is introduced to model the mapping from instances to rankings. Finally, a predicted ranking is obtained with maximum posterior probability. In the experiments, we compare our method with two state-of-the-art label ranking approaches. Experimental results show that our method is fully competitive in terms of predictive accuracy. Moreover, the proposed method also provides a measure of the reliability of the corresponding predicted ranking.", "This paper introduces two new methods for label ranking based on a probabilistic model of ranking data, called the Plackett-Luce model. The idea of the first method is to use the PL model to fit locally constant probability models in the context of instance-based learning. 
As opposed to this, the second method estimates a global model in which the PL parameters are represented as functions of the instance. Comparing our methods with previous approaches to label ranking, we find that they offer a number of advantages. Experimentally, we moreover show that they are highly competitive to state-of-the-art methods in terms of predictive accuracy, especially in the case of training data with incomplete ranking information." ] }
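The Plackett-Luce model underlying IB-PL assigns each ranking a probability built from positive per-label skill parameters. Below is a minimal sketch of the model itself; the parameter values are hypothetical, and the minorization-maximization fitting procedure used in the cited work is omitted:

```python
from itertools import permutations

def pl_probability(ranking, v):
    """Plackett-Luce probability of a ranking (most-preferred label first)
    given positive skill parameters v: at each position the next label is
    chosen with probability proportional to its skill among the remaining."""
    prob = 1.0
    remaining = sum(v[label] for label in ranking)
    for label in ranking:
        prob *= v[label] / remaining
        remaining -= v[label]
    return prob

v = [3.0, 1.0, 0.5]  # hypothetical skill parameters for 3 labels
total = sum(pl_probability(list(r), v) for r in permutations(range(3)))
print(total)  # probabilities over all 6 rankings sum to 1
```

The modal ranking orders labels by decreasing skill, which is why PL parameters double as a utility-style scoring of the labels.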
1608.07710
2963251075
Abstract Label ranking aims to learn a mapping from instances to rankings over a finite number of predefined labels. Random forest is a powerful and one of the most successful general-purpose machine learning algorithms of modern times. In this paper, we present a powerful random forest label ranking method which uses random decision trees to retrieve nearest neighbors. We have developed a novel two-step rank aggregation strategy to effectively aggregate neighboring rankings discovered by the random forest into a final predicted ranking. Compared with existing methods, the new random forest method has many advantages including its intrinsically scalable tree data structure, highly parallel-able computational architecture and much superior performance. We present extensive experimental results to demonstrate that our new method achieves the highly competitive performance compared with state-of-the-art methods for datasets with complete ranking and datasets with only partial ranking information.
Both reduction approaches and probabilistic approaches have shown good performance in experimental studies, but they also come with some disadvantages. For reduction approaches, theoretical assumptions on the sought "ranking-valued" mapping, which may serve as a proper learning bias, may not be easily translated into corresponding assumptions for the classification problems. Moreover, it is often not clear that minimizing the loss function on the binary problems leads to maximizing the performance of the label ranking model in terms of the desired loss function on rankings @cite_11 . For probabilistic approaches, their success also does not come for free, but at a large cost in both memory and time. For example, the instance-based approaches involve a costly nearest-neighbour search, and the aggregation of neighboring rankings is also slow, as it requires complex optimization procedures such as the approximate expectation maximization in IB-M and the minorization maximization in IB-PL @cite_28 . Both IB-M and IB-PL are lazy learners, with almost no cost at the training phase but a high cost at the prediction phase, which can be prohibitive in resource-constrained applications.
{ "cite_N": [ "@cite_28", "@cite_11" ], "mid": [ "2149166361", "2114028889" ], "abstract": [ "Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label-set, however we do not assume that this list is either complete or consistent. This enables us to accommodate a variety of ranking problems. In contrast to the general form of the supervision, our goal is to learn a ranking function that induces a total order over the entire set of labels. Special cases of our setting are multilabel categorization and hierarchical classification. We present a general boosting-based learning algorithm for the label ranking problem and prove a lower bound on the progress of each boosting iteration. The applicability of our approach is demonstrated with a set of experiments on a large-scale text corpus.", "We present a theoretical analysis of supervised ranking, providing necessary and sufficient conditions for the asymptotic consistency of algorithms based on minimizing a surrogate loss function. We show that many commonly used surrogate losses are inconsistent; surprisingly, we show inconsistency even in low-noise settings. We present a new value-regularized linear loss, establish its consistency under reasonable assumptions on noise, and show that it outperforms conventional ranking losses in a collaborative filtering experiment." ] }
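As a concrete illustration of rank aggregation, the sketch below uses Borda counting, a cheap alternative to the model-based EM/MM procedures mentioned above; as noted in the RPC abstract, this simple voting strategy minimizes risk with respect to Spearman rank correlation. This is an illustrative simplification, not the cited IB-M/IB-PL algorithms:

```python
def borda_aggregate(rankings):
    """Aggregate complete rankings (each a list of labels, best first) by
    Borda count: a label at position k in a ranking over n labels receives
    n - 1 - k points; labels are ordered by total points."""
    n = len(rankings[0])
    points = [0] * n
    for r in rankings:
        for pos, label in enumerate(r):
            points[label] += n - 1 - pos
    return sorted(range(n), key=lambda i: -points[i])

# hypothetical rankings retrieved from three neighbors
neighbors = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]
print(borda_aggregate(neighbors))  # [0, 1, 2]
```

Each aggregation is linear in the number of neighbors and labels, which highlights the cost gap relative to the iterative EM/MM fits.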
1608.07710
2963251075
Abstract Label ranking aims to learn a mapping from instances to rankings over a finite number of predefined labels. Random forest is a powerful and one of the most successful general-purpose machine learning algorithms of modern times. In this paper, we present a powerful random forest label ranking method which uses random decision trees to retrieve nearest neighbors. We have developed a novel two-step rank aggregation strategy to effectively aggregate neighboring rankings discovered by the random forest into a final predicted ranking. Compared with existing methods, the new random forest method has many advantages including its intrinsically scalable tree data structure, highly parallel-able computational architecture and much superior performance. We present extensive experimental results to demonstrate that our new method achieves the highly competitive performance compared with state-of-the-art methods for datasets with complete ranking and datasets with only partial ranking information.
Besides the reduction and probabilistic approaches, tree-based approaches are also popular in label ranking. For example, the first adaptation of the decision tree algorithm to label ranking, called the label ranking tree (LRT), was proposed in @cite_9 . A new version of decision trees for label ranking, called the entropy-based ranking tree (ERT), and a label ranking forest using ERT as the base learner were developed in @cite_21 . Recently, a bagging algorithm that takes weak LRT-based models as base classifiers was proposed; experimental results show that bagging these weak learners improves not only the LRT algorithm but also the instance-based algorithms @cite_25 . Our proposed random forest for label ranking (LR-RF) also falls into this category. In Section , we experimentally compare LR-RF with state-of-the-art algorithms from these categories.
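The idea of using random trees to retrieve neighbors, as described for the random forest label ranking method above, can be sketched as follows. This is a simplified illustration under stated assumptions, not the LR-RF algorithm itself: each "tree" here is a flat set of random (feature, threshold) splits used as a hash, rather than a true recursive tree, and instances sharing a leaf signature with the query are counted as neighbors.

```python
import random

def build_random_tree(X, depth=3, seed=0):
    """Return a function mapping an instance to a leaf signature.

    Splits are random (feature, threshold) pairs drawn from the data
    range; the leaf signature is the tuple of split outcomes. This is a
    flat random-split hash, a simplification of a real decision tree.
    """
    rng = random.Random(seed)
    d = len(X[0])
    splits = []
    for _ in range(depth):
        f = rng.randrange(d)
        lo = min(x[f] for x in X)
        hi = max(x[f] for x in X)
        splits.append((f, rng.uniform(lo, hi)))
    return lambda x: tuple(x[f] <= t for f, t in splits)

def forest_neighbors(X, query, n_trees=10):
    """Rank training indices by how often they share a leaf with the query."""
    counts = [0] * len(X)
    for t in range(n_trees):
        leaf = build_random_tree(X, seed=t)
        q = leaf(query)
        for i, x in enumerate(X):
            if leaf(x) == q:
                counts[i] += 1
    return sorted(range(len(X)), key=lambda i: -counts[i])

X = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8)]
print(forest_neighbors(X, (0.12, 0.22)))  # nearest neighbors first
```

In a full label ranking pipeline, the rankings of the top-ranked neighbors would then be aggregated into a predicted ranking, which is where a rank aggregation step comes in.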
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_25" ], "mid": [ "2107853397", "2605228598", "2508745106" ], "abstract": [ "The label ranking problem consists of learning a model that maps instances to total orders over a finite set of predefined labels. This paper introduces new methods for label ranking that complement and improve upon existing approaches. More specifically, we propose extensions of two methods that have been used extensively for classification and regression so far, namely instance-based learning and decision tree induction. The unifying element of the two methods is a procedure for locally estimating predictive probability models for label rankings.", "This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundacao para a Ciencia e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID EEA 50014 2013.", "Abstract Preference learning is the branch of machine learning in charge of inducing preference models from data. In this paper we focus on the task known as label ranking problem , whose goal is to predict a ranking among the different labels the class variable can take. Our contribution is twofold: (i) taking as basis the tree-based algorithm LRT described in [1], we design weaker tree-based models which can be learnt more efficiently; and (ii) we show that bagging these weak learners improves not only the LRT algorithm, but also the state-of-the-art one (IBLR [1]). Furthermore, the bagging algorithm which takes the weak LRT-based models as base classifiers is competitive in time with respect to LRT and IBLR methods. To check the goodness of our proposal, we conduct a broad experimental study over the standard benchmark used in the label ranking problem literature." ] }
1608.07728
2510078559
In this paper, we derive key-rate expressions for different quantum key distribution protocols. Our key-rate equations utilize multiple channel statistics, including those gathered from mismatched measurement bases - i.e., when Alice and Bob choose incompatible bases. In particular, we will consider an Extended B92 and a two-way semi-quantum protocol. For both these protocols, we demonstrate that their tolerance to noise is higher than previously thought - in fact, we will show the semi-quantum protocol can actually tolerate the same noise level as the fully quantum BB84 protocol. Along the way, we will also consider an optimal QKD protocol for various quantum channels. Finally, all the key-rate expressions which we derive in this paper are applicable to any arbitrary, not necessarily symmetric, quantum channel.
We are not the first to consider the use of mismatched measurement outcomes for quantum key distribution, nor the first to show that these statistics can lead to improved key rates. Indeed, in the 1990s, @cite_3 showed that mismatched measurement results may be used to better detect an eavesdropper mounting an intercept-and-resend attack.
{ "cite_N": [ "@cite_3" ], "mid": [ "2034040784" ], "abstract": [ "Abstract We show that rejected-data protocols for quantum cryptography are secure against generalized intercept resend strategies of an eavesdropper provided that the legitimate users of the communication channel use at least three bases. We discuss the connection between this result and the recently-developed protocol based on violations of a suitably-constructed Bell-type inequality for single particles. We also give a new estimate of the probability that an eavesdropper remains undetected under the original protocol and thereby show that the optimal strategies available to an eavesdropper are further limited to those which randomize the errors between the sub-ensembles of data. This result also has implications for the way in which the legitimate users of the channel choose their test data." ] }
1608.07728
2510078559
In this paper, we derive key-rate expressions for different quantum key distribution protocols. Our key-rate equations utilize multiple channel statistics, including those gathered from mismatched measurement bases - i.e., when Alice and Bob choose incompatible bases. In particular, we will consider an Extended B92 and a two-way semi-quantum protocol. For both these protocols, we demonstrate that their tolerance to noise is higher than previously thought - in fact, we will show the semi-quantum protocol can actually tolerate the same noise level as the fully quantum BB84 protocol. Along the way, we will also consider an optimal QKD protocol for various quantum channels. Finally, all the key-rate expressions which we derive in this paper are applicable to any arbitrary, not necessarily symmetric, quantum channel.
In @cite_17 , mismatched measurement bases were applied to the four-state and six-state BB84 protocols @cite_7 . This method was shown to improve the key rate for certain quantum channels, namely the amplitude damping channel and the rotation channel; expressions for non-symmetric channels were also derived. In @cite_19 , mismatched measurement results were actually used to distill the raw key (as opposed to being used only for channel tomography): a modified BB84 protocol was adopted, and this method was shown to improve the key rate for certain channels.
{ "cite_N": [ "@cite_19", "@cite_7", "@cite_17" ], "mid": [ "2109309079", "", "1988304269" ], "abstract": [ "We consider the mismatched measurements in the BB84 quantum key distribution protocol, in which measuring bases are different from transmitting bases. We give a lower bound on the amount of a secret key that can be extracted from the mismatched measurements. Our lower bound shows that we can extract a secret key from the mismatched measurements with certain quantum channels, such as the channel over which the Hadamard matrix is applied to each qubit with high probability. Moreover, the entropic uncertainty principle implies that one cannot extract the secret key from both matched measurements and mismatched ones simultaneously, when we use the standard information reconciliation and privacy amplification procedure.", "", "We construct a practically implementable classical processing for the Bennett-Brassard 1984 (BB84) protocol and the six-state protocol that fully utilizes the accurate channel estimation method, which is also known as the quantum tomography. Our proposed processing yields at least as high a key rate as the standard processing by Shor and Preskill. We show two examples of quantum channels over which the key rate of our proposed processing is strictly higher than the standard processing. In the second example, the BB84 protocol with our proposed processing yields a positive key rate even though the so-called error rate is higher than the 25 limit." ] }
1608.07728
2510078559
In this paper, we derive key-rate expressions for different quantum key distribution protocols. Our key-rate equations utilize multiple channel statistics, including those gathered from mismatched measurement bases - i.e., when Alice and Bob choose incompatible bases. In particular, we will consider an Extended B92 and a two-way semi-quantum protocol. For both these protocols, we demonstrate that their tolerance to noise is higher than previously thought - in fact, we will show the semi-quantum protocol can actually tolerate the same noise level as the fully quantum BB84 protocol. Along the way, we will also consider an optimal QKD protocol for various quantum channels. Finally, all the key-rate expressions which we derive in this paper are applicable to any arbitrary, not necessarily symmetric, quantum channel.
In @cite_11 , a modified two-basis BB84 protocol was developed in which the first basis was the standard computational @math basis ( @math ), while the second consisted of the states @math and @math , where @math . The authors showed that for small @math , mismatched measurement bases can still be used to obtain good channel estimates, while also allowing @math and @math to distill their key from mismatched measurement bases (since, for small @math , their measurement results will be nearly correlated even when their bases differ).
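The near-correlation of mismatched outcomes for a small basis angle can be illustrated with a standard parameterization of a second basis tilted by an angle from the computational basis. This parameterization is an assumption for illustration, since the exact states of the cited protocol are abbreviated above:

```latex
% A second basis at angle \theta to the computational basis (illustrative):
\[
  |\bar{0}\rangle = \cos\theta\,|0\rangle + \sin\theta\,|1\rangle, \qquad
  |\bar{1}\rangle = \sin\theta\,|0\rangle - \cos\theta\,|1\rangle .
\]
% As \theta \to 0, the overlap |\langle 0 | \bar{0} \rangle|^2 = \cos^2\theta \to 1,
% so a measurement in the "wrong" basis agrees with the prepared bit with
% probability approaching 1, which is why mismatched results can still
% contribute to the raw key.
```

For intermediate angles, the same overlap quantifies the trade-off the cited work exploits between key contribution and channel-estimation quality.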
{ "cite_N": [ "@cite_11" ], "mid": [ "1982876469" ], "abstract": [ "We consider a modified version of the BB84 quantum key distribution protocol in which the angle between two different bases is less than π 4. We show that the channel parameter estimate becomes the same as the original protocol with sufficiently transmitted qubits. On the other hand, the statistical correlation between bits transmitted in one basis and those received in the other basis becomes stronger as the angle between two bases becomes narrower. If the angle is very small, the statistical correlation between bits transmitted in one basis and those received in the other basis is as strong as those received in the same basis as the transmitting basis, which means that the modified protocol can generate almost twice as long secret key as in the original protocol, provided that Alice and Bob choose two different bases with almost the same probability. We also point out that the reverse reconciliation often gives a different amount of secret key to the direct reconciliation over Pauli channels with our modified protocol." ] }
1608.07728
2510078559
In this paper, we derive key-rate expressions for different quantum key distribution protocols. Our key-rate equations utilize multiple channel statistics, including those gathered from mismatched measurement bases - i.e., when Alice and Bob choose incompatible bases. In particular, we will consider an Extended B92 and a two-way semi-quantum protocol. For both these protocols, we demonstrate that their tolerance to noise is higher than previously thought - in fact, we will show the semi-quantum protocol can actually tolerate the same noise level as the fully quantum BB84 protocol. Along the way, we will also consider an optimal QKD protocol for various quantum channels. Finally, all the key-rate expressions which we derive in this paper are applicable to any arbitrary, not necessarily symmetric, quantum channel.
Mismatched measurements were used in @cite_2 to obtain better channel statistics for a single-state semi-quantum protocol first introduced in @cite_12 . Though single-state semi-quantum protocols utilize two-way quantum channels, they admit many simplifications which ease their security analysis. In this paper, we consider a multi-state semi-quantum protocol (which is more difficult to analyze) and show that mismatched measurements improve its key rate; indeed, our new key rate bound derived in this paper shows this semi-quantum protocol has the same noise tolerance as the fully quantum BB84 protocol.
{ "cite_N": [ "@cite_12", "@cite_2" ], "mid": [ "2050618241", "2234042337" ], "abstract": [ "In this paper, we investigate single-state, semi-quantum key distribution protocols. These are protocols whereby one party is limited to measuring only in the computational basis, while the other, though capable of measuring in both computational and Hadamard bases, is limited to preparing and sending only a single, publicly known qubit state. Such protocols rely necessarily on a two-way quantum communication channel making their security analysis difficult. However, we will show that, for single-state protocols, we need only consider a restricted attack operation by Eve. We will also describe a new single-state protocol that permits \"reflections\" to carry information and use our results concerning restricted attacks to show its robustness.", "In this paper, we provide a proof of unconditional security for a semi-quantum key distribution protocol introduced in a previous work. This particular protocol demonstrated the possibility of using X basis states to contribute to the raw key of the two users (as opposed to using only direct measurement results) even though a semi-quantum participant cannot directly manipulate such states. In this work, we provide a complete proof of security by deriving a lower bound of the protocol's key rate in the asymptotic scenario. Using this bound, we are able to find an error threshold value such that for all error rates less than this threshold, it is guaranteed that A and B may distill a secure secret key; for error rates larger than this threshold, A and B should abort. We demonstrate that this error threshold compares favorably to several fully quantum protocols. We also comment on some interesting observations about the behavior of this protocol under certain noise scenarios." ] }
1608.07728
2510078559
In this paper, we derive key-rate expressions for different quantum key distribution protocols. Our key-rate equations utilize multiple channel statistics, including those gathered from mismatched measurement bases - i.e., when Alice and Bob choose incompatible bases. In particular, we will consider an Extended B92 and a two-way semi-quantum protocol. For both these protocols, we demonstrate that their tolerance to noise is higher than previously thought - in fact, we will show the semi-quantum protocol can actually tolerate the same noise level as the fully quantum BB84 protocol. Along the way, we will also consider an optimal QKD protocol for various quantum channels. Finally, all the key-rate expressions which we derive in this paper are applicable to any arbitrary, not necessarily symmetric, quantum channel.
In @cite_0 , it was proven, using mismatched measurement bases, that the three-state BB84 protocol from @cite_10 @cite_16 has a key rate equal to that of the full four-state BB84 protocol under a symmetric attack. It was also shown that a four-state protocol using three bases has a key rate equal to that of the full six-state BB84 protocol.
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_10" ], "mid": [ "", "2151905466", "2121957834" ], "abstract": [ "", "This is a study of the security of the Coherent One-Way (COW) protocol for quantumcryptography, proposed recently as a simple and fast experimental scheme. In thezero-error regime, the eavesdropper Eve can only take advantage of the losses in thetransmission. We consider new attacks, based on unambiguous state discrimination,which perform better than the basic beam-splitting attack, but which can be detectedby a careful analysis of the detection statistics. These results stress the importance oftesting several statistical parameters in order to achieve higher rates of secret bits.", "Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states @math and @math can contribute to key generation, and the third state @math is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by founding on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result in the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. 
For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used." ] }