aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1602.05314 | 2284646714 | Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model. | Image retrieval based on local features, bags-of-visual-words (BoVWs) and inverted indices @cite_17 @cite_43 has been shown to be more accurate than global descriptors at matching buildings, but requires more space and lacks the invariance to match natural scenes or articulated objects. Most local feature based approaches therefore focus on localization within cities, either based on photos from photo sharing websites @cite_20 @cite_40 or street view @cite_46 @cite_35 @cite_34 @cite_12 @cite_48 @cite_1 @cite_5 . Skyline2GPS @cite_26 also uses street view data, but takes a unique approach that segments the skyline out of an image captured by an upward-facing camera and matches it against a 3D model of the city. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_48",
"@cite_1",
"@cite_43",
"@cite_40",
"@cite_5",
"@cite_46",
"@cite_34",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"1987488988",
"2006950721",
"2134446283",
"1883248133",
"2131846894",
"2141362318",
"2121765205",
"2155823702",
"2217115543",
"1537528663",
"2090080269",
"2128017662"
],
"abstract": [
"With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data - facade-aligned and viewpoint-aligned - and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. given by noisy GPS readings or network cells), which previous approaches often ignore. Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.",
"This paper investigates the problem of geo-localization in GPS-challenged urban canyons using only skylines. Our proposed solution takes a sequence of upward-facing omnidirectional images and coarse 3D models of cities to compute the geo-trajectory. The camera is oriented upwards to capture images of the immediate skyline, which is generally unique and serves as a fingerprint for a specific location in a city. Our goal is to estimate global position by matching skylines extracted from omni-directional images to skyline segments from coarse 3D city models. Under day-time and clear sky conditions, we propose a sky-segmentation algorithm using graph cuts for estimating the geo-location. In cases where the skyline gets affected by partial fog, night-time and occlusions from trees, we propose a shortest path algorithm that computes the location without prior sky detection. We show compelling experimental results for hundreds of images taken in New York, Boston and Tokyo under various weather and lighting conditions (daytime, foggy dawn and night-time).",
"We look at the problem of location recognition in a large image dataset using a vocabulary tree. This entails finding the location of a query image in a large dataset containing 3×10⁴ streetside images of a city. We investigate how the traditional invariant feature matching approach falls down as the size of the database grows. In particular we show that by carefully selecting the vocabulary using the most informative features, retrieval performance is significantly improved, allowing us to increase the number of database images by a factor of 10. We also introduce a generalization of the traditional vocabulary tree search algorithm which improves performance by effectively increasing the branching factor of a fixed vocabulary tree.",
"Finding an image's exact GPS location is a challenging computer vision problem that has many real-world applications. In this paper, we address the problem of finding the GPS location of images with an accuracy which is comparable to hand-held GPS devices. We leverage a structured data set of about 100,000 images built from Google Maps Street View as the reference images. We propose a localization method in which the SIFT descriptors of the detected SIFT interest points in the reference images are indexed using a tree. In order to localize a query image, the tree is queried using the detected SIFT descriptors in the query image. A novel GPS-tag-based pruning method removes the less reliable descriptors. Then, a smoothing step with an associated voting scheme is utilized; this allows each query descriptor to vote for the location its nearest neighbor belongs to, in order to accurately localize the query image. A parameter called Confidence of Localization which is based on the Kurtosis of the distribution of votes is defined to determine how reliable the localization of a particular image is. In addition, we propose a novel approach to localize groups of images accurately in a hierarchical manner. First, each image is localized individually; then, the rest of the images in the group are matched against images in the neighboring area of the found first match. The final location is determined based on the Confidence of Localization parameter. The proposed image group localization method can deal with very unclear queries which are not capable of being geolocated individually.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale\" image corpora.",
"In this paper, we present a new framework for geo-locating an image utilizing a novel multiple nearest neighbor feature matching method using Generalized Minimum Clique Graphs (GMCP). First, we extract local features (e.g., SIFT) from the query image and retrieve a number of nearest neighbors for each query feature from the reference data set. Next, we apply our GMCP-based feature matching to select a single nearest neighbor for each query feature such that all matches are globally consistent. Our approach to feature matching is based on the proposition that the first nearest neighbors are not necessarily the best choices for finding correspondences in image matching. Therefore, the proposed method considers multiple reference nearest neighbors as potential matches and selects the correct ones by enforcing consistency among their global features (e.g., GIST) using GMCP. In this context, we argue that using a robust distance function for finding the similarity between the global features is essential for the cases where the query matches multiple reference images with dissimilar global features. Towards this end, we propose a robust distance function based on the Gaussian Radial Basis Function (G-RBF). We evaluated the proposed framework on a new data set of 102k street view images; the experiments show it outperforms the state of the art by 10 percent.",
"We address the problem of large scale place-of-interest recognition in cell phone images of urban scenarios. Here, we go beyond what has been shown in earlier approaches by exploiting the nowadays often available 3D building information (e.g. from extruded floor plans) and massive street-view like image data for database creation. Exploiting vanishing points in query images and thus fully removing 3D rotation from the recognition problem then allows us to simplify the feature invariance to a pure homothetic problem, which we show leaves more discriminative power in feature descriptors than classical SIFT. We rerank visual word based document queries using a fast stratified homothetic verification that is tailored for repetitive patterns like window grids on facades and in most cases boosts the correct document to top positions if it was in the short list. Since we exploit 3D building information, the approach finally outputs the camera pose in real world coordinates ready for augmenting the cell phone image with virtual 3D information. The whole system is demonstrated to outperform traditional approaches on city scale experiments for different sources of street-view like image data and a challenging set of cell phone images.",
"We address the problem of recognizing a place depicted in a query image by using a large database of geo-tagged images at a city-scale. In particular, we discover features that are useful for recognizing a place in a data-driven manner, and use this knowledge to predict useful features in a query image prior to the geo-localization process. This allows us to achieve better performance while reducing the number of features. Also, for both learning to predict features and retrieving geo-tagged images from the database, we propose per-bundle vector of locally aggregated descriptors (PBVLAD), where each maximally stable region is described by a vector of locally aggregated descriptors (VLAD) on multiple scale-invariant features detected within the region. Experimental results show the proposed approach achieves a significant improvement over other baseline methods.",
"We seek to recognize the place depicted in a query image using a database of \"street side\" images annotated with geolocation information. This is a challenging task due to changes in scale, viewpoint and lighting between the query and the images in the database. One of the key problems in place recognition is the presence of objects such as trees or road markings, which frequently occur in the database and hence cause significant confusion between different places. As the main contribution, we show how to avoid features leading to confusion of particular places by using geotags attached to database images as a form of supervision. We develop a method for automatic detection of image-specific and spatially-localized groups of confusing features, and demonstrate that suppressing them significantly improves place recognition performance while reducing the database size. We show the method combines well with the state-of-the-art bag-of-features model including query expansion, and demonstrate place recognition that generalizes over a wide range of viewpoints and lighting conditions. Results are shown on a geotagged database of over 17K images of Paris downloaded from Google Street View.",
"Recognizing the location of a query image by matching it to an image database is an important problem in computer vision, and one for which the representation of the database is a key issue. We explore new ways for exploiting the structure of an image database by representing it as a graph, and show how the rich information embedded in such a graph can improve bag-of-words-based location recognition methods. In particular, starting from a graph based on visual connectivity, we propose a method for selecting a set of overlapping subgraphs and learning a local distance function for each subgraph using discriminative techniques. For a query image, each database image is ranked according to these local distance functions in order to place the image in the right part of the graph. In addition, we propose a probabilistic method for increasing the diversity of these ranked database images, again based on the structure of the image graph. We demonstrate that our methods improve performance over standard bag-of-words methods on several existing location recognition datasets.",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality are exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images."
]
} |
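The PlaNet row above frames geolocation as classification over an adaptive multi-scale partition of the earth's surface: cells are subdivided where geotagged training photos are dense and left coarse where they are sparse. A minimal sketch of that idea, using a toy quadtree over latitude/longitude in place of the S2 cell hierarchy the paper builds on (the function name, threshold, and depth limit here are illustrative assumptions, not taken from the paper):

```python
def build_cells(photos, max_per_cell,
                bounds=(-90.0, 90.0, -180.0, 180.0), depth=0, max_depth=8):
    """Recursively split a (lat, lng) box until each leaf holds at most
    max_per_cell photos (or max_depth is reached). Returns a list of
    (bounds, photos_in_cell) leaves; each leaf would be one class label.
    Intervals are half-open, so every photo lands in exactly one leaf."""
    lat_lo, lat_hi, lng_lo, lng_hi = bounds
    inside = [(la, ln) for (la, ln) in photos
              if lat_lo <= la < lat_hi and lng_lo <= ln < lng_hi]
    if len(inside) <= max_per_cell or depth == max_depth:
        return [(bounds, inside)]
    lat_mid = (lat_lo + lat_hi) / 2.0
    lng_mid = (lng_lo + lng_hi) / 2.0
    leaves = []
    for sub in [(lat_lo, lat_mid, lng_lo, lng_mid),   # four quadrants
                (lat_lo, lat_mid, lng_mid, lng_hi),
                (lat_mid, lat_hi, lng_lo, lng_mid),
                (lat_mid, lat_hi, lng_mid, lng_hi)]:
        leaves.extend(build_cells(inside, max_per_cell, sub,
                                  depth + 1, max_depth))
    return leaves
```

Each resulting leaf becomes one output class for the deep network; at inference time the center of the most probable cell would be reported as the photo's location. Densely photographed regions thus get many fine cells while sparse regions stay coarse, which is the property the paper's multi-scale geographic cells provide.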
1602.05314 | 2284646714 | Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model. | While matching against geotagged images can provide the rough location of a query photo, some applications require the exact 6-DoF camera pose. Some approaches achieve this goal using 3D models reconstructed using structure-from-motion from internet photos. A query image is localized by establishing correspondences between its interest points and the points in the 3D model and solving the resulting perspective-n-point (PnP) problem to obtain the camera parameters @cite_52 @cite_36 @cite_47 . Because matching the query descriptors against the 3D model descriptors is expensive, some approaches combine this technique with efficient image retrieval based on inverted indices @cite_20 @cite_6 @cite_53 . | {
"cite_N": [
"@cite_36",
"@cite_53",
"@cite_52",
"@cite_6",
"@cite_47",
"@cite_20"
],
"mid": [
"1616969904",
"2046166954",
"1565312575",
"2125795712",
"2129000642",
"2090080269"
],
"abstract": [
"We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.",
"To reliably determine the camera pose of an image relative to a 3D point cloud of a scene, correspondences between 2D features and 3D points are needed. Recent work has demonstrated that directly matching the features against the points outperforms methods that take an intermediate image retrieval step in terms of the number of images that can be localized successfully. Yet, direct matching is inherently less scalable than retrieval-based approaches. In this paper, we therefore analyze the algorithmic factors that cause the performance gap and identify false positive votes as the main source of the gap. Based on a detailed experimental evaluation, we show that retrieval methods using a selective voting scheme are able to outperform state-of-the-art direct matching methods. We explore how both selective voting and correspondence computation can be accelerated by using a Hamming embedding of feature descriptors. Furthermore, we introduce a new dataset with challenging query images for the evaluation of image-based localization.",
"We present a fast, simple location recognition and image localization method that leverages feature correspondence and geometry estimated from large Internet photo collections. Such recovered structure contains a significant amount of useful information about images and image features that is not available when considering images in isolation. For instance, we can predict which views will be the most common, which feature points in a scene are most reliable, and which features in the scene tend to co-occur in the same image. Based on this information, we devise an adaptive, prioritized algorithm for matching a representative set of SIFT features covering a large scene to a query image for efficient localization. Our approach is based on considering features in the scene database, and matching them to query image features, as opposed to more conventional methods that match image features to visual words or database features. We find this approach results in improved performance, due to the richer knowledge of characteristics of the database features compared to query image features. We present experiments on two large city-scale photo collections, showing that our algorithm compares favorably to image retrieval-style approaches to location recognition.",
"Efficient view registration with respect to a given 3D reconstruction has many applications like inside-out tracking in indoor and outdoor environments, and geo-locating images from large photo collections. We present a fast location recognition technique based on structure from motion point clouds. Vocabulary tree-based indexing of features directly returns relevant fragments of 3D models instead of documents from the image database. Additionally, we propose a compressed 3D scene representation which improves recognition rates while simultaneously reducing the computation time and the memory consumption. The design of our method is based on algorithms that efficiently utilize modern graphics processing units to deliver real-time performance for view registration. We demonstrate the approach by matching hand-held outdoor videos to known 3D urban models, and by registering images from online photo collections to the corresponding landmarks.",
"Recently developed Structure from Motion (SfM) reconstruction approaches enable the creation of large scale 3D models of urban scenes. These compact scene representations can then be used for accurate image-based localization, creating the need for localization approaches that are able to efficiently handle such large amounts of data. An important bottleneck is the computation of 2D-to-3D correspondences required for pose estimation. Current state-of-the-art approaches use indirect matching techniques to accelerate this search. In this paper we demonstrate that direct 2D-to-3D matching methods have a considerable potential for improving registration performance. We derive a direct matching framework based on visual vocabulary quantization and a prioritized correspondence search. Through extensive experiments, we show that our framework efficiently handles large datasets and outperforms current state-of-the-art methods.",
"Recognizing the location of a query image by matching it to an image database is an important problem in computer vision, and one for which the representation of the database is a key issue. We explore new ways for exploiting the structure of an image database by representing it as a graph, and show how the rich information embedded in such a graph can improve bag-of-words-based location recognition methods. In particular, starting from a graph based on visual connectivity, we propose a method for selecting a set of overlapping subgraphs and learning a local distance function for each subgraph using discriminative techniques. For a query image, each database image is ranked according to these local distance functions in order to place the image in the right part of the graph. In addition, we propose a probabilistic method for increasing the diversity of these ranked database images, again based on the structure of the image graph. We demonstrate that our methods improve performance over standard bag-of-words methods on several existing location recognition datasets."
]
} |
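The 2D-to-3D pipeline described in this row ends in a perspective-n-point problem: given correspondences between image interest points and 3D model points, recover the camera. The classical Direct Linear Transform illustrates the core idea by recovering the full 3×4 projection matrix from six or more correspondences; production localizers instead run calibrated PnP solvers inside RANSAC, so this NumPy sketch is illustrative only:

```python
import numpy as np

def dlt_pnp(X, x):
    """Direct Linear Transform: recover the 3x4 projection matrix P from
    n >= 6 correspondences between 3D points X (n, 3) and their 2D
    projections x (n, 2). Each correspondence contributes two rows to a
    homogeneous system A p = 0, solved via SVD (smallest singular vector)."""
    rows = []
    for Xi, (u, v) in zip(X, x):
        Xh = np.append(Xi, 1.0)  # homogeneous 3D point
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)  # defined up to scale

def project(P, X):
    """Project 3D points X (n, 3) through P and dehomogenize to pixels."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]
```

Given an intrinsic matrix, the estimated P can be decomposed into rotation and translation, yielding the 6-DoF pose the row refers to; the retrieval step discussed there serves only to shrink the set of 3D points this matching has to consider.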
1602.05314 | 2284646714 | Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model. | Instead of matching against a flat collection of photos, landmark recognition systems @cite_39 @cite_18 @cite_19 @cite_8 @cite_45 build a database of landmarks by clustering images from internet photo collections. The landmarks in a query image are recognized by retrieving matching database images and returning the landmark associated with them. Instead of using image retrieval, @cite_15 @cite_21 use SVMs trained on BoVW of landmark clusters to decide which landmark is shown in a query image. Instead of operating on image clusters, @cite_32 train one exemplar SVM for each image in a dataset of street view images. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_21",
"@cite_32",
"@cite_39",
"@cite_19",
"@cite_45",
"@cite_15"
],
"mid": [
"2540698934",
"1979572481",
"2536627426",
"1995288918",
"2121348293",
"2064859577",
"2136451880",
"2147854204"
],
"abstract": [
"The state of the art in visual object retrieval from large databases allows searching millions of images at the object level. Recently, complementary works have proposed systems to crawl large object databases from community photo collections on the Internet. We combine these two lines of work to a large-scale system for auto-annotation of holiday snaps. The resulting method allows for automatic labeling of objects such as landmark buildings, scenes, pieces of art, etc. at the object level in a fully automatic manner. The labeling is multi-modal and consists of textual tags, geographic location, and related content on the Internet. Furthermore, the efficiency of the retrieval process is optimized by creating more compact and precise indices for visual vocabularies using background information obtained in the crawling stage of the system. We demonstrate the scalability and precision of the proposed method by conducting experiments on millions of images downloaded from community photo collections on the Internet.",
"In this paper, we describe an approach for mining images of objects (such as touristic sights) from community photo collections in an unsupervised fashion. Our approach relies on retrieving geotagged photos from those web-sites using a grid of geospatial tiles. The downloaded photos are clustered into potentially interesting entities through a processing pipeline of several modalities, including visual, textual and spatial proximity. The resulting clusters are analyzed and are automatically classified into objects and events. Using mining techniques, we then find text labels for these clusters, which are used to again assign each cluster to a corresponding Wikipedia article in a fully unsupervised manner. A final verification step uses the contents (including images) from the selected Wikipedia article to verify the cluster-article assignment. We demonstrate this approach on several urban areas, densely covering an area of over 700 square kilometers and mining over 200,000 photos, making it probably the largest experiment of its kind to date.",
"With the rise of photo-sharing websites such as Facebook and Flickr has come dramatic growth in the number of photographs online. Recent research in object recognition has used such sites as a source of image data, but the test images have been selected and labeled by hand, yielding relatively small validation sets. In this paper we study image classification on a much larger dataset of 30 million images, of which nearly 2 million have been labeled into one of 500 categories. The dataset and categories are formed automatically from geotagged photos from Flickr, by looking for peaks in the spatial geotag distribution corresponding to frequently-photographed landmarks. We learn models for these landmarks with a multiclass support vector machine, using vector-quantized interest point descriptors as features. We also explore the non-visual information available on modern photo-sharing sites, showing that using textual tags and temporal constraints leads to significant improvements in classification rate. We find that in some cases image features alone yield comparable classification accuracy to using text tags as well as to the performance of human observers.",
"The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database in a similar manner to per-exemplar SVMs in object recognition. Second, as only few positive training examples are available for each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The calibration we propose relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate improved place recognition accuracy of the proposed approach over the previous work.",
"State of the art data mining and image retrieval in community photo collections typically focus on popular subsets, e.g. images containing landmarks or associated to Wikipedia articles. We propose an image clustering scheme that, seen as vector quantization, compresses a large corpus of images by grouping visually consistent ones while providing a guaranteed distortion bound. This allows us, for instance, to represent the visual content of all the thousands of images depicting the Parthenon in just a few dozens of scene maps and still be able to retrieve any single, isolated, non-landmark image like a house or graffiti on a wall. Starting from a geo-tagged dataset, we first group images geographically and then visually, where each visual cluster is assumed to depict different views of the same scene. We align all views to one reference image and construct a 2D scene map by preserving details from all images while discarding repeating visual features. Our indexing, retrieval and spatial matching scheme then operates directly on scene maps. We evaluate the precision of the proposed method on a challenging one-million urban image dataset.",
"The recognition of a place depicted in an image typically adopts methods from image retrieval in large-scale databases. First, a query image is described as a “bag-of-features” and compared to every image in the database. Second, the most similar images are passed to a geometric verification stage. However, this is an inefficient approach when considering that some database images may be almost identical, and many image features may not repeatedly occur. We address this issue by clustering similar database images to represent distinct scenes, and tracking local features that are consistently detected to form a set of real-world landmarks. Query images are then matched to landmarks rather than features, and a probabilistic model of landmark properties is learned from the cluster to appropriately verify or reject putative feature matches. We present novelties in both a bag-of-features retrieval and geometric verification stage based on this concept. Results on a database of 200K images of popular tourist destinations show improvements in both recognition performance and efficiency compared to traditional image retrieval methods.",
"Modeling and recognizing landmarks at world-scale is a useful yet challenging task. There exists no readily available list of worldwide landmarks. Obtaining reliable visual models for each landmark can also pose problems, and efficiency is another challenge for such a large scale system. This paper leverages the vast amount of multimedia data on the Web, the availability of an Internet image search engine, and advances in object recognition and clustering techniques, to address these issues. First, a comprehensive list of landmarks is mined from two sources: (1) 20 million GPS-tagged photos and (2) online tour guide Web pages. Candidate images for each landmark are then obtained from photo sharing Websites or by querying an image search engine. Second, landmark visual models are built by pruning candidate images using efficient image matching and unsupervised clustering techniques. Finally, the landmarks and their visual models are validated by checking authorship of their member images. The resulting landmark recognition engine incorporates 5312 landmarks from 1259 cities in 144 countries. The experiments demonstrate that the engine can deliver satisfactory recognition performance with high efficiency.",
"In this paper we propose a new technique for learning a discriminative codebook for local feature descriptors, specifically designed for scalable landmark classification. The key contribution lies in exploiting the knowledge of correspondences within sets of feature descriptors during code-book learning. Feature correspondences are obtained using structure from motion (SfM) computation on Internet photo collections which serve as the training data. Our codebook is defined by a random forest that is trained to map corresponding feature descriptors into identical codes. Unlike prior forest-based codebook learning methods, we utilize fine-grained descriptor labels and address the challenge of training a forest with an extremely large number of labels. Our codebook is used with various existing feature encoding schemes and also a variant we propose for importance-weighted aggregation of local features. We evaluate our approach on a public dataset of 25 landmarks and our new dataset of 620 landmarks (614K images). Our approach significantly outperforms the state of the art in landmark classification. Furthermore, our method is memory efficient and scalable."
]
} |
1602.05314 | 2284646714 | Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model. | A task related to image geolocation is scene recognition, for which the SUN database @cite_11 is an established benchmark. The database consists of 131k images categorized into 908 scene categories such as mountain, cathedral or staircase. The SUN survey paper @cite_11 shows that Overfeat @cite_29 , a CNN trained on ImageNet @cite_16 images, consistently outperforms other approaches, including global descriptors like GIST and local descriptors like SIFT, motivating our use of CNNs for image geolocation. | {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_11"
],
"mid": [
"1487583988",
"2108598243",
"1977766639"
],
"abstract": [
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Progress in scene understanding requires reasoning about the rich and diverse visual environments that make up our daily experience. To this end, we propose the Scene Understanding database, a nearly exhaustive collection of scenes categorized at the same level of specificity as human discourse. The database contains 908 distinct scene categories and 131,072 images. Given this data with both scene and object labels available, we perform in-depth analysis of co-occurrence statistics and the contextual relationship. To better understand this large scale taxonomy of scene categories, we perform two human experiments: we quantify human scene recognition accuracy, and we measure how typical each image is of its assigned scene category. Next, we perform computational experiments: scene recognition with global image features, indoor versus outdoor classification, and \"scene detection,\" in which we relax the assumption that one image depicts only one scene category. Finally, we relate human experiments to machine performance and explore the relationship between human and machine recognition errors and the relationship between image \"typicality\" and machine recognition accuracy."
]
} |
1602.05314 | 2284646714 | Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model. | In Sec. , we extend PlaNet to geolocate sequences of images using LSTMs. Several previous approaches have also realized the potential of exploiting temporal coherence to geolocate images. @cite_31 @cite_21 first cluster the photo collection into landmarks and then learn to predict the sequence of landmarks in a query photo sequence. While @cite_31 train a Hidden Markov Model (HMM) on a dataset of photo albums to learn popular tourist routes, @cite_21 train a structured SVM that uses temporal information as an additional feature. Images2GPS @cite_9 also trains an HMM, but instead of landmarks, its classes are a set of geographical cells partitioning the surface of the earth. This is similar to our approach; however, we use a much finer discretization. | {
"cite_N": [
"@cite_9",
"@cite_31",
"@cite_21"
],
"mid": [
"2537480791",
"2002249106",
"2536627426"
],
"abstract": [
"This paper presents a method for estimating geographic location for sequences of time-stamped photographs. A prior distribution over travel describes the likelihood of traveling from one location to another during a given time interval. This distribution is based on a training database of 6 million photographs from Flickr.com. An image likelihood for each location is defined by matching a test photograph against the training database. Inferring location for images in a test sequence is then performed using the Forward-Backward algorithm, and the model can be adapted to individual users as well. Using temporal constraints allows our method to geolocate images without recognizable landmarks, and images with no geographic cues whatsoever. This method achieves a substantial performance improvement over the best-available baseline, and geolocates some users' images with near-perfect accuracy.",
"Image-based location estimation methods typically recognize every photo independently, and their resulting reliance on strong visual feature matches makes them most suited for distinctive landmark scenes. We observe that when touring a city, people tend to follow common travel patterns — for example, a stroll down Wall Street might be followed by a ferry ride, then a visit to the Statue of Liberty. We propose an approach that learns these trends directly from online image data, and then leverages them within a Hidden Markov Model to robustly estimate locations for novel sequences of tourist photos. We further devise a set-to-set matching-based likelihood that treats each “burst” of photos from the same camera as a single observation, thereby better accommodating images that may not contain particularly distinctive scenes. Our experiments with two large datasets of major tourist cities clearly demonstrate the approach's advantages over methods that recognize each photo individually, as well as a simpler HMM baseline that lacks the proposed burst-based observation model.",
"With the rise of photo-sharing websites such as Facebook and Flickr has come dramatic growth in the number of photographs online. Recent research in object recognition has used such sites as a source of image data, but the test images have been selected and labeled by hand, yielding relatively small validation sets. In this paper we study image classification on a much larger dataset of 30 million images, including nearly 2 million of which have been labeled into one of 500 categories. The dataset and categories are formed automatically from geotagged photos from Flickr, by looking for peaks in the spatial geotag distribution corresponding to frequently-photographed landmarks. We learn models for these landmarks with a multiclass support vector machine, using vector-quantized interest point descriptors as features. We also explore the non-visual information available on modern photo-sharing sites, showing that using textual tags and temporal constraints leads to significant improvements in classification rate. We find that in some cases image features alone yield comparable classification accuracy to using text tags as well as to the performance of human observers."
]
} |
1602.05314 | 2284646714 | Is it possible to determine the location of a photo from just its pixels? While the general problem seems exceptionally difficult, photos often contain cues such as landmarks, weather patterns, vegetation, road markings, or architectural details, which in combination allow one to infer where the photo was taken. Previously, this problem has been approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model. | In summary, most previous approaches to photo geolocation are restricted to urban areas, which are densely covered by street view imagery and tourist photos. Exceptions are Im2GPS @cite_7 @cite_28 and @cite_38 @cite_23 @cite_24 , which make additional use of satellite imagery. Prior work has shown that CNNs are well-suited for scene classification @cite_11 and geographical attribute prediction @cite_27 , but to our knowledge ours is the first method that directly takes a classification approach to geolocation using CNNs. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_28",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_11"
],
"mid": [
"2081418428",
"1905312050",
"2103163130",
"2199890863",
"2087475273",
"1946093182",
"1977766639"
],
"abstract": [
"The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km2 region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.",
"In this chapter, we explore the task of global image geolocalization—estimating where on the Earth a photograph was captured. We examine variants of the “im2gps” algorithm using millions of “geotagged” Internet photographs as training data. We first discuss a simple to understand nearest-neighbor baseline. Next, we introduce a lazy-learning approach with more sophisticated features that doubles the performance of the original “im2gps” algorithm. Beyond quantifying geolocalization accuracy, we also analyze (a) how the nonuniform distribution of training data impacts the algorithm (b) how performance compares to baselines such as random guessing and land-cover recognition and (c) whether geolocalization is simply landmark or “instance level” recognition at a large scale. We also show that geolocation estimates can provide the basis for image understanding tasks such as population density estimation or land cover estimation. This work was originally described, in part, in “im2gps” [9] which was the first attempt at global geolocalization using Internet-derived training data.",
"Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earthpsilas surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.",
"We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.",
"Geographic location is a powerful property for organizing large-scale photo collections, but only a small fraction of online photos are geo-tagged. Most work in automatically estimating geo-tags from image content is based on comparison against models of buildings or landmarks, or on matching to large reference collections of geotagged images. These approaches work well for frequently photographed places like major cities and tourist destinations, but fail for photos taken in sparsely photographed places where few reference photos exist. Here we consider how to recognize general geo-informative attributes of a photo, e.g. the elevation gradient, population density, demographics, etc. of where it was taken, instead of trying to estimate a precise geo-tag. We learn models for these attributes using a large (noisy) set of geo-tagged images from Flickr by training deep convolutional neural networks (CNNs). We evaluate on over a dozen attributes, showing that while automatically recognizing some attributes is very difficult, others can be automatically estimated with about the same accuracy as a human.",
"The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.",
"Progress in scene understanding requires reasoning about the rich and diverse visual environments that make up our daily experience. To this end, we propose the Scene Understanding database, a nearly exhaustive collection of scenes categorized at the same level of specificity as human discourse. The database contains 908 distinct scene categories and 131,072 images. Given this data with both scene and object labels available, we perform in-depth analysis of co-occurrence statistics and the contextual relationship. To better understand this large scale taxonomy of scene categories, we perform two human experiments: we quantify human scene recognition accuracy, and we measure how typical each image is of its assigned scene category. Next, we perform computational experiments: scene recognition with global image features, indoor versus outdoor classification, and \"scene detection,\" in which we relax the assumption that one image depicts only one scene category. Finally, we relate human experiments to machine performance and explore the relationship between human and machine recognition errors and the relationship between image \"typicality\" and machine recognition accuracy."
]
} |
1602.05437 | 2277970215 | For a pair of positive parameters @math , a partition @math of the vertex set @math of an @math -vertex graph @math into disjoint clusters of diameter at most @math each is called a @math network decomposition, if the supergraph @math , obtained by contracting each of the clusters of @math , can be properly @math -colored. The decomposition @math is said to be strong (resp., weak) if each of the clusters has strong (resp., weak) diameter at most @math , i.e., if for every cluster @math and every two vertices @math , the distance between them in the induced graph @math of @math (resp., in @math ) is at most @math . Network decomposition is a powerful construct, very useful in distributed computing and beyond. It was shown by Awerbuch et al. AGLP89 and Panconesi and Srinivasan PS92 that strong @math network decompositions can be computed in @math distributed time. Linial and Saks LS93 devised an ingenious randomized algorithm that constructs weak @math network decompositions in @math time. It was however open till now whether strong network decompositions with both parameters @math can be constructed in distributed @math time. In this paper we answer this long-standing open question in the affirmative, and show that strong @math network decompositions can be computed in @math time. We also present a tradeoff between parameters of our network decomposition. Our work is inspired by and relies on the "shifted shortest path approach", due to Blelloch et al. BGKMPT11 and Miller et al. MPX13 . These authors developed this approach for PRAM algorithms for padded partitions. We adapt their approach to network decompositions in the distributed model of computation. | Barenboim et al. @cite_14 devised a randomized constant-time algorithm for constructing strong @math network decompositions, for an arbitrarily small constant @math . Kutten et al. @cite_1 extended the algorithm of Linial and Saks @cite_11 for constructing network decompositions to hypergraphs.
A long line of research developed network decompositions for graphs of bounded growth, see, e.g., @cite_13 @cite_5 @cite_2 . | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2398784061",
"103874125",
"1997707549",
"2114986147",
"2052811387",
"2092349993"
],
"abstract": [
"",
"Fundamental local symmetry breaking problems such as Maximal Independent Set (MIS) and coloring have been recognized as important by the community, and studied extensively in (standard) graphs. In particular, fast (i.e., logarithmic run time) randomized algorithms are well-established for MIS and Δ + 1-coloring in both the LOCAL and CONGEST distributed computing models. On the other hand, comparatively much less is known on the complexity of distributed symmetry breaking in hypergraphs. In particular, a key question is whether a fast (randomized) algorithm for MIS exists for hypergraphs.",
"We present a novel distributed algorithm for the maximal independent set (MIS) problem. On growth-bounded graphs (GBG) our deterministic algorithm finishes in O(log* n) time, n being the number of nodes. In light of Linial's Ω(log* n) lower bound our algorithm is asymptotically optimal. Our algorithm answers prominent open problems in the ad hoc sensor network domain. For instance, it solves the connected dominating set problem for unit disk graphs in O(log* n) time, exponentially faster than the state-of-the-art algorithm. With a new extension our algorithm also computes a delta+1 coloring in O(log* n) time, where delta is the maximum degree of the graph.",
"Many large-scale networks such as ad hoc and sensor networks, peer-to-peer networks, or the Internet have the property that the number of independent nodes does not grow arbitrarily when looking at neighborhoods of increasing size. Due to this bounded \"volume growth,\" one could expect that distributed algorithms are able to solve many problems more efficiently than on general graphs. The goal of this paper is to help understanding the distributed complexity of problems on \"bounded growth\" graphs. We show that on the widely used unit disk graph, covering and packing linear programs can be approximated by constant factors in constant time. For a more general network model which is based on the assumption that nodes are in a metric space of constant doubling dimension, we show that in O(log*!n) rounds it is possible to construct a (O(1), O(1))-network decomposition. This results in asymptotically optimal O(log*!n) time algorithms for many important problems.",
"The efficient distributed construction of a maximal independent set (MIS) of a graph is of fundamental importance. We study the problem in the class of Growth-Bounded Graphs, which includes for example the well-known Unit Disk Graphs. In contrast to the fastest (time-optimal) existing approach [11], we assume that no geometric information (e.g., distances in the graph's embedding) is given. Instead, nodes employ randomization for their decisions. Our algorithm computes a MIS in O(log log n • log* n) rounds with very high probability for graphs with bounded growth, where n denotes the number of nodes in the graph. In view of Linial's Ω(log* n) lower bound for computing a MIS in ring networks [12], which was extended to randomized algorithms independently by Naor [18] and Linial [13], our solution is close to optimal. In a nutshell, our algorithm shows that for computing a MIS, randomization is a viable alternative to distance information.",
"Chapter 36 Decomposing Graphs into Regions of Small Diameter* Nathan Linialt Michael .lss A decomposition of a graph G = (V, E) is a partition of the vertex set into subsets (called lhks). The diameter of a decomposition is the least. d such that any two vertices belonging to the same connected component of a block are at distance < d. In this paper we prove (nearly best possible) statements of the form: .4ny n–vertex graph has a decomposition into a small number of blocks each having small diameter. Such decompositions provide a tool for efficiently decentralizing distributed computations. In [AGLP1 it was shown that every graph has a decomposition into at most s(n) blocks of diameter at most s(n) for s(n) = o( loglog d h n). usinga,technique of Awerbuch [A] and Awerbuch and Peleg [AP], we improve this result by showing that every graph has a decomposition of diameter ()(log n) into O(log n) blocks. In addition, we give a randomized distributed algorithm that produces such a decomposition and runs in time 0(log2 n). The construction can be parametrized to provide decompositions that trade-off between the number of blocks and the diameter. We show that this trade-off is nearly best possible for two families of graphs the first consists of skeletons of certain triangulations of a simplex and the second consists of grid graphs with added diagonals. The proofs in both cases rely on basic results in combinatorial topology, Sperner’s lemma for the first class and Tucker’s lemma for the second. *This work was supported in part by NSF contracts DMS87-03541 and CCR-8911388 tDepartment of Computer Science, Hebrew University, Jerusalem, Israel. + ‘Department of Computer Science and Engineering, Mail Code C-014, University of California, San Diego, La Jolla, CA 92093-0114."
]
} |
1602.05352 | 2952613481 | Most data for evaluating and training recommender systems is subject to selection biases, either through self-selection by the users or through the actions of the recommendation system itself. In this paper, we provide a principled approach to handling selection biases, adapting models and estimation techniques from causal inference. The approach leads to unbiased performance estimators despite biased data, and to a matrix factorization method that provides substantially improved prediction performance on real-world data. We theoretically and empirically characterize the robustness of the approach, finding that it is highly practical and scalable. | Past work that explicitly dealt with the MNAR nature of recommendation data approached the problem as missing-data imputation based on the joint likelihood of the missing data model and the rating model @cite_18 @cite_23 @cite_17 . This has led to sophisticated and highly complex methods. We take a fundamentally different approach that treats both models separately, making our approach modular and scalable. Furthermore, our approach is robust to mis-specification of the rating model, and we characterize how the overall learning process degrades gracefully under a mis-specified missing-data model. We empirically compare against the state-of-the-art joint likelihood model @cite_17 in this paper. | {
"cite_N": [
"@cite_18",
"@cite_23",
"@cite_17"
],
"mid": [
"",
"2020631728",
"2157519573"
],
"abstract": [
"",
"A fundamental aspect of rating-based recommender systems is the observation process, the process by which users choose the items they rate. Nearly all research on collaborative filtering and recommender systems is founded on the assumption that missing ratings are missing at random. The statistical theory of missing data shows that incorrect assumptions about missing data can lead to biased parameter estimation and prediction. In a recent study, we demonstrated strong evidence for violations of the missing at random condition in a real recommender system. In this paper we present the first study of the effect of non-random missing data on collaborative ranking, and extend our previous results regarding the impact of non-random missing data on collaborative prediction.",
"We propose a probabilistic matrix factorization model for collaborative filtering that learns from data that is missing not at random (MNAR). Matrix factorization models exhibit state-of-the-art predictive performance in collaborative filtering. However, these models usually assume that the data is missing at random (MAR), and this is rarely the case. For example, the data is not MAR if users rate items they like more than ones they dislike. When the MAR assumption is incorrect, inferences are biased and predictive performance can suffer. Therefore, we model both the generative process for the data and the missing data mechanism. By learning these two models jointly we obtain improved performance over state-of-the-art methods when predicting the ratings and when modeling the data observation process. We present the first viable MF model for MNAR data. Our results are promising and we expect that further research on NMAR models will yield large gains in collaborative filtering."
]
} |
1602.05352 | 2952613481 | Most data for evaluating and training recommender systems is subject to selection biases, either through self-selection by the users or through the actions of the recommendation system itself. In this paper, we provide a principled approach to handling selection biases, adapting models and estimation techniques from causal inference. The approach leads to unbiased performance estimators despite biased data, and to a matrix factorization method that provides substantially improved prediction performance on real-world data. We theoretically and empirically characterize the robustness of the approach, finding that it is highly practical and scalable. | Related but different from the problem we consider is recommendation from positive feedback alone @cite_4 @cite_22 . Related to this setting are also alternative approaches to learning with MNAR data @cite_11 @cite_24 @cite_25 , which aim to avoid the problem by considering performance measures less affected by selection bias under mild assumptions. Of these works, the approach of is most closely related to ours, since it defines a recall estimator that uses item popularity as a proxy for propensity. Similar to our work, and also derive weighted matrix factorization methods, but with weighting schemes that are either heuristic or need to be tuned via cross validation. In contrast, our weighted matrix factorization method enjoys rigorous learning guarantees in an ERM framework. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_24",
"@cite_25",
"@cite_11"
],
"mid": [
"2101409192",
"2950152545",
"",
"2026773017",
"1992665562"
],
"abstract": [
"A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.",
"Collaborative filtering analyzes user preferences for items (e.g., books, movies, restaurants, academic papers) by exploiting the similarity patterns across users. In implicit feedback settings, all the items, including the ones that a user did not consume, are taken into consideration. But this assumption does not accord with the common sense understanding that users have a limited scope and awareness of items. For example, a user might not have heard of a certain paper, or might live too far away from a restaurant to experience it. In the language of causal analysis, the assignment mechanism (i.e., the items that a user is exposed to) is a latent variable that may change for various user item combinations. In this paper, we propose a new probabilistic approach that directly incorporates user exposure to items into collaborative filtering. The exposure is modeled as a latent variable and the model infers its value from data. In doing so, we recover one of the most successful state-of-the-art approaches as a special case of our model, and provide a plug-in method for conditioning exposure on various forms of exposure covariates (e.g., topics in text, venue locations). We show that our scalable inference algorithm outperforms existing benchmarks in four different domains both with and without exposure covariates.",
"",
"In implicit feedback datasets, non-interaction of a user with an item does not necessarily indicate that an item is irrelevant for the user. Thus, evaluation measures computed on the observed feedback may not accurately reflect performance on the complete data. In this paper, we discuss a missing data model for implicit feedback and propose a novel evaluation measure oriented towards Top-N recommendation. Our evaluation measure admits unbiased estimation under our missing data model, unlike the popular Normalized Discounted Cumulative Gain (NDCG) measure. We also derive an efficient algorithm to optimize the measure on the training data. We run several experiments which demonstrate the utility of our proposed measure.",
"Users typically rate only a small fraction of all available items. We show that the absence of ratings carries useful information for improving the top-k hit rate concerning all items, a natural accuracy measure for recommendations. As to test recommender systems, we present two performance measures that can be estimated, under mild assumptions, without bias from data even when ratings are missing not at random (MNAR). As to achieve optimal test results, we present appropriate surrogate objective functions for efficient training on MNAR data. Their main property is to account for all ratings - whether observed or missing in the data. Concerning the top-k hit rate on test data, our experiments indicate dramatic improvements over even sophisticated methods that are optimized on observed ratings only."
]
} |
1602.05352 | 2952613481 | Most data for evaluating and training recommender systems is subject to selection biases, either through self-selection by the users or through the actions of the recommendation system itself. In this paper, we provide a principled approach to handling selection biases, adapting models and estimation techniques from causal inference. The approach leads to unbiased performance estimators despite biased data, and to a matrix factorization method that provides substantially improved prediction performance on real-world data. We theoretically and empirically characterize the robustness of the approach, finding that it is highly practical and scalable. | Propensity-based approaches have been widely used in causal inference from observational studies @cite_39 , as well as in complete-case analysis for missing data @cite_0 @cite_12 and in survey sampling @cite_7 . However, their use in matrix completion is new to our knowledge. Weighting approaches are also widely used in domain adaptation and covariate shift, where data from one source is used to train for a different problem (e.g., Huang et al. 2006; Bickel et al. 2009; Sugiyama and Kawanabe 2012). We will draw upon this work, especially the learning theory of weighting approaches in @cite_27 @cite_6 . | {
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_39",
"@cite_0",
"@cite_27",
"@cite_12"
],
"mid": [
"",
"2111355007",
"2263423262",
"2044758663",
"2949407303",
"1968185435"
],
"abstract": [
"",
"This paper presents an analysis of importance weighting for learning from finite samples and gives a series of theoretical and algorithmic results. We point out simple cases where importance weighting can fail, which suggests the need for an analysis of the properties of this technique. We then give both upper and lower bounds for generalization with bounded importance weights and, more significantly, give learning guarantees for the more common case of unbounded importance weights under the weak assumption that the second moment is bounded, a condition related to the Renyi divergence of the training and test distributions. These results are based on a series of novel and general bounds we derive for unbounded loss functions, which are of independent interest. We use these bounds to guide the definition of an alternative reweighting algorithm and report the results of experiments demonstrating its benefits. Finally, we analyze the properties of normalized importance weights which are also commonly used.",
"Most questions in social and biomedical sciences are causal in nature: what would happen to individuals, or to groups, if part of their environment were changed? In this groundbreaking text, two world-renowned experts present statistical methods for studying such questions. This book starts with the notion of potential outcomes, each corresponding to the outcome that would be realized if a subject were exposed to a particular treatment or regime. In this approach, causal effects are comparisons of such potential outcomes. The fundamental problem of causal inference is that we can only observe one of the potential outcomes for a particular subject. The authors discuss how randomized experiments allow us to assess causal effects and then turn to observational studies. They lay out the assumptions needed for causal inference and describe the leading analysis methods, including matching, propensity-score methods, and instrumental variables. Many detailed applications are included, with special focus on practical aspects for the empirical researcher.",
"Preface. PART I: OVERVIEW AND BASIC APPROACHES. Introduction. Missing Data in Experiments. Complete-Case and Available-Case Analysis, Including Weighting Methods. Single Imputation Methods. Estimation of Imputation Uncertainty. PART II: LIKELIHOOD-BASED APPROACHES TO THE ANALYSIS OF MISSING DATA. Theory of Inference Based on the Likelihood Function. Methods Based on Factoring the Likelihood, Ignoring the Missing-Data Mechanism. Maximum Likelihood for General Patterns of Missing Data: Introduction and Theory with Ignorable Nonresponse. Large-Sample Inference Based on Maximum Likelihood Estimates. Bayes and Multiple Imputation. PART III: LIKELIHOOD-BASED APPROACHES TO THE ANALYSIS OF MISSING DATA: APPLICATIONS TO SOME COMMON MODELS. Multivariate Normal Examples, Ignoring the Missing-Data Mechanism. Models for Robust Estimation. Models for Partially Classified Contingency Tables, Ignoring the Missing-Data Mechanism. Mixed Normal and Nonnormal Data with Missing Values, Ignoring the Missing-Data Mechanism. Nonignorable Missing-Data Models. References. Author Index. Subject Index.",
"This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.",
"The simplest approach to dealing with missing data is to restrict the analysis to complete cases, i.e. individuals with no missing values. This can induce bias, however. Inverse probability weighting (IPW) is a commonly used method to correct this bias. It is also used to adjust for unequal sampling fractions in sample surveys. This article is a review of the use of IPW in epidemiological research. We describe how the bias in the complete-case analysis arises and how IPW can remove it. IPW is compared with multiple imputation (MI) and we explain why, despite MI generally being more efficient, IPW may sometimes be preferred. We discuss the choice of missingness model and methods such as weight truncation, weight stabilisation and augmented IPW. The use of IPW is illustrated on data from the 1958 British Birth Cohort."
]
} |
1602.04918 | 2952530979 | Robotic manipulation of deformable objects remains a challenging task. One such task is to iron a piece of cloth autonomously. Given a roughly flattened cloth, the goal is to have an ironing plan that can iteratively apply a regular iron to remove all the major wrinkles by a robot. We present a novel solution to analyze the cloth surface by fusing two surface scan techniques: a curvature scan and a discontinuity scan. The curvature scan can estimate the height deviation of the cloth surface, while the discontinuity scan can effectively detect sharp surface features, such as wrinkles. We use this information to detect the regions that need to be pulled and extended before ironing, and the other regions where we want to detect wrinkles and apply ironing to remove the wrinkles. We demonstrate that our hybrid scan technique is able to capture and classify wrinkles over the surface robustly. Given detected wrinkles, we enable a robot to iron them using shape features. Experimental results show that using our wrinkle analysis algorithm, our robot is able to iron the cloth surface and effectively remove the wrinkles. | There are many challenges associated with the manipulation of a deformable object such as a garment. Many researchers started with recognizing the category and pose of a deformable object using a large database, which contains exemplars either from off-line simulation or real garments @cite_15 @cite_17 @cite_7 @cite_3 . By iterative regrasping of the garment by hands by a robot, the garment finally reaches a stable state that can be placed flat on a table @cite_12 @cite_18 @cite_6 . These methods proceed to garment folding by first parsing of its shape @cite_18 @cite_0 @cite_8 @cite_6 . With the shape parameters, a folding plan can be generated and executed either by a humanoid robot @cite_10 @cite_13 , or by two industrial arms @cite_6 . | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"1808898315",
"",
"2085100591",
"2158292574",
"",
"2032200462",
"1982920857",
"566166310",
"2950229073",
"2105537999",
"2047444538"
],
"abstract": [
"We consider how the tedious chore of folding clothes can be performed by a robot. At the core of our approach is the definition of a cloth model that allows us to reason about the geometry rather than the physics of the cloth in significant parts of the state space. We present an algorithm that, given the geometry of the cloth, computes how many grippers are needed and what the motions of these grippers are to achieve a final configuration specified as a sequence of g-folds: folds that can be achieved while staying in the subset of the state space to which the geometric model applies. G-folds are easy to specify and are sufficiently rich to capture most common cloth folding procedures. We consider folds involving single and stacked layers of material and describe experiments folding towels, shirts, sweaters, and slacks with a Willow Garage PR2 robot. Experiments based on the planner had success rates varying between 5/9 and 9/9 for different clothing articles.",
"",
"Pose estimation of deformable objects is a fundamental and challenging problem in robotics. We present a novel solution to this problem by first reconstructing a 3D model of the object from a low-cost depth sensor such as Kinect, and then searching a database of simulated models in different poses to predict the pose. Given noisy depth images from 360-degree views of the target object acquired from the Kinect sensor, we reconstruct a smooth 3D model of the object using depth image segmentation and volumetric fusion. Then with an efficient feature extraction and matching scheme, we search the database, which contains a large number of deformable objects in different poses, to obtain the most similar model, whose pose is then adopted as the prediction. Extensive experiments demonstrate better accuracy and orders of magnitude speed-up compared to our previous work. An additional benefit of our method is that it produces a high-quality mesh model and camera pose, which is necessary for other tasks such as regrasping and object manipulation. In robotics and computer vision, recognition and manipulation of deformable objects such as garments are well-known challenging tasks. Recently, mature solutions to manipulating rigid objects have emerged and been applied in industry (3). However, in the fabric and food industry, which involve a large number of deformable objects, there is still a large gap between the high demand for automatic operations and the lack of reliable solutions. Compared with rigid objects, deformable objects are much harder to recognize and manipulate, especially because of the large variance of appearance in materials and the way they deform. This variance subsequently makes it difficult to establish a robust recognition pipeline to predict the pose of the deformable objects based on traditional visual sensors, such as regular cameras. However, newly emerged low-cost depth sensors such as Microsoft Kinect can provide accurate depth measurements. With this depth information, a robotic system is able to resolve the ambiguity of visual appearance better, and thus provide higher performance on recognition tasks. Our interests are in detecting the pose of deformable objects such as garments as a part of a larger pipeline for manipulating these objects. Once the robot has identified the pose of the objects, it can then proceed to manipulate those objects, for tasks such as regrasping and garment folding.",
"We consider the problem of recognizing the configuration of clothing articles when crudely spread out on a flat surface, prior to and during folding. At the core of our approach are parametrized shape models for clothing articles. Each clothing category has its own shape model, and the variety in shapes for a given category is achieved through variation of the parameters. We present an efficient algorithm to find the parameters that provide the best fit when given an image of a clothing article. The models are such that, once the parameters have been fit, they provide a basic parse of the clothing article, allowing it to be followed by autonomous folding from category level specifications of fold sequences. Our approach is also able to recover the configuration of a clothing article when folds are being introduced—an important feature towards closing the perception-action loop. Additionally, our approach provides a reliable method of shape-based classification, simply by examining which model yields the best fit. Our experiments illustrate the effectiveness of our approach on a large set of clothing articles. Furthermore, we present an end-to-end system, which starts from an unknown spread-out clothing article, performs a parametrized model fit, then follows a category-level (rather than article specific) set of folding instructions, closing the loop with perceptual feedback by re-fitting between folds.",
"",
"The work addresses the problem of clothing perception and manipulation by a two armed industrial robot aiming at a real-time automated folding of a piece of garment spread out on a flat surface. A complete solution combining vision sensing, garment segmentation and understanding, planning of the manipulation and its real execution on a robot is proposed. A new polygonal model of a garment is introduced. Fitting the model into a segmented garment contour is used to detect garment landmark points. It is shown how folded variants of the unfolded model can be derived automatically. Universality and usefulness of the model is demonstrated by its favorable performance within the whole folding procedure which is applicable to a variety of garments categories (towel, pants, shirt, etc.) and evaluated experimentally using the two armed robot. The principal novelty with respect to the state of the art is in the new garment polygonal model and its manipulation planning algorithm which leads to the speed up by two orders of magnitude.",
"We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.",
"We present Active Random Forests, a novel framework to address active vision problems. State of the art focuses on best viewing parameters selection based on single view classifiers. We propose a multi-view classifier where the decision mechanism of optimally changing viewing parameters is inherent to the classification process. This has many advantages: a) the classifier exploits the entire set of captured images and does not simply aggregate probabilistically per view hypotheses; b) actions are based on learnt disambiguating features from all views and are optimally selected using the powerful voting scheme of Random Forests and c) the classifier can take into account the costs of actions. The proposed framework is applied to the task of autonomously unfolding clothes by a robot, addressing the problem of best viewpoint selection in classification, grasp point and pose estimation of garments. We show great performance improvement compared to state of the art methods.",
"Robotic manipulation of deformable objects remains a challenging task. One such task is folding a garment autonomously. Given start and end folding positions, what is an optimal trajectory to move the robotic arm to fold a garment? Certain trajectories will cause the garment to move, creating wrinkles, and gaps, other trajectories will fail altogether. We present a novel solution to find an optimal trajectory that avoids such problematic scenarios. The trajectory is optimized by minimizing a quadratic objective function in an off-line simulator, which includes material properties of the garment and frictional force on the table. The function measures the dissimilarity between a user folded shape and the folded garment in simulation, which is then used as an error measurement to create an optimal trajectory. We demonstrate that our two-arm robot can follow the optimized trajectories, achieving accurate and efficient manipulations of deformable objects.",
"We consider the problem of autonomously bringing an article of clothing into a desired configuration using a general-purpose two-armed robot. We propose a hidden Markov model (HMM) for estimating the identity of the article and tracking the article's configuration throughout a specific sequence of manipulations and observations. At the end of this sequence, the article's configuration is known, though not necessarily desired. The estimated identity and configuration of the article are then used to plan a second sequence of manipulations that brings the article into the desired configuration. We propose a relaxation of a strain-limiting finite element model for cloth simulation that can be solved via convex optimization; this serves as the basis of the transition and observation models of the HMM. The observation model uses simple perceptual cues consisting of the height of the article when held by a single gripper and the silhouette of the article when held by two grippers. The model accurately estimates the identity and configuration of clothing articles, enabling our procedure to autonomously bring a variety of articles into desired configurations that are useful for other tasks, such as folding.",
"We present a novel method for classifying and estimating the categories and poses of deformable objects, such as clothing, from a set of depth images. The framework presented here represents the recognition part of the entire pipeline of dexterous manipulation of deformable objects, which contains grasping, recognition, regrasping, placing flat, and folding. We first create an off-line simulation of the deformable objects and capture depth images from different view points as training data. Then by extracting features and applying sparse coding and dictionary learning, we build up a codebook for a set of different poses of a particular deformable object category. The whole framework contains two layers which yield a robust system that first classifies deformable objects on category level and then estimates the current pose from a group of predefined poses of a single deformable object. The system is tested on a variety of similar deformable objects and achieves a high output accuracy. By knowing the current pose of the garment, we can continue with further tasks such as regrasping and folding."
]
} |
1602.04918 | 2952530979 | Robotic manipulation of deformable objects remains a challenging task. One such task is to iron a piece of cloth autonomously. Given a roughly flattened cloth, the goal is to have an ironing plan that can iteratively apply a regular iron to remove all the major wrinkles by a robot. We present a novel solution to analyze the cloth surface by fusing two surface scan techniques: a curvature scan and a discontinuity scan. The curvature scan can estimate the height deviation of the cloth surface, while the discontinuity scan can effectively detect sharp surface features, such as wrinkles. We use this information to detect the regions that need to be pulled and extended before ironing, and the other regions where we want to detect wrinkles and apply ironing to remove the wrinkles. We demonstrate that our hybrid scan technique is able to capture and classify wrinkles over the surface robustly. Given detected wrinkles, we enable a robot to iron them using shape features. Experimental results show that using our wrinkle analysis algorithm, our robot is able to iron the cloth surface and effectively remove the wrinkles. | Robotic ironing of deformable garments is a difficult task primarily because of the complex surface analysis, regrasping, and hybrid force/position control of the iron. Without wrinkle detection, Dai et al. introduced an ironing plan that spreads out the whole garment surface by dividing it into several functional regions @cite_16 . For each region, in terms of the size and shape, an ironing plan is automatically generated. Dai et al. also addressed the ironing problem considering the folding lines @cite_2 . | {
"cite_N": [
"@cite_16",
"@cite_2"
],
"mid": [
"2055885908",
"2086635207"
],
"abstract": [
"Robotic ironing needs multidiscipline and requires a quantitative analysis of garment unfolding and ironing motion. This paper investigates the trajectories and orientation of the ironing process where particular geometry is presented in an analytical way. The trajectories produced from this process are analysed and presented with mathematical models to be possibly implemented in robotic automation. This paper further investigates the orientation of iron during the ironing process. It is revealed that the orientation is dependent on the regions of garment and on the closeness to an operator. The orientation is then integrated into the trajectory and presented in a 3D form in which the vertical axis represent the orientation and horizontal axis represent the position. This type of orientation analysis is then used to find similarity in motions to determine the most effective and efficient way of ironing a garment.",
"Automating domestic ironing is a challenge to the robotic community, particularly in terms of modelling and advanced mechanism design. This paper investigates the ironing process, its relevant folding algorithms and analysis techniques, presents the advanced mechanism synthesis and introduces cross‐disciplinary research. It summarises the second part of the results of a technology study carried out under an EPSRC grant “A Feasibility Study into Robotic Ironing”, and proposes new techniques in developing a folding and unfolding algorithm and in developing a task‐oriented mechanism synthesis for robotic ironing."
]
} |
1602.04984 | 2462784004 | A weakly-supervised semantic segmentation framework with a tied deconvolutional neural network is presented. Each deconvolution layer in the framework consists of unpooling and deconvolution operations. 'Unpooling' upsamples the input feature map based on unpooling switches defined by corresponding convolution layer's pooling operation. 'Deconvolution' convolves the input unpooled features by using convolutional weights tied with the corresponding convolution layer's convolution operation. The unpooling-deconvolution combination helps to eliminate less discriminative features in a feature extraction stage, since output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw one. This results in reduction of false positives in a pixel-level inference stage. All the feature maps restored from the entire deconvolution layers can constitute a rich discriminative feature set according to different abstraction levels. Those features are stacked to be selectively used for generating class-specific activation maps. Under the weak supervision (image-level labels), the proposed framework shows promising results on lesion segmentation in medical images (chest X-rays) and achieves state-of-the-art performance on the PASCAL VOC segmentation dataset in the same experimental condition. | Semantic segmentation can be divided into three categories according to its supervision level; fully-supervised, semi-supervised, and weakly-supervised approaches. In fully-supervised semantic segmentation, pixel-level labels are used for training so it is relatively easier to discriminate details of ROIs on an input image @cite_16 @cite_22 @cite_31 @cite_3 @cite_15 @cite_29 @cite_30 @cite_5 . 
Semi-supervised semantic segmentation approaches are sub-classified into two types according to the form of supervision: bounding-box annotations @cite_28 , which are useful for multi-scale dataset augmentation, or a limited number of segmentation annotations @cite_20 @cite_6 . Although fully- and semi-supervised learning for semantic segmentation performs well in real applications, both require heavy annotation effort in terms of the quality and quantity of annotations. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_15",
"@cite_28",
"@cite_29",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_20"
],
"mid": [
"2949295283",
"1923697677",
"1938976761",
"2949086864",
"2952637581",
"2950612966",
"1529410181",
"809122546",
"",
"1903029394",
"2949847866"
],
"abstract": [
"Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6 average accuracy on the PASCAL VOC 2012 test set.",
"Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called BoxSup, produces competitive results supervised by boxes only, on par with strong baselines fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network.",
"We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset."
]
} |
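The unpooling-switch mechanism described in the abstract above can be illustrated with a minimal NumPy sketch. This is only an illustration of the idea (the function names and the 2x2 window size are assumptions, not taken from the paper): pooling records the argmax position of each window, and unpooling places each pooled value back at that position, zeroing everything else so that only the most discriminative activations survive.

```python
import numpy as np

def max_pool_with_switches(x, k=2):
    """k x k max-pooling that also records the argmax position
    ('switch') of each pooling window, for use in later unpooling."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    switches = np.zeros((h // k, w // k), dtype=int)
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k].ravel()
            switches[i, j] = win.argmax()
            pooled[i, j] = win[switches[i, j]]
    return pooled, switches

def unpool(pooled, switches, k=2):
    """Upsample by placing each pooled value at its recorded switch
    position; all other entries stay zero."""
    h, w = pooled.shape
    out = np.zeros((h * k, w * k))
    for i in range(h):
        for j in range(w):
            di, dj = divmod(switches[i, j], k)
            out[i * k + di, j * k + dj] = pooled[i, j]
    return out
```

For example, pooling a 4x4 map and then unpooling it yields a sparse map of the same size in which each window's maximum sits at its original location.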
1602.04984 | 2462784004 | A weakly-supervised semantic segmentation framework with a tied deconvolutional neural network is presented. Each deconvolution layer in the framework consists of unpooling and deconvolution operations. 'Unpooling' upsamples the input feature map based on unpooling switches defined by corresponding convolution layer's pooling operation. 'Deconvolution' convolves the input unpooled features by using convolutional weights tied with the corresponding convolution layer's convolution operation. The unpooling-deconvolution combination helps to eliminate less discriminative features in a feature extraction stage, since output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw one. This results in reduction of false positives in a pixel-level inference stage. All the feature maps restored from the entire deconvolution layers can constitute a rich discriminative feature set according to different abstraction levels. Those features are stacked to be selectively used for generating class-specific activation maps. Under the weak supervision (image-level labels), the proposed framework shows promising results on lesion segmentation in medical images (chest X-rays) and achieves state-of-the-art performance on the PASCAL VOC segmentation dataset in the same experimental condition. | To overcome the limitations of fully- or semi-supervised approaches, weakly-supervised semantic segmentation methods trained only with image-level labels have recently been presented @cite_14 @cite_7 @cite_0 @cite_21 . In @cite_14 , coarse-grained per-class activation maps are generated from the top convolution layer, followed by per-map aggregation (global pooling). It is quite similar to @cite_12 , a weakly-supervised approach for object localization, which builds per-class activation maps using image-level labels based on max-pooling for per-map aggregation.
A key difference between the two works is that @cite_14 applies several segmentation priors to the coarse-grained output activation maps in order to reduce false positives and improve segmentation performance. In particular, the smoothing prior used in that work rests on the assumption that objects have well-defined boundaries and shapes; in atypical cases such as medical images, however, this assumption is too ambiguous to apply directly. Global pooling over class-specific activation maps, such as max-pooling @cite_12 or @cite_14 , is quite straightforward, so we take this method as the baseline for our work in the following sections. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_21",
"@cite_0",
"@cite_12"
],
"mid": [
"1945608308",
"2952004933",
"1901229278",
"2949163329",
"1994488211"
],
"abstract": [
"We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.",
"We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.",
"Image semantic segmentation is the task of partitioning image into several regions based on semantic concepts. In this paper, we learn a weakly supervised semantic segmentation model from social images whose labels are not pixel-level but image-level; furthermore, these labels might be noisy. We present a joint conditional random field model leveraging various contexts to address this issue. More specifically, we extract global and local features in multiple scales by convolutional neural network and topic model. Inter-label correlations are captured by visual contextual cues and label co-occurrence statistics. The label consistency between image-level and pixel-level is finally achieved by iterative refinement. Experimental results on two real-world image datasets PASCAL VOC2007 and SIFT-Flow demonstrate that the proposed approach outperforms state-of-the-art weakly supervised methods and even achieves accuracy comparable with fully supervised methods.",
"We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN). Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make the segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in an image using an attention model, and subsequently generates foreground segmentation for each highlighted region using decoder. Combining attention model, we show that the decoder trained with segmentation annotations in different categories can boost the performance of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-the-art weakly-supervised techniques in challenging PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset.",
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training."
]
} |
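The global max-pooling aggregation that the record above sets as a baseline can be sketched in a few lines of NumPy. This is a rough illustration under assumed shapes (C per-class activation maps of size H x W), not the authors' implementation:

```python
import numpy as np

def image_level_scores(activation_maps):
    """Collapse per-class activation maps (shape C x H x W) into C
    image-level class scores via global max-pooling, so that
    image-level labels alone can supervise training."""
    c = activation_maps.shape[0]
    return activation_maps.reshape(c, -1).max(axis=1)
```

Each class score is simply the strongest response anywhere in that class's map, which is what lets a classification loss on image-level labels backpropagate into spatial activations.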
1602.04984 | 2462784004 | A weakly-supervised semantic segmentation framework with a tied deconvolutional neural network is presented. Each deconvolution layer in the framework consists of unpooling and deconvolution operations. 'Unpooling' upsamples the input feature map based on unpooling switches defined by corresponding convolution layer's pooling operation. 'Deconvolution' convolves the input unpooled features by using convolutional weights tied with the corresponding convolution layer's convolution operation. The unpooling-deconvolution combination helps to eliminate less discriminative features in a feature extraction stage, since output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw one. This results in reduction of false positives in a pixel-level inference stage. All the feature maps restored from the entire deconvolution layers can constitute a rich discriminative feature set according to different abstraction levels. Those features are stacked to be selectively used for generating class-specific activation maps. Under the weak supervision (image-level labels), the proposed framework shows promising results on lesion segmentation in medical images (chest X-rays) and achieves state-of-the-art performance on the PASCAL VOC segmentation dataset in the same experimental condition. | In @cite_7 , the training objective is phrased as a biconvex optimization for linear models and then relaxed to nonlinear deep networks. Based on this model, additional weak supervision, such as the sizes of the background, the foreground, or objects, can be imposed as linear constraints on the learning objective. These constraints are less informative than pixel-level annotations, but acquiring them still requires additional annotation effort, as in the fully- or semi-supervised approaches. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2952004933"
],
"abstract": [
"We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm."
]
} |
1602.04984 | 2462784004 | A weakly-supervised semantic segmentation framework with a tied deconvolutional neural network is presented. Each deconvolution layer in the framework consists of unpooling and deconvolution operations. 'Unpooling' upsamples the input feature map based on unpooling switches defined by corresponding convolution layer's pooling operation. 'Deconvolution' convolves the input unpooled features by using convolutional weights tied with the corresponding convolution layer's convolution operation. The unpooling-deconvolution combination helps to eliminate less discriminative features in a feature extraction stage, since output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw one. This results in reduction of false positives in a pixel-level inference stage. All the feature maps restored from the entire deconvolution layers can constitute a rich discriminative feature set according to different abstraction levels. Those features are stacked to be selectively used for generating class-specific activation maps. Under the weak supervision (image-level labels), the proposed framework shows promising results on lesion segmentation in medical images (chest X-rays) and achieves state-of-the-art performance on the PASCAL VOC segmentation dataset in the same experimental condition. | Weakly-supervised semantic segmentation under label noise, such as wrong or omitted labels, is presented in @cite_21 . The authors extract superpixels from input images in order to perform superpixel-level inference, based on the assumption that objects have clear boundaries owing to the spatial coherency between pixels. However, this assumption cannot be guaranteed in medical images, since lesions have characteristics different from those of general objects in natural images. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1901229278"
],
"abstract": [
"Image semantic segmentation is the task of partitioning image into several regions based on semantic concepts. In this paper, we learn a weakly supervised semantic segmentation model from social images whose labels are not pixel-level but image-level; furthermore, these labels might be noisy. We present a joint conditional random field model leveraging various contexts to address this issue. More specifically, we extract global and local features in multiple scales by convolutional neural network and topic model. Inter-label correlations are captured by visual contextual cues and label co-occurrence statistics. The label consistency between image-level and pixel-level is finally achieved by iterative refinement. Experimental results on two real-world image datasets PASCAL VOC2007 and SIFT-Flow demonstrate that the proposed approach outperforms state-of-the-art weakly supervised methods and even achieves accuracy comparable with fully supervised methods."
]
} |
1602.04984 | 2462784004 | A weakly-supervised semantic segmentation framework with a tied deconvolutional neural network is presented. Each deconvolution layer in the framework consists of unpooling and deconvolution operations. 'Unpooling' upsamples the input feature map based on unpooling switches defined by corresponding convolution layer's pooling operation. 'Deconvolution' convolves the input unpooled features by using convolutional weights tied with the corresponding convolution layer's convolution operation. The unpooling-deconvolution combination helps to eliminate less discriminative features in a feature extraction stage, since output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw one. This results in reduction of false positives in a pixel-level inference stage. All the feature maps restored from the entire deconvolution layers can constitute a rich discriminative feature set according to different abstraction levels. Those features are stacked to be selectively used for generating class-specific activation maps. Under the weak supervision (image-level labels), the proposed framework shows promising results on lesion segmentation in medical images (chest X-rays) and achieves state-of-the-art performance on the PASCAL VOC segmentation dataset in the same experimental condition. | In @cite_0 , the authors demonstrate that knowledge is transferable between two different datasets. Knowledge learned on a dataset with pixel-level segmentation annotations can be exploited to train another network on a dataset that has only weak image-level labels. Although the segmentation annotations used for knowledge transfer do not cover the categories of the weakly-supervised target dataset, the method can be regarded as another type of semi-supervised approach in that it uses pixel-level segmentation annotations. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2949163329"
],
"abstract": [
"We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN). Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make the segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in an image using an attention model, and subsequently generates foreground segmentation for each highlighted region using decoder. Combining attention model, we show that the decoder trained with segmentation annotations in different categories can boost the performance of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-the-art weakly-supervised techniques in challenging PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset."
]
} |
1602.04868 | 2951606661 | We propose a deep feature-based face detector for mobile devices to detect user's face acquired by the front facing camera. The proposed method is able to detect faces in images containing extreme pose and illumination variations as well as partial faces. The main challenge in developing deep feature-based algorithms for mobile devices is the constrained nature of the mobile platform and the non-availability of CUDA enabled GPUs on such devices. Our implementation takes into account the special nature of the images captured by the front-facing camera of mobile devices and exploits the GPUs present in mobile devices without CUDA-based frameorks, to meet these challenges. | Cascade classifiers form an important and influential family of face detectors. Viola-Jones detector @cite_18 is a classic method, which provides realtime face detection, but works best for full, frontal, and well lit faces. Extending the work of cascade classifiers, some authors @cite_3 have trained multiple models to address pose variations. An extensive survey of such methods can be found in @cite_16 . | {
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_3"
],
"mid": [
"2137401668",
"",
"2169696215"
],
"abstract": [
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.",
"",
"Rotation invariant multiview face detection (MVFD) aims to detect faces with arbitrary rotation-in-plane (RIP) and rotation-off-plane (ROP) angles in still images or video sequences. MVFD is crucial as the first step in automatic face processing for general applications since face images are seldom upright and frontal unless they are taken cooperatively. In this paper, we propose a series of innovative methods to construct a high-performance rotation invariant multiview face detector, including the width-first-search (WFS) tree detector structure, the vector boosting algorithm for learning vector-output strong classifiers, the domain-partition-based weak learning method, the sparse feature in granular space, and the heuristic search for sparse feature selection. As a result of that, our multiview face detector achieves low computational complexity, broad detection scope, and high detection accuracy on both standard testing sets and real-life images"
]
} |
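The "Integral Image" idea underlying the Viola-Jones detector discussed in the record above can be sketched as follows. This is a minimal NumPy illustration (function names are mine, not from the detector): after one pass of cumulative sums, the sum over any rectangle costs at most four array lookups.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums along both axes: ii[r, c] holds the sum of
    img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image
    in O(1) via inclusion-exclusion on its four corners."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```

Haar-like features are then differences of such rectangle sums, which is what makes evaluating thousands of them per window fast enough for real-time detection.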
1602.04868 | 2951606661 | We propose a deep feature-based face detector for mobile devices to detect user's face acquired by the front facing camera. The proposed method is able to detect faces in images containing extreme pose and illumination variations as well as partial faces. The main challenge in developing deep feature-based algorithms for mobile devices is the constrained nature of the mobile platform and the non-availability of CUDA enabled GPUs on such devices. Our implementation takes into account the special nature of the images captured by the front-facing camera of mobile devices and exploits the GPUs present in mobile devices without CUDA-based frameworks, to meet these challenges. | Modeling the face by parts is another popular approach. Zhu @cite_26 proposed a deformable parts model that detects faces by identifying face parts and modeling the whole face as a collection of parts joined together by springs. These spring-like constraints are useful for modeling deformations; hence, this method is somewhat robust to pose and expression changes. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2047508432"
],
"abstract": [
"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com)."
]
} |
1602.04868 | 2951606661 | We propose a deep feature-based face detector for mobile devices to detect the user's face acquired by the front facing camera. The proposed method is able to detect faces in images containing extreme pose and illumination variations as well as partial faces. The main challenge in developing deep feature-based algorithms for mobile devices is the constrained nature of the mobile platform and the non-availability of CUDA enabled GPUs on such devices. Our implementation takes into account the special nature of the images captured by the front-facing camera of mobile devices and exploits the GPUs present in mobile devices without CUDA-based frameworks, to meet these challenges. | Specific to the mobile platform, Hadid @cite_5 demonstrated a local binary pattern (LBP)-based method on a Nokia N90 phone. Though it is fast, it is not a robust method and was designed for an older phone. Current phones have more powerful CPUs and, more importantly, even GPUs, which can implement DCNNs. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1971329761"
],
"abstract": [
"Computer vision applications for mobile phones are gaining increasing attention due to several practical needs resulting from the popularity of digital cameras in today's mobile phones. In this work, we consider the task of face detection and authentication in mobile phones and experimentally analyze a face authentication scheme using Haar-like features with Ad-aBoost for face and eye detection, and local binary pattern (LBP) approach for face authentication. For comparison, another approach to face detection using skin color for fast processing is also considered and implemented. Despite the limited CPU and memory capabilities of today's mobile phones, our experimental results show good face detection performance and average authentication rates of 82 for small-sized faces (40times40 pixels) and 96 for faces of 80times80 pixels. The system is running at 2 frames per second for images of 320times240 pixels. The obtained results are very promising and assess the feasibility of face authentication in mobile phones. Directions for further enhancing the performance of the system are also discussed."
]
} |
1602.04805 | 2278642510 | Performing exact posterior inference in complex generative models is often difficult or impossible due to an expensive to evaluate or intractable likelihood function. Approximate Bayesian computation (ABC) is an inference framework that constructs an approximation to the true likelihood based on the similarity between the observed and simulated data as measured by a predefined set of summary statistics. Although the choice of appropriate problem-specific summary statistics crucially influences the quality of the likelihood approximation and hence also the quality of the posterior sample in ABC, there are only few principled general-purpose approaches to the selection or construction of such summary statistics. In this paper, we develop a novel framework for this task using kernel-based distribution regression. We model the functional relationship between data distributions and the optimal choice (with respect to a loss function) of summary statistics using kernel-based distribution regression. We show that our approach can be implemented in a computationally and statistically efficient way using the random Fourier features framework for large-scale kernel learning. In addition to that, our framework shows superior performance when compared to related methods on toy and real-world problems. | A second category consists of methods that construct summary statistics from auxiliary models. An example of this approach is indirect score ABC @cite_1 . Here, a score vector that is calculated from the partial derivatives of the auxiliary likelihood plays the role of the summary statistics. Motivated by the fact that the score of the observed data is zero when the auxiliary model parameters are set by maximum-likelihood estimation (MLE), the method searches the parameter space for values whose simulated data produce a score close to zero under the same auxiliary model parameters. 
Thus, the discrepancy measure between the observed and simulated data is defined in terms of scores of the simulated data at the parameter values estimated with MLE from the observed data. A detailed review of this class of approaches can be found in . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2109606780"
],
"abstract": [
"Approximate Bayesian computation (ABC) techniques permit inferences in complex demographic models, but are computationally inefficient. A Markov chain Monte Carlo (MCMC) approach has been proposed ( 2003), but it suffers from computational problems and poor mixing. We propose several methodological developments to overcome the shortcomings of this MCMC approach and hence realize substantial computational advances over standard ABC. The principal idea is to relax the tolerance within MCMC to permit good mixing, but retain a good approximation to the posterior by a combination of subsampling the output and regression adjustment. We also propose to use a partial least-squares (PLS) transformation to choose informative statistics. The accuracy of our approach is examined in the case of the divergence of two populations with and without migration. In that case, our ABC–MCMC approach needs considerably lower computation time to reach the same accuracy than conventional ABC. We then apply our method to a more complex case with the estimation of divergence times and migration rates between three African populations."
]
} |
1602.04805 | 2278642510 | Performing exact posterior inference in complex generative models is often difficult or impossible due to an expensive to evaluate or intractable likelihood function. Approximate Bayesian computation (ABC) is an inference framework that constructs an approximation to the true likelihood based on the similarity between the observed and simulated data as measured by a predefined set of summary statistics. Although the choice of appropriate problem-specific summary statistics crucially influences the quality of the likelihood approximation and hence also the quality of the posterior sample in ABC, there are only few principled general-purpose approaches to the selection or construction of such summary statistics. In this paper, we develop a novel framework for this task using kernel-based distribution regression. We model the functional relationship between data distributions and the optimal choice (with respect to a loss function) of summary statistics using kernel-based distribution regression. We show that our approach can be implemented in a computationally and statistically efficient way using the random Fourier features framework for large-scale kernel learning. In addition to that, our framework shows superior performance when compared to related methods on toy and real-world problems. | A third, and last, category is comprised of methods that construct summary statistics using regression from either the full dataset or a set of candidate statistics, e.g. . provides a general overview of such approaches, while we discuss the aforementioned method in more detail. The semi-automatic ABC (SA-ABC) method @cite_24 focuses on deriving summary statistics that will allow inference about certain parameters of interest to be as accurate as possible. focus on the construction of summary statistics that allow inference to be accurate with respect to a specific loss function. 
They show that the true posterior mean of the model parameters is the optimal choice of summary statistics under the quadratic loss function. As this quantity cannot be analytically calculated, they estimate it by fitting a regression model from simulated data. In particular, given simulated data @math , a linear model @math is fitted; here, @math is taken to be either the identity function or a power function. The resulting estimates @math are used as the summary statistics in ABC. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2121794292"
],
"abstract": [
"The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θanc = 4Neu) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ^anc≈1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10−4 and 3.5 × 10−3 per locus per generation. The proportion of males with access to matings is estimated as ω^≈0.21, which is in good agreement with recent independent estimates."
]
} |
1602.04568 | 2952307256 | This paper defines the (first-order) conflict resolution calculus: an extension of the resolution calculus inspired by techniques used in modern SAT-solvers. The resolution inference is restricted to (first-order) unit-propagation and the calculus is extended with a mechanism for assuming decision literals and a new inference rule for clause learning, which is a first-order generalization of the propositional conflict-driven clause learning (CDCL) procedure. The calculus is sound (because it can be simulated by natural deduction) and refutationally complete (because it can simulate resolution), and these facts are proven in detail here. | The variety of approaches attempting to generalize CDCL to first-order logic shows that this is not a trivial task. The most pragmatically successful approaches so far have harnessed the power of SAT-solvers in first-order (or even higher-order) logic not by generalizing their underlying procedures but simply by employing them as black boxes inside a theorem prover @cite_4 @cite_13 @cite_2 . | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_2"
],
"mid": [
"2221495694",
"2785616602",
"79561349"
],
"abstract": [
"This paper describes a new architecture for first-order resolution and superposition theorem provers called AVATAR (Advanced Vampire Architecture for Theories and Resolution). Its original motivation comes from a problem well-studied in the past dealing with problems having clauses containing propositional variables and other clauses that can be split into components with disjoint sets of variables. Such clauses are common for problems coming from applications, for example in program verification and program analysis, where many ground literals occur in the problems and even more are generated during the proof-search. This problem was previously studied by adding various versions of splitting. The addition of splitting resulted in some improvements in performance of theorem provers. However, even with various versions of splitting, the performance of superposition theorem provers is nowhere near SMT solvers on variable-free problems or SAT solvers on propositional problems. This paper describes a new architecture for superposition theorem provers, where a superposition theorem prover is tightly integrated with a SAT or an SMT solver. Its implementation in our theorem prover Vampire resulted in drastic improvements over all previous implementations of splitting. Over four hundred TPTP problems previously unsolvable by any modern prover, including Vampire itself, have been proved, most of them with short runtimes. Nearly all problems solved with one of 481 variants of splitting previously implemented in Vampire can also be solved with AVATAR. We also believe that AVATAR is an important step towards efficient reasoning with both quantifiers and theories, which is one of the key areas in modern applications of theorem provers in program analysis and verification.",
"This invention provides for a relay apparatus for use with optical fibers. A first set of optical fiber ends is mechanically secured in a suitably shaped retainer. A second set of optical fiber ends is secured in a second retainer appropriately shaped to move into one of a plurality of mechanically stable positions with respect to the first retainer when biased against the first retainer. These mechanically stable positions bring members of the first and second set of optical fibers into optical alignment. A switching mechanism is provided for moving the first retainer across the second retainer thereby making and breaking optical connections between the first and second set of optical fiber ends. In one embodiment, the retainers are cooperatively shaped by hemispheres, which define a set of mechanically stable alignment positions.",
"Satallax is an automatic higher-order theorem prover that generates propositional clauses encoding (ground) tableau rules and uses MiniSat to test for unsatisfiability. We describe the implementation, focusing on flags that control search and examples that illustrate how the search proceeds."
]
} |
1602.04364 | 2953092061 | Speaker identification refers to the task of localizing the face of a person who has the same identity as the ongoing voice in a video. This task not only requires collective perception over both visual and auditory signals, the robustness to handle severe quality degradations and unconstrained content variations are also indispensable. In this paper, we describe a novel multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies both visual and auditory modalities from the beginning of each sequence input. The key idea is to extend the conventional LSTM by not only sharing weights across time steps, but also sharing weights across modalities. We show that modeling the temporal dependency across face and voice can significantly improve the robustness to content quality degradations and variations. We also found that our multimodal LSTM is robustness to distractors, namely the non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory dataset and showed that our system outperforms the state-of-the-art systems in speaker identification with lower false alarm rate and higher recognition accuracy. | The revived interest on RNN is mainly attributed to its recent success in many practical applications such as language modeling @cite_6 , speech recognition @cite_23 @cite_13 , machine translation @cite_24 @cite_2 , conversation modeling @cite_7 to name a few. Among many variants of RNNs, LSTM is arguably one of the most widely used model. LSTM is a type of RNN in which the memory cells are carefully designed to store useful information to model long term dependency in sequential data @cite_4 . Other than supervised learning, LSTM is also used in recent work in image generation @cite_0 @cite_17 , demonstrating its capability of modeling statistical dependencies of imagery data. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2159640018",
"",
"2949888546",
"2953250761",
"2062826588",
"2950344723",
"2950689855",
"1850742715"
],
"abstract": [
"",
"We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoder-decoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. Empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75 of the input text, outperforming state-of-the-arts in the same setting, including retrieval-based and SMT-based models.",
"",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.",
"Standard Mel frequency cepstrum coefficient (MFCC) computation technique utilizes discrete cosine transform (DCT) for decorrelating log energies of filter bank output. The use of DCT is reasonable here as the covariance matrix of Mel filter bank log energy (MFLE) can be compared with that of highly correlated Markov-I process. This full-band based MFCC computation technique where each of the filter bank output has contribution to all coefficients, has two main disadvantages. First, the covariance matrix of the log energies does not exactly follow Markov-I property. Second, full-band based MFCC feature gets severely degraded when speech signal is corrupted with narrow-band channel noise, though few filter bank outputs may remain unaffected. In this work, we have studied a class of linear transformation techniques based on block wise transformation of MFLE which effectively decorrelate the filter bank log energies and also capture speech information in an efficient manner. A thorough study has been carried out on the block based transformation approach by investigating a new partitioning technique that highlights associated advantages. This article also reports a novel feature extraction scheme which captures complementary information to wide band information; that otherwise remains undetected by standard MFCC and proposed block transform (BT) techniques. The proposed features are evaluated on NIST SRE databases using Gaussian mixture model-universal background model (GMM-UBM) based speaker recognition system. We have obtained significant performance improvement over baseline features for both matched and mismatched condition, also for standard and narrow-band noises. The proposed method achieves significant performance improvement in presence of narrow-band noise when clubbed with missing feature theory based score computation scheme.",
"Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method that allows us to use a very large target vocabulary without increasing training complexity, based on importance sampling. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English->German translation and almost as high performance as state-of-the-art English->French translation system.",
"Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."
]
} |
1602.04504 | 2951574323 | After decades of study, automatic face detection and recognition systems are now accurate and widespread. Naturally, this means users who wish to avoid automatic recognition are becoming less able to do so. Where do we stand in this cat-and-mouse race? We currently live in a society where everyone carries a camera in their pocket. Many people willfully upload most or all of the pictures they take to social networks which invest heavily in automatic face recognition systems. In this setting, is it still possible for privacy-conscientious users to avoid automatic face detection and recognition? If so, how? Must evasion techniques be obvious to be effective, or are there still simple measures that users can use to protect themselves? In this work, we find ways to evade face detection on Facebook, a representative example of a popular social network that uses automatic face detection to enhance their service. We challenge widely-held beliefs about evading face detection: do our old techniques such as blurring the face region or wearing "privacy glasses" still work? We show that in general, state-of-the-art detectors can often find faces even if the subject wears occluding clothing or even if the uploader damages the photo to prevent faces from being detected. | Both of these approaches are practical and attractive because they allow the user to take steps to protect their privacy, without having to remind their friends to alter their photos. However, there are two principal drawbacks with these kinds of approaches: first, approaches based on pushing the user's appearance away from the average face only make the user iconic. No one wants to be ``that person with the funny glasses makeup.'' Distinctive faces are easier for people to remember and recognize @cite_15 @cite_7 , which could work against the user's wishes to remain anonymous in the physical world. Second, neither countermeasure is always effective against Facebook's face detector. Fig. parts (E) and (F) show screenshots of Facebook's image upload process. Though they may reduce detection rates, both Privacy Visor and CV Dazzle are ineffective for the example images we chose: Facebook detects the face and asks the user to tag the identity. Granted, Facebook could not find the face in either method, but only a few good detections are necessary to begin building a recognition model. | {
"cite_N": [
"@cite_15",
"@cite_7"
],
"mid": [
"2063716390",
"2094090158"
],
"abstract": [
"This study investigated the independent and combined effects of attractiveness and distinctiveness on face recognition. In a preliminary study, subjects rated the distinctiveness of 90 standardized facial photographs. These ratings and previously obtained ratings of attractiveness were used to select 12 target faces for a standard face recognition task. The results confirmed that more distinctive faces are remembered better. Attractiveness was a poor predictor of recognition, however, especially when variation in distinctiveness was controlled. These results indicate that a new perspective on the role of attractiveness in face recognition is needed, and they also support arguments that facial distinctiveness is a fundamental variable in recognition performance.",
"In this study we examine the relationship between objective aspects of facial appearance and facial “distinctiveness”. Specifically, we examine whether the extent to which a face deviates from “average” correlates with rated distinctiveness and measures of memorability. We find that, provided the faces are rated with hair concealed, reasonable correlations can be achieved between their physical deviation and their rated distinctiveness. More modest correlations are obtained between physical deviation and the extent to which faces are remembered, either correctly or falsely, after previous study. Furthermore, memory ratings obtained to “target” faces when they have been previously seen (i.e. “hits”) do not show the expected negative correlation with the scores obtained to the same faces when acting as distractors (i.e. “false positives”), though each correlates with rated distinctiveness. This confirms the theory of Vokey and Read (1992) that the typicality distinctiveness dimension can be broken down into..."
]
} |
1602.04504 | 2951574323 | After decades of study, automatic face detection and recognition systems are now accurate and widespread. Naturally, this means users who wish to avoid automatic recognition are becoming less able to do so. Where do we stand in this cat-and-mouse race? We currently live in a society where everyone carries a camera in their pocket. Many people willfully upload most or all of the pictures they take to social networks which invest heavily in automatic face recognition systems. In this setting, is it still possible for privacy-conscientious users to avoid automatic face detection and recognition? If so, how? Must evasion techniques be obvious to be effective, or are there still simple measures that users can use to protect themselves? In this work, we find ways to evade face detection on Facebook, a representative example of a popular social network that uses automatic face detection to enhance their service. We challenge widely-held beliefs about evading face detection: do our old techniques such as blurring the face region or wearing "privacy glasses" still work? We show that in general, state-of-the-art detectors can often find faces even if the subject wears occluding clothing or even if the uploader damages the photo to prevent faces from being detected. | Many studies also investigate how image appearance affects face detection, albeit usually in a ``How can we make face detection better?'' sense. For example, Parris et al. @cite_3 organized a ``Face and Eye Detection on Hard Datasets'' challenge at IJCB 2011. Over a dozen commercial and academic contestants submitted face detection entries. The results reveal that state-of-the-art face detectors generally have trouble recognizing faces that are severely out-of-focus or small. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2063803778"
],
"abstract": [
"Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups and companies focusing on long-range face detection outperform leading commercial applications."
]
} |
1602.04504 | 2951574323 | After decades of study, automatic face detection and recognition systems are now accurate and widespread. Naturally, this means users who wish to avoid automatic recognition are becoming less able to do so. Where do we stand in this cat-and-mouse race? We currently live in a society where everyone carries a camera in their pocket. Many people willfully upload most or all of the pictures they take to social networks which invest heavily in automatic face recognition systems. In this setting, is it still possible for privacy-conscientious users to avoid automatic face detection and recognition? If so, how? Must evasion techniques be obvious to be effective, or are there still simple measures that users can use to protect themselves? In this work, we find ways to evade face detection on Facebook, a representative example of a popular social network that uses automatic face detection to enhance their service. We challenge widely-held beliefs about evading face detection: do our old techniques such as blurring the face region or wearing "privacy glasses" still work? We show that in general, state-of-the-art detectors can often find faces even if the subject wears occluding clothing or even if the uploader damages the photo to prevent faces from being detected. | Other studies in this area include Scheirer et al.'s ``Face in the Branches'' detection task @cite_4 . Scheirer varies the amount of occlusion of face images and compares detection accuracy between humans, Google Picasa, and Face.com. There is still a large gap in performance---human workers achieve 60 | {
"cite_N": [
"@cite_4"
],
"mid": [
"2101123607"
],
"abstract": [
"For many problems in computer vision, human learners are considerably better than machines. Humans possess highly accurate internal recognition and learning mechanisms that are not yet understood, and they frequently have access to more extensive training data through a lifetime of unbiased experience with the visual world. We propose to use visual psychophysics to directly leverage the abilities of human subjects to build better machine learning systems. First, we use an advanced online psychometric testing platform to make new kinds of annotation data available for learning. Second, we develop a technique for harnessing these new kinds of information—“perceptual annotations”—for support vector machines. A key intuition for this approach is that while it may remain infeasible to dramatically increase the amount of data and high-quality labels available for the training of a given system, measuring the exemplar-by-exemplar difficulty and pattern of errors of human annotators can provide important information for regularizing the solution of the system at hand. A case study for the problem face detection demonstrates that this approach yields state-of-the-art results on the challenging FDDB data set."
]
} |
1602.04493 | 2283678503 | In a Public Safety (PS) situation, agents may require critical and personally identifiable information. Therefore, not only does context and location-aware information need to be available, but also the privacy of such information should be preserved. Existing solutions do not address such a problem in a PS environment. This paper proposes a framework in which anonymized Personal Information (PI) is accessible to authorized public safety agents under a PS circumstance. In particular, we propose a secure data storage structure along with privacy-preserving mobile search framework, suitable for Public Safety Networks (PSNs). As a result, availability and privacy of PI are achieved simultaneously. However, the design of such a framework encounters substantial challenges, including scalability, reliability of the data, computation and communication and storage efficiency, etc. We leverage Secure Indexing (SI) methods and modify Bloom Filters (BFs) to create a secure data storage structure to store encrypted meta-data. As a result, our construction enables secure and privacy-preserving multi-keyword search capability. In addition, our system scales very well, maintains availability of data, imposes minimum delay, and has affordable storage overhead. We provide extensive security analysis, simulation studies, and performance comparison with the state-of-the-art solutions to demonstrate the efficiency and effectiveness of the proposed approach. To the best of our knowledge, this work is the first to address such issues in the context of PSNs. | Surveying the literature, the works proposed for centralized and mobile healthcare and emergency handling, and for search over encrypted data, are the most germane to ours. In such research fields, data availability is achieved in two ways: Centralized Availability ( @math ) and Decentralized Availability ( @math ). 
In @math , Data Owners (DOs) outsource the encrypted information to one/many cloud server(s) to which PSAs should send information retrieval requests, while in @math , in an emergency, DOs broadcast their encrypted PI using smart phones or Personal Digital Assistants (PDAs) to the users in their local proximity or to Health Service Providers (HSPs) to ask for help. The PDA monitors and collects health information using the sensors attached to the patient's body. To achieve data privacy, in addition to data confidentiality, Direct Authorization (DA) or Indirect Authorization (IA) algorithms are utilized. DA methods are usually used in private domains, which comprise family, personal physician, friends, and neighbours, while IA is applied in public domains that include researchers, healthcare personnel, other doctors, and so forth @cite_6 . | {
"cite_N": [
"@cite_6"
],
"mid": [
"2165589425"
],
"abstract": [
"In cloud computing, clients usually outsource their data to the cloud storage servers to reduce the management costs. While those data may contain sensitive personal information, the cloud servers cannot be fully trusted in protecting them. Encryption is a promising way to protect the confidentiality of the outsourced data, but it also introduces much difficulty to performing effective searches over encrypted information. Most existing works do not support efficient searches with complex query conditions, and care needs to be taken when using them because of the potential privacy leakages about the data owners to the data users or the cloud server. In this paper, using on line Personal Health Record (PHR) as a case study, we first show the necessity of search capability authorization that reduces the privacy exposure resulting from the search results, and establish a scalable framework for Authorized Private Keyword Search (APKS) over encrypted cloud data. We then propose two novel solutions for APKS based on a recent cryptographic primitive, Hierarchical Predicate Encryption (HPE). Our solutions enable efficient multi-dimensional keyword searches with range query, allow delegation and revocation of search capabilities. Moreover, we enhance the query privacy which hides users' query keywords against the server. We implement our scheme on a modern workstation, and experimental results demonstrate its suitability for practical usage."
]
} |
1602.04493 | 2283678503 | In a Public Safety (PS) situation, agents may require critical and personally identifiable information. Therefore, not only does context and location-aware information need to be available, but also the privacy of such information should be preserved. Existing solutions do not address such a problem in a PS environment. This paper proposes a framework in which anonymized Personal Information (PI) is accessible to authorized public safety agents under a PS circumstance. In particular, we propose a secure data storage structure along with privacy-preserving mobile search framework, suitable for Public Safety Networks (PSNs). As a result, availability and privacy of PI are achieved simultaneously. However, the design of such a framework encounters substantial challenges, including scalability, reliability of the data, computation and communication and storage efficiency, etc. We leverage Secure Indexing (SI) methods and modify Bloom Filters (BFs) to create a secure data storage structure to store encrypted meta-data. As a result, our construction enables secure and privacy-preserving multi-keyword search capability. In addition, our system scales very well, maintains availability of data, imposes minimum delay, and has affordable storage overhead. We provide extensive security analysis, simulation studies, and performance comparison with the state-of-the-art solutions to demonstrate the efficiency and effectiveness of the proposed approach. To the best of our knowledge, this work is the first to address such issues in the context of PSNs. | Tong et al. @cite_22 propose that DOs delegate the access authorization to a private cloud. This scheme enhances Searchable Symmetric Encryption (SSE) using pseudo-random number generators to avoid linkability of file identities. SSE uses linked lists in which file identities containing similar keywords are linked together in a secure way. 
The algorithm imposes minimum search delay since it does not need to search over the entire database to find the result. However, its efficiency drops in dynamic situations in which files are added/removed to/from the system frequently. In addition, the scheme is not able to perform multi-keyword search and the private cloud learns the keywords for which a user would like to search the database. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2031496381"
],
"abstract": [
"Motivated by the privacy issues, curbing the adoption of electronic healthcare systems and the wild success of cloud service models, we propose to build privacy into mobile healthcare systems with the help of the private cloud. Our system offers salient features including efficient key management, privacy-preserving data storage, and retrieval, especially for retrieval at emergencies, and auditability for misusing health data. Specifically, we propose to integrate key management from pseudorandom number generator for unlinkability, a secure indexing method for privacy-preserving keyword search which hides both search and access patterns based on redundancy, and integrate the concept of attribute-based encryption with threshold signing for providing role-based access control with auditability to prevent potential misbehavior, in both normal and emergency cases."
]
} |
1602.04493 | 2283678503 | In a Public Safety (PS) situation, agents may require critical and personally identifiable information. Therefore, not only does context and location-aware information need to be available, but also the privacy of such information should be preserved. Existing solutions do not address such a problem in a PS environment. This paper proposes a framework in which anonymized Personal Information (PI) is accessible to authorized public safety agents under a PS circumstance. In particular, we propose a secure data storage structure along with privacy-preserving mobile search framework, suitable for Public Safety Networks (PSNs). As a result, availability and privacy of PI are achieved simultaneously. However, the design of such a framework encounters substantial challenges, including scalability, reliability of the data, computation and communication and storage efficiency, etc. We leverage Secure Indexing (SI) methods and modify Bloom Filters (BFs) to create a secure data storage structure to store encrypted meta-data. As a result, our construction enables secure and privacy-preserving multi-keyword search capability. In addition, our system scales very well, maintains availability of data, imposes minimum delay, and has affordable storage overhead. We provide extensive security analysis, simulation studies, and performance comparison with the state-of-the-art solutions to demonstrate the efficiency and effectiveness of the proposed approach. To the best of our knowledge, this work is the first to address such issues in the context of PSNs. | The work in @cite_18 uses the Public-key Encryption with Keyword Search (PEKS) algorithm to preserve keyword privacy. With PEKS, a trapdoor is computed for a keyword and upon search, it is compared against the entire database to find the results. 
However, the scheme is not efficient: first, the entire database must be searched to retrieve the relevant information, and second, it is computationally expensive because PEKS employs pairing-based cryptography (PBC). To tackle the latter, @cite_2 @cite_7 proposed outsourcing the heavy PBC computations to a proxy server. The approach converts a ciphertext in such a way that the decryption process is more lightweight on the user side. Despite this improvement, in a PS environment the number of data outsourcing requests may be quite large because of the large amount of information, which increases the delay. Furthermore, in such situations the network infrastructure might be down, which may result in a lack of access to the proxy servers. Therefore, the applicability of such techniques is questionable in this context. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_2"
],
"mid": [
"2102162364",
"",
"1768601545"
],
"abstract": [
"Cloud storage services enable users to remotely access data in a cloud anytime and anywhere, using any device, in a pay-as-you-go manner. Moving data into a cloud offers great convenience to users since they do not have to care about the large capital investment in both the deployment and management of the hardware infrastructures. However, allowing a cloud service provider (CSP), whose purpose is mainly for making a profit, to take the custody of sensitive data, raises underlying security and privacy issues. To keep user data confidential against an untrusted CSP, a natural way is to apply cryptographic approaches, by disclosing the data decryption key only to authorized users. However, when a user wants to retrieve files containing certain keywords using a thin client, the adopted encryption system should not only support keyword searching over encrypted data, but also provide high performance. In this paper, we investigate the characteristics of cloud storage services and propose a secure and privacy preserving keyword searching (SPKS) scheme, which allows the CSP to participate in the decipherment, and to return only files containing certain keywords specified by the users, so as to reduce both the computational and communication overhead in decryption for users, on the condition of preserving user data privacy and user querying privacy. Performance analysis shows that the SPKS scheme is applicable to a cloud environment.",
"",
"Current security mechanisms are not suitable for organisations that outsource their data management to untrusted servers. Encrypting and decrypting sensitive data at the client side is the normal approach in this situation but has high communication and computation overheads if only a subset of the data is required, for example, selecting records in a database table based on a keyword search. New cryptographic schemes have been proposed that support encrypted queries over encrypted data. But they all depend on a single set of secret keys, which implies single user access or sharing keys among multiple users, with key revocation requiring costly data re-encryption. In this paper, we propose an encryption scheme where each authorised user in the system has his own keys to encrypt and decrypt data. The scheme supports keyword search which enables the server to return only the encrypted data that satisfies an encrypted query without decrypting it. We provide a concrete construction of the scheme and give formal proofs of its security. We also report on the results of our implementation."
]
} |
1602.04493 | 2283678503 | In a Public Safety (PS) situation, agents may require critical and personally identifiable information. Therefore, not only does context and location-aware information need to be available, but also the privacy of such information should be preserved. Existing solutions do not address such a problem in a PS environment. This paper proposes a framework in which anonymized Personal Information (PI) is accessible to authorized public safety agents under a PS circumstance. In particular, we propose a secure data storage structure along with privacy-preserving mobile search framework, suitable for Public Safety Networks (PSNs). As a result, availability and privacy of PI are achieved simultaneously. However, the design of such a framework encounters substantial challenges, including scalability, reliability of the data, computation and communication and storage efficiency, etc. We leverage Secure Indexing (SI) methods and modify Bloom Filters (BFs) to create a secure data storage structure to store encrypted meta-data. As a result, our construction enables secure and privacy-preserving multi-keyword search capability. In addition, our system scales very well, maintains availability of data, imposes minimum delay, and has affordable storage overhead. We provide extensive security analysis, simulation studies, and performance comparison with the state-of-the-art solutions to demonstrate the efficiency and effectiveness of the proposed approach. To the best of our knowledge, this work is the first to address such issues in the context of PSNs. | To achieve IA, a DO can also enforce access authorization into the ciphertext using functional encryption (for example, Attribute-based Encryption (ABE) or Predicate Encryption (PE)). ABE enables a DO-centric authorization model. In @cite_9 , DOs send data to an HSP and delegate access authorization to that entity. 
The HSP first classifies the data using the attribute set chosen by the DO and then uses ABE to enforce the DO's access policy for the users. The works @cite_12 @cite_21 use ABE and suggest forming an emergency version of the encrypted data, in which the owner uses only the "emergency" attribute to produce an emergency ciphertext. The DO then delegates emergency keys to a trusted authority, and in an emergency, healthcare personnel can retrieve the emergency key to decrypt the data. The authors in @cite_6 propose authorized multi-keyword search using predicate encryption; the search delay is proportional to the size of the database, and the search involves pairing computations. As the preceding schemes are all based on PBC, they are suitable only for delay-tolerant situations; thus, in large-scale PS environments with delay constraints, these schemes lose their functionality. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_12"
],
"mid": [
"1977411270",
"2118875948",
"2165589425",
"1983471723"
],
"abstract": [
"In this paper, we propose an efficient and secure patient-centric access control (PEACE) scheme for the emerging electronic health care (eHealth) system. In order to assure the privacy of patient personal health information (PHI), we define different access privileges to data requesters according to their roles, and then assign different attribute sets to the data requesters. By using these different sets of attribute, we construct the patient-centric access policies of patient PHI. The PEACE scheme can guarantee PHI integrity and confidentiality by adopting digital signature and pseudo-identity techniques. It encompasses identity based cryptography to aggregate remote patient PHI securely. Extensive security and performance analyses demonstrate that the PEACE scheme is able to achieve desired security requirements at the cost of an acceptable communication delay.",
"Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation, have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains that greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE. Our scheme also enables dynamic modification of access policies or file attributes, supports efficient on-demand user attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.",
"In cloud computing, clients usually outsource their data to the cloud storage servers to reduce the management costs. While those data may contain sensitive personal information, the cloud servers cannot be fully trusted in protecting them. Encryption is a promising way to protect the confidentiality of the outsourced data, but it also introduces much difficulty to performing effective searches over encrypted information. Most existing works do not support efficient searches with complex query conditions, and care needs to be taken when using them because of the potential privacy leakages about the data owners to the data users or the cloud server. In this paper, using on line Personal Health Record (PHR) as a case study, we first show the necessity of search capability authorization that reduces the privacy exposure resulting from the search results, and establish a scalable framework for Authorized Private Keyword Search (APKS) over encrypted cloud data. We then propose two novel solutions for APKS based on a recent cryptographic primitive, Hierarchical Predicate Encryption (HPE). Our solutions enable efficient multi-dimensional keyword searches with range query, allow delegation and revocation of search capabilities. Moreover, we enhance the query privacy which hides users' query keywords against the server. We implement our scheme on a modern workstation, and experimental results demonstrate its suitability for practical usage.",
"Distributed m-healthcare cloud computing system significantly facilitates efficient patient treatment for medical consultation by sharing personal health information among healthcare providers. However, it brings about the challenge of keeping both the data confidentiality and patients’ identity privacy simultaneously. Many existing access control and anonymous authentication schemes cannot be straightforwardly exploited. To solve the problem, in this paper, a novel authorized accessible privacy model (AAPM) is established. Patients can authorize physicians by setting an access tree supporting flexible threshold predicates. Then, based on it, by devising a new technique of attribute-based designated verifier signature, a patient self-controllable multi-level privacy-preserving cooperative authentication scheme (PSMPA) realizing three levels of security and privacy requirement in distributed m-healthcare cloud computing system is proposed. The directly authorized physicians, the indirectly authorized physicians and the unauthorized persons in medical consultation can respectively decipher the personal health information and or verify patients’ identities by satisfying the access tree with their own attribute sets. Finally, the formal security proof and simulation results illustrate our scheme can resist various kinds of attacks and far outperforms the previous ones in terms of computational, communication and storage overhead."
]
} |
1602.04422 | 2274158024 | In this work, we study the challenging problem of identifying the irregular status of objects from images in an "open world" setting, that is, distinguishing the irregular status of an object category from its regular status as well as objects from other categories in the absence of "irregular object" training data. To address this problem, we propose a novel approach by inspecting the distribution of the detection scores at multiple image regions based on the detector trained from the "regular object" and "other objects". The key observation motivating our approach is that for "regular object" images as well as "other objects" images, the region-level scores follow their own essential patterns in terms of both the score values and the spatial distributions while the detection scores obtained from an "irregular object" image tend to break these patterns. To model this distribution, we propose to use Gaussian Processes (GP) to construct two separate generative models for the case of the "regular object" and the "other objects". More specifically, we design a new covariance function to simultaneously model the detection score at a single region and the score dependencies at multiple regions. We finally demonstrate the superior performance of our method on a large dataset newly proposed in this paper. | There exists a variety of work focusing on irregular image and/or video detection. While some approaches attempt to detect irregular image parts or video segments given a regular database @cite_2 @cite_21 @cite_6 @cite_4 , other efforts are dedicated to addressing specific types of irregularities @cite_9 @cite_3 , such as out-of-context objects, by building corresponding models. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_2"
],
"mid": [
"2132670931",
"145548212",
"2026418062",
"2124658620",
"",
"2021659075"
],
"abstract": [
"We present a novel representation and method for detecting and explaining anomalous activities in a video stream. Drawing from natural language processing, we introduce a representation of activities as bags of event n-grams, where we analyze the global structural information of activities using their local event statistics. We demonstrate how maximal cliques in an undirected edge-weighted graph of activities, can be used in an unsupervised manner, to discover regular sub-classes of an activity class. Based on these discovered sub-classes, we formulate a definition of anomalous activities and present a way to detect them. Finally, we characterize each discovered sub-class in terms of its \"most representative member\" and present an information-theoretic method to explain the detected anomalies in a human-interpretable form.",
"Contextual modeling is a critical issue in scene understanding. Object detection accuracy can be improved by exploiting tendencies that are common among object configurations. However, conventional contextual models only exploit the tendencies of normal objects; abnormal objects that do not follow the same tendencies are hard to detect through contextual model. This paper proposes a novel generative model that detects abnormal objects by meeting four proposed criteria of success. This model generates normal as well as abnormal objects, each following their respective tendencies. Moreover, this generation is controlled by a latent scene variable. All latent variables of the proposed model are predicted through optimization via population-based Markov Chain Monte Carlo, which has a relatively short convergence time. We present a new abnormal dataset classified into three categories to thoroughly measure the accuracy of the proposed model for each category; the results demonstrate the superiority of our proposed approach over existing methods.",
"We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term \"irregular\" depends on the context in which the \"regular\" or \"valid\" are defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a new observed image region or a new video segment (\"the query\") using chunks of data (\"pieces of puzzle\") extracted from previous visual examples (\"the database\"). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, for detecting suspicious behaviors and for automatic visual inspection for quality assurance.",
"We present an unsupervised technique for detecting unusual activity in a large video set using many simple features. No complex activity models and no supervised feature selections are used. We divide the video into equal length segments and classify the extracted features into prototypes, from which a prototype-segment co-occurrence matrix is computed. Motivated by a similar problem in document-keyword analysis, we seek a correspondence relationship between prototypes and video segments which satisfies the transitive closure constraint. We show that an important sub-family of correspondence functions can be reduced to co-embedding prototypes and segments to N-D Euclidean space. We prove that an efficient, globally optimal algorithm exists for the co-embedding problem. Experiments on various real-life videos have validated our approach.",
"",
"Real-time unusual event detection in video stream has been a difficult challenge due to the lack of sufficient training information, volatility of the definitions for both normality and abnormality, time constraints, and statistical limitation of the fitness of any parametric models. We propose a fully unsupervised dynamic sparse coding approach for detecting unusual events in videos based on online sparse re-constructibility of query signals from an atomically learned event dictionary, which forms a sparse coding bases. Based on an intuition that usual events in a video are more likely to be reconstructible from an event dictionary, whereas unusual events are not, our algorithm employs a principled convex optimization formulation that allows both a sparse reconstruction code, and an online dictionary to be jointly inferred and updated. Our algorithm is completely un-supervised, making no prior assumptions of what unusual events may look like and the settings of the cameras. The fact that the bases dictionary is updated in an online fashion as the algorithm observes more data, avoids any issues with concept drift. Experimental results on hours of real world surveillance video and several Youtube videos show that the proposed algorithm could reliably locate the unusual events in the video sequence, outperforming the current state-of-the-art methods."
]
} |
1602.04422 | 2274158024 | In this work, we study the challenging problem of identifying the irregular status of objects from images in an "open world" setting, that is, distinguishing the irregular status of an object category from its regular status as well as objects from other categories in the absence of "irregular object" training data. To address this problem, we propose a novel approach by inspecting the distribution of the detection scores at multiple image regions based on the detector trained from the "regular object" and "other objects". The key observation motivating our approach is that for "regular object" images as well as "other objects" images, the region-level scores follow their own essential patterns in terms of both the score values and the spatial distributions while the detection scores obtained from an "irregular object" image tend to break these patterns. To model this distribution, we propose to use Gaussian Processes (GP) to construct two separate generative models for the case of the "regular object" and the "other objects". More specifically, we design a new covariance function to simultaneously model the detection score at a single region and the score dependencies at multiple regions. We finally demonstrate the superior performance of our method on a large dataset newly proposed in this paper. | Standard approaches for irregularity detection are based on the idea of evaluating the dissimilarity from the regular data. The authors of @cite_6 @cite_4 formulate the problem of unusual activity detection in video as a clustering problem, where unusual activities are identified as the clusters with low inter-cluster similarity. The work @cite_21 detects irregularities in an image or video by checking whether the image regions or video segments can be composed using large continuous chunks of data from the regular database. 
Despite its good performance in irregularity detection, this method suffers severely from a scalability issue, because it must traverse the database for any new query. Sparse coding @cite_8 is employed in @cite_2 for unusual event detection. This work is based on the assumption that unusual events cannot be well reconstructed by a set of bases learned from usual events. It is claimed in @cite_2 that this approach has an advantage over previous ones in that it is built upon a rigorous statistical principle. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_2"
],
"mid": [
"2132670931",
"2113606819",
"2026418062",
"2124658620",
"2021659075"
],
"abstract": [
"We present a novel representation and method for detecting and explaining anomalous activities in a video stream. Drawing from natural language processing, we introduce a representation of activities as bags of event n-grams, where we analyze the global structural information of activities using their local event statistics. We demonstrate how maximal cliques in an undirected edge-weighted graph of activities, can be used in an unsupervised manner, to discover regular sub-classes of an activity class. Based on these discovered sub-classes, we formulate a definition of anomalous activities and present a way to detect them. Finally, we characterize each discovered sub-class in terms of its \"most representative member\" and present an information-theoretic method to explain the detected anomalies in a human-interpretable form.",
"Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.",
"We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences, or identifying salient patterns in images. The term \"irregular\" depends on the context in which the \"regular\" or \"valid\" are defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a new observed image region or a new video segment (\"the query\") using chunks of data (\"pieces of puzzle\") extracted from previous visual examples (\"the database\"). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, for detecting suspicious behaviors and for automatic visual inspection for quality assurance.",
"We present an unsupervised technique for detecting unusual activity in a large video set using many simple features. No complex activity models and no supervised feature selections are used. We divide the video into equal length segments and classify the extracted features into prototypes, from which a prototype-segment co-occurrence matrix is computed. Motivated by a similar problem in document-keyword analysis, we seek a correspondence relationship between prototypes and video segments which satisfies the transitive closure constraint. We show that an important sub-family of correspondence functions can be reduced to co-embedding prototypes and segments to N-D Euclidean space. We prove that an efficient, globally optimal algorithm exists for the co-embedding problem. Experiments on various real-life videos have validated our approach.",
"Real-time unusual event detection in video stream has been a difficult challenge due to the lack of sufficient training information, volatility of the definitions for both normality and abnormality, time constraints, and statistical limitation of the fitness of any parametric models. We propose a fully unsupervised dynamic sparse coding approach for detecting unusual events in videos based on online sparse re-constructibility of query signals from an atomically learned event dictionary, which forms a sparse coding bases. Based on an intuition that usual events in a video are more likely to be reconstructible from an event dictionary, whereas unusual events are not, our algorithm employs a principled convex optimization formulation that allows both a sparse reconstruction code, and an online dictionary to be jointly inferred and updated. Our algorithm is completely un-supervised, making no prior assumptions of what unusual events may look like and the settings of the cameras. The fact that the bases dictionary is updated in an online fashion as the algorithm observes more data, avoids any issues with concept drift. Experimental results on hours of real world surveillance video and several Youtube videos show that the proposed algorithm could reliably locate the unusual events in the video sequence, outperforming the current state-of-the-art methods."
]
} |
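The sparse-coding reconstructibility assumption summarized in the related-work passage above (@cite_2) can be illustrated with a toy sketch: a dictionary is learned from "usual" data only, and a query is flagged by its reconstruction residual. The data, dictionary size, and parameters below are purely illustrative, not the cited method's actual setup.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# "Usual" events: features lying in a 3-dimensional subspace of R^10.
basis = rng.normal(size=(3, 10))
usual = rng.normal(size=(500, 3)) @ basis

# Learn a sparse-coding dictionary from usual events only.
dico = MiniBatchDictionaryLearning(n_components=8, alpha=0.1,
                                   transform_n_nonzero_coefs=3,
                                   random_state=0)
dico.fit(usual)

def reconstruction_error(x):
    """Sparse-code x against the learned dictionary; return residual norm."""
    code = dico.transform(x.reshape(1, -1))
    recon = code @ dico.components_
    return float(np.linalg.norm(x - recon))

usual_query = rng.normal(size=3) @ basis   # follows the usual pattern
unusual_query = rng.normal(size=10) * 5.0  # does not: large off-subspace part

# Unusual events are poorly reconstructed by the learned bases.
assert reconstruction_error(usual_query) < reconstruction_error(unusual_query)
```

Thresholding the residual then yields the usual/unusual decision; the cited work additionally updates the dictionary online.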
1602.04422 | 2274158024 | In this work, we study the challenging problem of identifying the irregular status of objects from images in an "open world" setting, that is, distinguishing the irregular status of an object category from its regular status as well as objects from other categories in the absence of "irregular object" training data. To address this problem, we propose a novel approach by inspecting the distribution of the detection scores at multiple image regions based on the detector trained from the "regular object" and "other objects". The key observation motivating our approach is that for "regular object" images as well as "other objects" images, the region-level scores follow their own essential patterns in terms of both the score values and the spatial distributions while the detection scores obtained from an "irregular object" image tend to break these patterns. To model this distribution, we propose to use Gaussian Processes (GP) to construct two separate generative models for the case of the "regular object" and the "other objects". More specifically, we design a new covariance function to simultaneously model the detection score at a single region and the score dependencies at multiple regions. We finally demonstrate the superior performance of our method on a large dataset newly proposed in this paper. | Another stream of work focuses on addressing specific types of irregularities. The works @cite_19 @cite_3 exploit contextual information for object recognition or out-of-context detection, such as a "car floating in the sky". In @cite_19, a tree model is used to learn dependencies among object categories, and @cite_10 extends this by integrating different sources of contextual information into a graph model. The work @cite_9 focuses on finding abnormal objects in given scenes, considering a wider range of irregular objects, such as those that violate co-occurrence with surrounding objects or the expected scale.
However, the applicability of these methods is limited, since they rely on a pre-learned object detector to accurately localize the object of interest. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_10",
"@cite_3"
],
"mid": [
"1982522767",
"145548212",
"2145315825",
""
],
"abstract": [
"There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. Context models can efficiently rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit from using context models has been limited because most of these methods were tested on datasets with only a few object categories, in which most images contain only one or two object categories. In this paper, we introduce a new dataset with images that contain many instances of different object categories and propose an efficient model that captures the contextual information among more than a hundred of object categories. We show that our context model can be applied to scene understanding tasks that local detectors alone cannot solve.",
"Contextual modeling is a critical issue in scene understanding. Object detection accuracy can be improved by exploiting tendencies that are common among object configurations. However, conventional contextual models only exploit the tendencies of normal objects; abnormal objects that do not follow the same tendencies are hard to detect through contextual model. This paper proposes a novel generative model that detects abnormal objects by meeting four proposed criteria of success. This model generates normal as well as abnormal objects, each following their respective tendencies. Moreover, this generation is controlled by a latent scene variable. All latent variables of the proposed model are predicted through optimization via population-based Markov Chain Monte Carlo, which has a relatively short convergence time. We present a new abnormal dataset classified into three categories to thoroughly measure the accuracy of the proposed model for each category; the results demonstrate the superiority of our proposed approach over existing methods.",
"Highlights? Review of different sources of contextual information for object detection. ? New context model based on capturing support relationships and co-occurrences among objects. ? Evaluation of several context models on the SUN database. ? Introduction of a benchmark for detecting out-of-context objects. The context of an image encapsulates rich information about how natural scenes and objects are related to each other. Such contextual information has the potential to enable a coherent understanding of natural scenes and images. However, context models have been evaluated mostly based on the improvement of object recognition performance even though it is only one of many ways to exploit contextual information. In this paper, we present a new scene understanding problem for evaluating and applying context models. We are interested in finding scenes and objects that are \"out-of-context\". Detecting \"out-of-context\" objects and scenes is challenging because context violations can be detected only if the relationships between objects are carefully and precisely modeled. To address this problem, we evaluate different sources of context information, and present a graphical model that combines these sources. We show that physical support relationships between objects can provide useful contextual information for both object recognition and out-of-context detection.",
""
]
} |
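The context cue these works exploit (an object is out-of-context when it co-occurs with an unexpected scene or neighbors) can be illustrated with a toy co-occurrence sketch. The object lists and counts are hypothetical, and the cited models use learned tree/graph structures rather than raw pair counts.

```python
from collections import Counter
from itertools import combinations

# Object pairs observed in "normal" training scenes (hypothetical toy data).
training_scenes = [
    {"car", "road", "tree"},
    {"car", "road", "building"},
    {"bird", "sky", "tree"},
    {"plane", "sky"},
]
pair_counts = Counter()
for scene in training_scenes:
    for pair in combinations(sorted(scene), 2):
        pair_counts[pair] += 1

def context_score(scene):
    """Mean pairwise co-occurrence count; low scores suggest context violations."""
    pairs = list(combinations(sorted(scene), 2))
    return sum(pair_counts[p] for p in pairs) / len(pairs)

normal = context_score({"car", "road"})
out_of_context = context_score({"car", "sky"})  # "car floating in the sky"
assert out_of_context < normal
```

A real system would normalize the counts into probabilities and combine them with detector confidences, support relations, and scale priors, as the cited abstracts describe.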
1602.04422 | 2274158024 | In this work, we study the challenging problem of identifying the irregular status of objects from images in an "open world" setting, that is, distinguishing the irregular status of an object category from its regular status as well as objects from other categories in the absence of "irregular object" training data. To address this problem, we propose a novel approach by inspecting the distribution of the detection scores at multiple image regions based on the detector trained from the "regular object" and "other objects". The key observation motivating our approach is that for "regular object" images as well as "other objects" images, the region-level scores follow their own essential patterns in terms of both the score values and the spatial distributions while the detection scores obtained from an "irregular object" image tend to break these patterns. To model this distribution, we propose to use Gaussian Processes (GP) to construct two separate generative models for the case of the "regular object" and the "other objects". More specifically, we design a new covariance function to simultaneously model the detection score at a single region and the score dependencies at multiple regions. We finally demonstrate the superior performance of our method on a large dataset newly proposed in this paper. | Owing to their advantages in nonparametric data fitting, GPs have been widely used in fields such as classification @cite_1 , tracking @cite_5 , motion analysis @cite_18 and object detection @cite_12 @cite_13 . The work @cite_18 uses GP regression to build a spatio-temporal flow model of motion trajectories for trajectory matching. In @cite_12 @cite_13 , object localization is performed by using GP regression to predict the overlap between image windows and the ground-truth objects from window-level representations. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2142125366",
"2008400752",
"2097412577",
"2964081066",
"2056552869"
],
"abstract": [
"Recognition of motions and activities of objects in videos requires effective representations for analysis and matching of motion trajectories. In this paper, we introduce a new representation specifically aimed at matching motion trajectories. We model a trajectory as a continuous dense flow field from a sparse set of vector sequences using Gaussian Process Regression. Furthermore, we introduce a random sampling strategy for learning stable classes of motions from limited data. Our representation allows for incrementally predicting possible paths and detecting anomalous events from online trajectories. This representation also supports matching of complex motions with acceleration changes and pauses or stops within a trajectory. We use the proposed approach for classifying and predicting motion trajectories in traffic monitoring domains and test on several data sets. We show that our approach works well on various types of complete and incomplete trajectories from a variety of video data sets with different frame rates.",
"Many real-world classification tasks involve the prediction of multiple, inter-dependent class labels. A prototypical case of this sort deals with prediction of a sequence of labels for a sequence of observations. Such problems arise naturally in the context of annotating and segmenting observation sequences. This paper generalizes Gaussian Process classification to predict multiple labels by taking dependencies between neighboring labels into account. Our approach is motivated by the desire to retain rigorous probabilistic semantics, while overcoming limitations of parametric methods like Conditional Random Fields, which exhibit conceptual and computational difficulties in high-dimensional input spaces. Experiments on named entity recognition and pitch accent prediction tasks demonstrate the competitiveness of our approach.",
"We advocate the use of Gaussian Process Dynamical Models (GPDMs) for learning human pose and motion priors for 3D people tracking. A GPDM provides a lowdimensional embedding of human motion data, with a density function that gives higher probability to poses and motions close to the training data. With Bayesian model averaging a GPDM can be learned from relatively small amounts of data, and it generalizes gracefully to motions outside the training set. Here we modify the GPDM to permit learning from motions with significant stylistic variation. The resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions.",
"",
"We propose a method for knowledge transfer between semantically related classes in ImageNet. By transferring knowledge from the images that have bounding-box annotations to the others, our method is capable of automatically populating ImageNet with many more bounding-boxes. The underlying assumption that objects from semantically related classes look alike is formalized in our novel Associative Embedding (AE) representation. AE recovers the latent low-dimensional space of appearance variations among image windows. The dimensions of AE space tend to correspond to aspects of window appearance (e.g. side view, close up, background). We model the overlap of a window with an object using Gaussian Processes (GP) regression, which spreads annotation smoothly through AE space. The probabilistic nature of GP allows our method to perform self-assessment, i.e. assigning a quality estimate to its own output. It enables trading off the amount of returned annotations for their quality. A large scale experiment on 219 classes and 0.5 million images demonstrates that our method outperforms state-of-the-art methods and baselines for object localization. Using self-assessment we can automatically return bounding-box annotations for 51 of all images with high localization accuracy (i.e. 71 average overlap with ground-truth)."
]
} |
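The GP-regression use case from @cite_12 (predicting window–object overlap from window-level features, with the posterior variance serving as self-assessment) can be sketched as below. The features and the linear target are stand-ins; the cited work regresses overlap from learned appearance embeddings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
# Toy stand-in for window-level features and their ground-truth overlap
# with the object (here overlap depends only on the first feature).
X_train = rng.uniform(size=(50, 4))
y_train = X_train[:, 0] * 0.8 + 0.1

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
gp.fit(X_train, y_train)

X_query = rng.uniform(size=(5, 4))
mean, std = gp.predict(X_query, return_std=True)
# The posterior std is the self-assessment signal: low std -> trust the
# predicted overlap; high std -> abstain or request annotation.
assert mean.shape == (5,) and std.shape == (5,)
assert np.all(std >= 0.0)
```

This probabilistic output is what allows such methods to trade the amount of returned annotations against their quality.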
1602.04435 | 2277369929 | Concept drift has potential in smart grid analysis because the socio-economic behaviour of consumers is not governed by the laws of physics. Likewise, there are also applications in wind power forecasting. In this paper we present a decision tree ensemble classification method for concept drift based on the Random Forest algorithm. The weighted majority voting ensemble aggregation rule is employed, following the ideas of the Accuracy Weighted Ensemble (AWE) method. The base learner weight in our case is computed for each sample evaluation using the base learner's accuracy and the intrinsic proximity measure of Random Forest. Our algorithm exploits both temporal weighting of samples and ensemble pruning as a forgetting strategy. We present results of an empirical comparison of our method with the original random forest with incorporated "replace-the-loser" forgetting and other state-of-the-art concept-drift classifiers such as AWE2. | The use of drifting concepts for the analysis of huge datasets is not unfamiliar to the machine learning and systems identification communities @cite_15 . In this work we restrict ourselves to considering decision tree ensemble classification methods only. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2009727399"
],
"abstract": [
"Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges, the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Beyesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models."
]
} |
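The accuracy-weighted ensemble idea referenced above (@cite_15) can be sketched as a toy stream: one tree is trained per chunk, and each tree's vote is weighted by its accuracy on the most recent chunk, so learners trained before a concept drift are automatically down-weighted. The data generator, chunk sizes, and tree depth are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

def make_chunk(flip=False, n=300):
    """Toy stream chunk; flip=True simulates concept drift (labels invert)."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y) if flip else y

# Train one base learner per historical chunk (the last chunk is post-drift).
chunks = [make_chunk(flip=False), make_chunk(flip=False), make_chunk(flip=True)]
trees = [DecisionTreeClassifier(max_depth=3).fit(X, y) for X, y in chunks]

# AWE-style weighting: each learner's weight is its accuracy on recent data.
X_recent, y_recent = make_chunk(flip=True)
weights = np.array([t.score(X_recent, y_recent) for t in trees])

def predict(X):
    votes = np.stack([t.predict(X) for t in trees])  # (n_trees, n_samples)
    weighted = weights @ votes                       # weighted vote for class 1
    return (weighted > weights.sum() / 2).astype(int)

X_test, y_test = make_chunk(flip=True)
acc = (predict(X_test) == y_test).mean()
assert acc > 0.5  # the weighting favours the post-drift learner
```

The paper's method additionally weights by Random Forest proximity per evaluated sample and prunes low-weight trees as a forgetting mechanism; this sketch shows only the chunk-accuracy weighting.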
1602.03960 | 2270766381 | We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams. The first is a collection of curated facts in the form of tables, and the second is a large set of crowd-sourced multiple-choice questions covering the facts in the tables. Through the setup of the crowd-sourced annotation task we obtain implicit alignment information between questions and tables. We envisage that the resources will be useful not only to researchers working on question answering, but also to people investigating a diverse range of other applications such as information extraction, question parsing, answer type identification, and lexical semantic modelling. | Related recent work creates a dataset of QA pairs over tables. However, that annotation setup does not impose structural constraints from tables, and produces simple QA pairs rather than MCQs. @cite_0 and @cite_2 use tables in the context of question answering, but deal with synthetically generated query data for those tables. More generally, tables have been related to QA in the context of queries over relational databases @cite_8 @cite_3 . Regarding crowd-sourcing for question creation, prior work harvests MCQs via a gamified app, but does not involve tables. Monolingual alignment datasets have also been explored separately, for example in the context of Textual Entailment. | {
"cite_N": [
"@cite_0",
"@cite_8",
"@cite_3",
"@cite_2"
],
"mid": [
"2187906936",
"2108223890",
"2066806792",
"2214429195"
],
"abstract": [
"We proposed Neural Enquirer as a neural network architecture to execute a natural language (NL) query on a knowledge-base (KB) for answers. Basically, Neural Enquirer finds the distributed representation of a query and then executes it on knowledge-base tables to obtain the answer as one of the values in the tables. Unlike similar efforts in end-to-end training of semantic parsers, Neural Enquirer is fully \"neuralized\": it not only gives distributional representation of the query and the knowledge-base, but also realizes the execution of compositional queries as a series of differentiable operations, with intermediate results (consisting of annotations of the tables at different levels) saved on multiple layers of memory. Neural Enquirer can be trained with gradient descent, with which not only the parameters of the controlling components and semantic parsing component, but also the embeddings of the tables and query words can be learned from scratch. The training can be done in an end-to-end fashion, but it can take stronger guidance, e.g., the step-by-step supervision for complicated queries, and benefit from it. Neural Enquirer is one step towards building neural network systems which seek to understand language by executing it on real-world. Our experiments show that Neural Enquirer can learn to execute fairly complicated NL queries on tables with rich structures.",
"The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google's general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own \"schema\" of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WEBTABLES system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on co-occurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links.",
"We present the design of a structured search engine which returns a multi-column table in response to a query consisting of keywords describing each of its columns. We answer such queries by exploiting the millions of tables on the Web because these are much richer sources of structured knowledge than free-format text. However, a corpus of tables harvested from arbitrary HTML web pages presents huge challenges of diversity and redundancy not seen in centrally edited knowledge bases. We concentrate on one concrete task in this paper. Given a set of Web tables T1,..., Tn, and a query Q with q sets of keywords Q1,..., Qq, decide for each Ti if it is relevant to Q and if so, identify the mapping between the columns of Ti and query columns. We represent this task as a graphical model that jointly maps all tables by incorporating diverse sources of clues spanning matches in different parts of the table, corpus-wide co-occurrence statistics, and content overlap across table columns. We define a novel query segmentation model for matching keywords to table columns, and a robust mechanism of exploiting content overlap across table columns. We design efficient inference algorithms based on bipartite matching and constrained graph cuts to solve the joint labeling task. Experiments on a workload of 59 queries over a 25 million web table corpus shows significant boost in accuracy over baseline IR methods.",
"Deep neural networks have achieved impressive supervised classification performance in many tasks including image recognition, speech recognition, and sequence to sequence learning. However, this success has not been translated to applications like question answering that may involve complex arithmetic and logic reasoning. A major limitation of these models is in their inability to learn even simple arithmetic and logic operations. For example, it has been shown that neural networks fail to learn to add two binary numbers reliably. In this work, we propose Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations. Neural Programmer can call these augmented operations over several steps, thereby inducing compositional programs that are more complex than the built-in operations. The model learns from a weak supervision signal which is the result of execution of the correct program, hence it does not require expensive annotation of the correct program itself. The decisions of what operations to call, and what data segments to apply to are inferred by Neural Programmer. Such decisions, during training, are done in a differentiable fashion so that the entire network can be trained jointly by gradient descent. We find that training the model is difficult, but it can be greatly improved by adding random noise to the gradient. On a fairly complex synthetic table-comprehension dataset, traditional recurrent networks and attentional models perform poorly while Neural Programmer typically obtains nearly perfect accuracy."
]
} |
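The query-column-to-table-column matching task described in the @cite_3 abstract above can be illustrated with a minimal token-overlap sketch; the table schemas and the greedy matching rule are hypothetical simplifications of the cited system's graphical model.

```python
# Toy sketch: map a multi-column keyword query onto web-table schemas
# by token overlap (the cited systems use far richer corpus statistics).
tables = {
    "t1": ["country", "capital city", "population"],
    "t2": ["player", "team", "goals"],
}

def match(query_columns, table_columns):
    """Greedily map each query column to the best-overlapping table column."""
    mapping = {}
    for q in query_columns:
        q_tokens = set(q.lower().split())
        best = max(table_columns,
                   key=lambda c: len(q_tokens & set(c.lower().split())))
        if q_tokens & set(best.lower().split()):
            mapping[q] = best
    return mapping

query = ["capital", "population"]
assert match(query, tables["t1"]) == {"capital": "capital city",
                                      "population": "population"}
assert match(query, tables["t2"]) == {}   # irrelevant table: no mapping
```

A table is then deemed relevant only when enough query columns find a mapping, which is the decision the cited work makes jointly over all tables.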
1602.04433 | 2950361018 | The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can simultaneously learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We embed features of multiple layers into reproducing kernel Hilbert spaces (RKHSs) and match feature distributions for feature adaptation. The adaptation behaviors can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently using standard back-propagation. Empirical evidence exhibits that the approach outperforms state-of-the-art methods on standard domain adaptation datasets. | Domain adaptation @cite_14 builds models that can bridge different domains or tasks, mitigating the burden of manual labeling for machine learning @cite_0 @cite_7 @cite_23 @cite_5 , computer vision @cite_8 @cite_9 @cite_21 and natural language processing @cite_13 . The main technical problem in domain adaptation is formally reducing the discrepancy between the probability distributions of different domains. Deep neural networks can learn abstract representations that disentangle the different explanatory factors of variation behind data samples @cite_22 and manifest invariant factors underlying different populations that transfer well from the original tasks to similar novel tasks @cite_27 .
Thus deep neural networks have been explored for domain adaptation @cite_19 @cite_2 @cite_21 and for multimodal and multi-task learning @cite_13 @cite_12 , where significant performance gains have been observed relative to prior shallow transfer learning methods. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2165698076",
"2163922914",
"2120149881",
"1722318740",
"2128053425",
"2951084305",
"2115403315",
"22861983",
"2949667497",
"2153929442",
"2161381512",
"2147520416",
"2158899491",
"2184188583"
],
"abstract": [
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"Cross-domain learning methods have shown promising results by leveraging labeled patterns from the auxiliary domain to learn a robust classifier for the target domain which has only a limited number of labeled samples. To cope with the considerable change between feature distributions of different domains, we propose a new cross-domain kernel learning framework into which many existing kernel methods can be readily incorporated. Our framework, referred to as Domain Transfer Multiple Kernel Learning (DTMKL), simultaneously learns a kernel function and a robust classifier by minimizing both the structural risk functional and the distribution mismatch between the labeled and unlabeled samples from the auxiliary and target domains. Under the DTMKL framework, we also propose two novel methods by using SVM and prelearned classifiers, respectively. Comprehensive experiments on three domain adaptation data sets (i.e., TRECVID, 20 Newsgroups, and email spam data sets) demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods.",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.",
"A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at",
"Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, whereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.",
"Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal P(Y) changes, while the conditional P(X|Y) stays the same (target shift), (2) the marginal P(Y) is fixed, while the conditional P(X|Y) changes with certain constraints (conditional shift), and (3) the marginal P(Y) changes, and the conditional P(X|Y) changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large-scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source training domain) but only very limited training data for a second task (the target test domain) that is similar but not identical to the first. Previous work on transfer learning has focused on relatively restricted settings, where specific parts of the model are considered to be carried over between tasks. Recent work on covariate shift focuses on matching the marginal distributions on observations X across domains. Similarly, work on target conditional shift focuses on matching marginal distributions on labels Y and adjusting conditional distributions P(X|Y ), such that P(X) can be matched across domains. However, covariate shift assumes that the support of test P(X) is contained in the support of training P(X), i.e., the training set is richer than the test set. Target conditional shift makes a similar assumption for P(Y). Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Also little work has been done when all marginal and conditional distributions are allowed to change while the changes are smooth. In this paper, we consider a general case where both the support and the model change across domains. We transform both X and Y by a location-scale shift to achieve transfer between tasks. Since we allow more flexible transformations, the proposed method yields better results on both synthetic data and real-world data.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning."
]
} |
1602.04210 | 2094163159 | The 802.11E Task Group has been established to enhance quality of service (QoS) provision for time-bounded services in the current IEEE 802.11 medium access control protocol. The QoS is introduced throughout hybrid coordination function controlled channel access (HCCA) for the rigorous QoS provision. In HCCA, the station is allocated a fixed transmission opportunity (TXOP) based on its TSPEC parameters so that it is efficient for constant bit rate streams. However, as the profile of variable bit rate traffics is inconstant, they are liable to experience a higher delay especially in bursty traffic case. In this paper, we present a dynamic TXOP assignment algorithm called adaptive multi-polling TXOP scheduling algorithm (AMTXOP) for supporting the video traffics transmission over IEEE 802.11e wireless networks. This scheme exploits piggybacked information about the size of the subsequent video frames of the uplink streams to assist the hybrid coordinator accurately assign the TXOP according to actual change in the traffic profile. The proposed scheduler is powered by integrating multi-polling scheme to further reduce the delay and polling overhead. Extensive simulation experiments have been carried out to show the efficiency of the AMTXOP over the existing schemes in terms of the packet delay and the channel utilization. | Variable Bit Rate (VBR) video can be classified in terms of traffic variability into two types: variable in packet size, such as MPEG--4, and variable in generation interval, such as H.263. As this research is aimed at enhancing the TXOP allocation, we have chosen MPEG--4 video. MPEG--4 is an efficient video encoding standard covering a wide range of bit rates, from low bit rates for wireless transmission up to quality beyond high-definition television (HDTV) @cite_4 . For this reason, MPEG--4 has become one of the most prominent video coding formats on the internet today. 
This variability in compression level makes it well suited to transmitting video packets over limited wireless network resources such as low bit rates. Table displays an excerpt of the video trace file of the Jurassic Park 1 movie @cite_11 encoded using MPEG--4 at high quality. We refer the reader to @cite_37 @cite_38 @cite_39 for more details about MPEG--4 videos. Basically, the weakness of HCCA in supporting VBR traffic stems from the lack of information about abrupt changes in the traffic profile over time, in particular the traffic burstiness issue. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_4",
"@cite_39",
"@cite_11"
],
"mid": [
"1496953079",
"2024459049",
"2124440426",
"1516497378",
"2098293598"
],
"abstract": [
"",
"The MPEG-4 standard explores every possibility of the digital environment. Recorded images and sounds co-exist with their computer-generated counterparts, a new language for sound promises compact-disk quality at extremely low data rates; and the multimedia content could even adjust itself to suit the transmission rate and quality. Possibly the greatest of the advances made by MPEG-4 is that viewers and listeners need no longer be passive. The height of \"interactivity\" in audiovisual systems today is the user's ability merely to stop or start a video in progress. MPEG-4 is completely different: it allows the user to interact with objects within the scene, whether they derive from so-called real sources, such as moving video, or from synthetic sources, such as computer-aided design output or computer-generated cartoons. Authors of content can give users the power to modify scenes by deleting, adding, or repositioning objects, or to alter the behavior of the objects. Perhaps the most immediate need for MPEG-4 is defensive. It supplies tools with which to create uniform (and top-quality) audio and video encoders and decoders on the Internet, preempting what may become an unmanageable tangle of proprietary formats. In addition to the Internet, the standard is also designed for low bit-rate communications devices, which are usually wireless.",
"The efficient digital representation of image and video signals has been the subject of considerable research over the past 20 years. Digital video-coding technology has developed into a mature field and products have been developed that are targeted for a wide range of emerging applications, such as video on demand, digital TV HDTV broadcasting, and multimedia image video database services. With the increased commercial interest in video communications, the need for international image- and video-compression standards arose. To meet this need, the Moving Picture Experts Group (MPEG) was formed to develop coding standards. MPEG-1 and MPEG-2 video-coding standards have attracted much attention worldwide, with an increasing number of very large scale integration (VLSI) and software implementations of these standards becoming commercially available. MPEG-4, the most recent MPEG standard that is still under development, is targeted for future content-based multimedia applications. We provide an overview of the MPEG video-coding algorithms and standards and their role in video communications. We review the basic concepts and techniques that are relevant in the context of the MPEG video-compression standards and outline MPEG-1 and MPEG-2 video-coding algorithms. The specific properties of the standards related to their applications are presented, and the basic elements of the forthcoming MPEG-4 standard are also described. We also discuss the performance of the standards and their success in the market place.",
"This paper analyses the relevance and performance of the emerging MPEG-4 audiovisual coding standard for emerging mobile multimedia applications. Some results are presented for one of the MPEG-4 profiles targeting mobile scenarios.",
"MPEG-4 and H.263 encoded video is expected to account for a large portion of the traffic in future wireline and wireless networks. However, due to a lack of sufficiently long frame size traces of MPEG-4 and H.263 encoded videos, most network performance evaluations currently use MPEG-1 encodings. We present and study a publicly available library of frame size traces of long MPEG-4 and H.263 encoded videos, which we have generated at the Technical University Berlin. The frame size traces have been generated from MPEG-4 and H.263 encodings of over 10 video sequences each 60 minutes long. We conduct a thorough statistical analysis of the traces."
]
} |
1602.04210 | 2094163159 | The 802.11E Task Group has been established to enhance quality of service (QoS) provision for time-bounded services in the current IEEE 802.11 medium access control protocol. The QoS is introduced throughout hybrid coordination function controlled channel access (HCCA) for the rigorous QoS provision. In HCCA, the station is allocated a fixed transmission opportunity (TXOP) based on its TSPEC parameters so that it is efficient for constant bit rate streams. However, as the profile of variable bit rate traffics is inconstant, they are liable to experience a higher delay especially in bursty traffic case. In this paper, we present a dynamic TXOP assignment algorithm called adaptive multi-polling TXOP scheduling algorithm (AMTXOP) for supporting the video traffics transmission over IEEE 802.11e wireless networks. This scheme exploits piggybacked information about the size of the subsequent video frames of the uplink streams to assist the hybrid coordinator accurately assign the TXOP according to actual change in the traffic profile. The proposed scheduler is powered by integrating multi-polling scheme to further reduce the delay and polling overhead. Extensive simulation experiments have been carried out to show the efficiency of the AMTXOP over the existing schemes in terms of the packet delay and the channel utilization. | The Adaptive Transmission Opportunity (ATXOP) scheduler @cite_28 is a feedback-based technique which reschedules the TS of each QSTA in an SI based on piggybacked information, transmitted to the HC, about the next frame length. This algorithm grants the actual TXOP needed by the stations and ensures that the end-to-end delay is minimized without jeopardizing the channel bandwidth. An example of the ATXOP algorithm process compared to the HCCA scheduler is depicted in Figure . 
More particularly, the QSTAs are scheduled as in Equation except that the Mean Size of MSDU ( @math ) is the actual frame size reported in the previously received packet instead of using the mean value negotiated in the TS setup phase. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2055335325"
],
"abstract": [
"Quality of Service (QoS) is provided in IEEE 802.11e protocol by means of HCF Controlled Channel Access (HCCA) scheduler which is efficient for supporting Constant Bit Rate (CBR) applications. Numerous researches have been carried out to enhance the HCCA scheduler attempting to accommodate the needs of Variable Bit Rate (VBR) video traffics which probably demonstrates a non-deterministic profile during the time. This paper presents an adaptive TXOP assignment mechanism for supporting the transmission of the prerecorded video traffics over IEEE 802.11e wireless networks. The proposed mechanism uses a feedback about the size of the subsequent video frames of the uplink traffic to assist the Hybrid Coordinator (HC) accurately assign TXOP according to the fast changes in the VBR profile. The simulation results show that our mechanism reduces the delay experienced by VBR traffic streams compared to the HCCA scheduler due to the accurate assignment of the TXOP, which preserves the channel time for data transmission."
]
} |
1602.03681 | 2267051903 | The public package registry npm is one of the biggest software registries. With its 216 911 software packages, it forms a big network of software dependencies. In this paper we evaluate various methods for finding similar packages in the npm network, using only the structure of the graph. Namely, we want to find a way of categorizing similar packages, which would be useful for recommendation systems. This size enables us to compute meaningful results, as it softened the particularities of the graph. Npm is also quite famous as it is the default package repository of Node.js. We believe that it will make our results interesting for more people than a less used package repository. This makes it a good subject of analysis of software networks. | The modern science of networks is particularly interested in decomposing the nodes of large networks into independent groups called "communities" @cite_19 . A community can be thought of as a group of nodes that is densely connected internally but sparsely connected externally @cite_1 @cite_5 @cite_19 @cite_2 . The nodes within a community are similar to each other and dissimilar to the rest of the nodes in the network. Researchers have shown that community structures actually exist in real-world networks @cite_14 @cite_19 @cite_13 . However, the definition of community is not universally accepted, and for that matter multiple definitions exist. Community structure emphasizes cohesive groups of nodes and the absence of dependencies between the groups, but it does not say anything about the roles in a network. The concept of roles in networks is much wider than the concept of community. A prerequisite for the analysis of roles in a network is a community structure. After that, one examines how the discovered communities are inter-dependent, which translates to different roles in a network. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"",
"1976412347",
"1971421925",
"2148606196",
"2061901927",
"2017987256"
],
"abstract": [
"",
"We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.",
"Part I. Introduction: Networks, Relations, and Structure: 1. Relations and networks in the social and behavioral sciences 2. Social network data: collection and application Part II. Mathematical Representations of Social Networks: 3. Notation 4. Graphs and matrixes Part III. Structural and Locational Properties: 5. Centrality, prestige, and related actor and group measures 6. Structural balance, clusterability, and transitivity 7. Cohesive subgroups 8. Affiliations, co-memberships, and overlapping subgroups Part IV. Roles and Positions: 9. Structural equivalence 10. Blockmodels 11. Relational algebras 12. Network positions and roles Part V. Dyadic and Triadic Methods: 13. Dyads 14. Triads Part VI. Statistical Dyadic Interaction Models: 15. Statistical analysis of single relational networks 16. Stochastic blockmodels and goodness-of-fit indices Part VII. Epilogue: 17. Future directions.",
"High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks. Specifically, we demonstrate that we can find functional modules in complex networks, and classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a ‘cartographic representation’ of complex networks. Metabolic networks are among the most challenging biological networks and, arguably, the ones with most potential for immediate applicability. We use our method to analyse the metabolic networks of twelve organisms from three different superkingdoms. We find that, typically, 80% of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that metabolites that participate in only a few reactions but that connect different modules are more conserved than hubs whose links are mostly within a single module."
]
} |
1602.03636 | 2264225523 | Foursquare is an online social network and can be represented with a bipartite network of users and venues. A user-venue pair is connected if a user has checked-in at that venue. In the case of Foursquare, network analysis techniques can be used to enhance the user experience. One such technique is link prediction, which can be used to build a personalized recommendation system of venues. Recommendation systems in bipartite networks are very often designed using the global ranking method and collaborative filtering. A less known method, network based inference, is also a feasible choice for link prediction in bipartite networks and sometimes performs better than the previous two. In this paper we test these techniques on the Foursquare network. The best technique proves to be the network based inference. We also show that taking into account the available metadata can be beneficial. | @cite_1 discuss how to compute the similarity between nodes in a projected network. Since our network is not only bipartite, but also contains multilinks, we can extract more information by applying weights to the edges. We can compute similarities between users which incorporate the weights and use that information to enhance our prediction. In (), the authors discuss some approaches and properties that can help in constructing such similarity measures. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1893639437"
],
"abstract": [
"One-mode projecting is extensively used to compress bipartite networks. Since one-mode projection is always less informative than the bipartite representation, a proper weighting method is required to better retain the original information. In this article, inspired by the network-based resource-allocation dynamics, we raise a weighting method which can be directly applied in extracting the hidden information of networks, with remarkably better performance than the widely used global ranking method as well as collaborative filtering. This work not only provides a creditable method for compressing bipartite networks, but also highlights a possible way for the better solution of a long-standing challenge in modern information science: How to do a personal recommendation."
]
} |
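The resource-allocation ("network-based inference") projection described in the abstract above can be made concrete with a short sketch. This is an illustrative Python toy (function and variable names are ours, not from the cited work): items collected by the target user each hold one unit of resource, which spreads to users and back to items, and the resulting item scores drive the recommendation.

```python
from collections import defaultdict

def nbi_scores(edges, target_user):
    """Network-based inference (resource-allocation) scores for one user.

    edges: list of (user, item) pairs of a bipartite network.
    Returns {item: score}; items the user already collected can be
    filtered out before recommending.
    """
    user_items = defaultdict(set)
    item_users = defaultdict(set)
    for u, v in edges:
        user_items[u].add(v)
        item_users[v].add(u)

    # Step 1: each item the target user collected holds one unit of resource.
    f = {v: 1.0 for v in user_items[target_user]}

    # Step 2: items spread their resource equally among their users.
    on_users = defaultdict(float)
    for v, r in f.items():
        for u in item_users[v]:
            on_users[u] += r / len(item_users[v])

    # Step 3: users spread the received resource equally back to their items.
    scores = defaultdict(float)
    for u, r in on_users.items():
        for v in user_items[u]:
            scores[v] += r / len(user_items[u])
    return dict(scores)

# Toy check-in data: three users, three venues.
edges = [("u1", "a"), ("u1", "b"), ("u2", "a"), ("u2", "c"), ("u3", "b"), ("u3", "c")]
scores = nbi_scores(edges, "u1")
recs = sorted((v for v in scores if v not in {"a", "b"}), key=lambda v: -scores[v])
```

The single spread-and-gather round is the simplest variant; weighted multilinks, as discussed in the related-work paragraph, would replace the equal splits with edge-weight-proportional ones.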
1602.03636 | 2264225523 | Foursquare is an online social network and can be represented with a bipartite network of users and venues. A user-venue pair is connected if a user has checked in at that venue. In the case of Foursquare, network analysis techniques can be used to enhance the user experience. One such technique is link prediction, which can be used to build a personalized recommendation system of venues. Recommendation systems in bipartite networks are very often designed using the global ranking method and collaborative filtering. A less known method - network based inference - is also a feasible choice for link prediction in bipartite networks and sometimes performs better than the previous two. In this paper we test these techniques on the Foursquare network. The best technique proves to be the network based inference. We also show that taking into account the available metadata can be beneficial. | In ( @cite_0 ), the authors discuss the problems of link prediction in location-based social networks. They introduce some important parameters (e.g. latitude and longitude of a check-in) that are taken into account in their link prediction method. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2001344462"
],
"abstract": [
"Link prediction systems have been largely adopted to recommend new friends in online social networks using data about social interactions. With the soaring adoption of location-based social services it becomes possible to take advantage of an additional source of information: the places people visit. In this paper we study the problem of designing a link prediction system for online location-based social networks. We have gathered extensive data about one of these services, Gowalla, with periodic snapshots to capture its temporal evolution. We study the link prediction space, finding that about 30% of new links are added among \"place-friends\", i.e., among users who visit the same places. We show how this prediction space can be made 15 times smaller, while still 66% of future connections can be discovered. Thus, we define new prediction features based on the properties of the places visited by users which are able to discriminate potential future links among them. Building on these findings, we describe a supervised learning framework which exploits these prediction features to predict new links among friends-of-friends and place-friends. Our evaluation shows how the inclusion of information about places and related user activity offers high link prediction performance. These results open new directions for real-world link recommendation systems on location-based social networks."
]
} |
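The "place-friends" prediction space discussed in the abstract above can be illustrated with a toy scoring function: among user pairs that are not yet friends, count the venues both have checked in at. This is a hypothetical sketch of one such feature, not the paper's full supervised framework (all names are ours):

```python
from collections import defaultdict
from itertools import combinations

def place_friend_scores(checkins, friendships):
    """Score non-friend user pairs by their number of common venues.

    checkins: list of (user, venue) pairs.
    friendships: set of frozenset({u, w}) existing links.
    Returns {(u, w): common_venue_count} for candidate new links.
    """
    venues = defaultdict(set)
    for u, v in checkins:
        venues[u].add(v)
    scores = {}
    for u, w in combinations(sorted(venues), 2):
        if frozenset((u, w)) in friendships:
            continue  # already linked; nothing to predict
        common = len(venues[u] & venues[w])
        if common:
            scores[(u, w)] = common
    return scores

checkins = [("u1", "v1"), ("u1", "v2"), ("u2", "v2"), ("u2", "v3"),
            ("u3", "v1"), ("u3", "v2"), ("u3", "v3")]
friendships = {frozenset(("u1", "u2"))}
scores = place_friend_scores(checkins, friendships)
```

In a real system this count would be one feature among several (check-in frequency, venue popularity, geographic distance) fed to a supervised classifier.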
1602.03557 | 2950901115 | Recently there has been significant interest around designing specialized RDF engines, as traditional query processing mechanisms incur orders of magnitude performance gaps on many RDF workloads. At the same time researchers have released new worst-case optimal join algorithms which can be asymptotically better than the join algorithms in traditional engines. In this paper we apply worst-case optimal join algorithms to a standard RDF workload, the LUBM benchmark, for the first time. We do so using two worst-case optimal engines: (1) LogicBlox, a commercial database engine, and (2) EmptyHeaded, our prototype research engine with enhanced worst-case optimal join algorithms. We show that without any added optimizations both LogicBlox and EmptyHeaded outperform two state-of-the-art specialized RDF engines, RDF-3X and TripleBit, by up to 6x on cyclic join queries-the queries where traditional optimizers are suboptimal. On the remaining, less complex queries in the LUBM benchmark, we show that three classic query optimization techniques enable EmptyHeaded to compete with RDF engines, even when there is no asymptotic advantage to the worst-case optimal approach. We validate that our design has merit as EmptyHeaded outperforms MonetDB by three orders of magnitude and LogicBlox by two orders of magnitude, while remaining within an order of magnitude of RDF-3X and TripleBit. | * Multi-Way Engines The first worst-case optimal join algorithm was recently derived @cite_17 . The LogicBlox (LB) engine @cite_12 is the first commercial database engine to use a worst-case optimal algorithm. Recent theoretical advances @cite_7 have suggested worst-case optimal join processing is applicable beyond standard join pattern queries. We continue in this line of work, applying worst-case optimal algorithms to a standard RDF workload. | {
"cite_N": [
"@cite_7",
"@cite_12",
"@cite_17"
],
"mid": [
"2290724595",
"2008865455",
"2790840297"
],
"abstract": [
"We study a class of aggregate-join queries with multiple aggregation operators evaluated over annotated relations. We show that straightforward extensions of standard multiway join algorithms and generalized hypertree decompositions (GHDs) provide best-known runtime guarantees. In contrast, prior work uses bespoke algorithms and data structures and does not match these guarantees. Our extensions to the standard techniques are a pair of simple tests that (1) determine if two orderings of aggregation operators are equivalent and (2) determine if a GHD is compatible with a given ordering. These tests provide a means to find an optimal GHD that, when provided to standard join algorithms, will correctly answer a given aggregate-join query. The second class of our contributions is a pair of complete characterizations of (1) the set of orderings equivalent to a given ordering and (2) the set of GHDs compatible with some equivalent ordering. We show by example that previous approaches are incomplete. The key technical consequence of our characterizations is a decomposition of a compatible GHD into a set of (smaller) unconstrained GHDs, i.e. into a set of GHDs of sub-queries without aggregations. Since this decomposition is comprised of unconstrained GHDs, we are able to connect to the wide literature on GHDs for join query processing, thereby obtaining improved runtime bounds, MapReduce variants, and an efficient method to find approximately optimal GHDs.",
"The LogicBlox system aims to reduce the complexity of software development for modern applications which enhance and automate decision-making and enable their users to evolve their capabilities via a self-service'' model. Our perspective in this area is informed by over twenty years of experience building dozens of mission-critical enterprise applications that are in use by hundreds of large enterprises across industries such as retail, telecommunications, banking, and government. We designed and built LogicBlox to be the system we wished we had when developing those applications. In this paper, we discuss the design considerations behind the LogicBlox system and give an overview of its implementation, highlighting innovative aspects. These include: LogiQL, a unified and declarative language based on Datalog; the use of purely functional data structures; novel join processing strategies; advanced incremental maintenance and live programming facilities; a novel concurrency control scheme; and built-in support for prescriptive and predictive analytics.",
"Efficient join processing is one of the most fundamental and well-studied tasks in database research. In this work, we examine algorithms for natural join queries over many relations and describe a novel algorithm to process these queries optimally in terms of worst-case data complexity. Our result builds on recent work by Atserias, Grohe, and Marx, who gave bounds on the size of a full conjunctive query in terms of the sizes of the individual relations in the body of the query. These bounds, however, are not constructive: they rely on Shearer's entropy inequality which is information-theoretic. Thus, the previous results leave open the question of whether there exist algorithms whose running time achieve these optimal bounds. An answer to this question may be interesting to database practice, as we show in this paper that any project-join plan is polynomially slower than the optimal bound for some queries. We construct an algorithm whose running time is worst-case optimal for all natural join queries. Our result may be of independent interest, as our algorithm also yields a constructive proof of the general fractional cover bound by Atserias, Grohe, and Marx without using Shearer's inequality. In addition, we show that this bound is equivalent to a geometric inequality by Bollobas and Thomason, one of whose special cases is the famous Loomis-Whitney inequality. Hence, our results algorithmically prove these inequalities as well. Finally, we discuss how our algorithm can be used to compute a relaxed notion of joins."
]
} |
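To make the worst-case optimal join idea concrete, here is a sketch of a generic attribute-at-a-time join in the spirit of the NPRR/generic-join algorithm, evaluated on the triangle query. It is a didactic Python toy, not the LogicBlox or EmptyHeaded implementation, and it omits the trie/index structures that make the real engines fast: each attribute is bound in turn by intersecting the candidate values from every relation that mentions it.

```python
def generic_join(relations, attrs):
    """Attribute-at-a-time join (generic-join sketch).

    relations: list of (schema, tuples) with schema a tuple of attr names
    and tuples a set of same-arity value tuples.
    attrs: global attribute order. Returns the set of result tuples.
    """
    def extend(binding, remaining):
        if not remaining:
            yield tuple(binding[a] for a in attrs)
            return
        a = remaining[0]
        # Intersect the candidate values for `a` over every relation that
        # mentions it, restricted to tuples consistent with the binding.
        candidates = None
        for schema, tuples in relations:
            if a not in schema:
                continue
            vals = {t[schema.index(a)] for t in tuples
                    if all(t[schema.index(b)] == binding[b]
                           for b in schema if b in binding)}
            candidates = vals if candidates is None else candidates & vals
        for v in candidates or set():
            binding[a] = v
            yield from extend(binding, remaining[1:])
            del binding[a]

    return set(extend({}, list(attrs)))

# Triangle query Q(x, y, z) :- E(x, y), E(y, z), E(x, z) on a 4-vertex graph
# containing exactly one (directed) triangle 1-2-3.
E = {(1, 2), (2, 3), (1, 3), (3, 4)}
triangles = generic_join([(("x", "y"), E), (("y", "z"), E), (("x", "z"), E)],
                         ("x", "y", "z"))
```

The repeated set intersections are exactly where worst-case optimality comes from: no intermediate result larger than the AGM bound is ever materialized, unlike in a pairwise join plan.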
1602.03718 | 2952011438 | We initiate a thorough study of distributed property testing -- producing algorithms for the approximation problems of property testing in the CONGEST model. In particular, for the so-called dense testing model we emulate sequential tests for nearly all graph properties having @math -sided tests, while in the general and bounded-degree models we obtain faster tests for triangle-freeness and bipartiteness respectively. In most cases, aided by parallelism, the distributed algorithms have a much shorter running time as compared to their counterparts from the sequential querying model of traditional property testing. The simplest property testing algorithms allow a relatively smooth transitioning to the distributed model. For the more complex tasks we develop new machinery that is of independent interest. This includes a method for distributed maintenance of multiple random walks. | Related to having information being sent to, or received by, a central authority, is the concept of proof-labelling schemes, introduced by @cite_9 (for extensions see, e.g., @cite_21 ). In this setting, each vertex is given some external label, and by exchanging labels the vertices need to decide whether a given property of the graph holds. This is different from our setting in which no information other than vertex IDs is available. Another setting that is related to proof-labelling schemes, but differs from our model, is the prover-verifier model of @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_9",
"@cite_21"
],
"mid": [
"2271364167",
"2056295140",
"2071346873"
],
"abstract": [
"In this work we study local checkability of network properties like s-t reachability, or whether the network is acyclic or contains a cycle. A structural property S of a graph G is locally checkable, if there is a prover-and-verifier pair (P, V) as follows. The prover P assigns a label to each node in graphs satisfying S. The verifier V is a constant time distributed algorithm that returns Yes at all nodes if G satisfies S and was labeled by P, and No for at least one node if G does not satisfy S, regardless of the node labels. The quality of (P, V) is measured in terms of the label size. We obtain (asymptotically) tight bounds for the bit complexity of the latter two problems for undirected as well as directed networks, where in the directed case we consider one-way and two-way communication, i.e., we distinguish whether communication is possible only in the edge direction or not. For the one-way case we obtain a new asymptotically tight lower bound for the bit complexity of s-t reachability. For the two-way case we devise an emulation technique that allows us to transfer a previously known s-t reachability upper bound without asymptotic loss in the bit complexity.",
"This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.",
"Abstract Borůvka presented in 1926 the first solution of the Minimum Spanning Tree Problem (MST) which is generally regarded as a cornerstone of Combinatorial Optimization. In this paper we present the first English translation of both of his pioneering works. This is followed by the survey of development related to the MST problem and by remarks and historical perspective. Out of many available algorithms to solve MST the Borůvka's algorithm is the basis of the fastest known algorithms."
]
} |
1602.03718 | 2952011438 | We initiate a thorough study of distributed property testing -- producing algorithms for the approximation problems of property testing in the CONGEST model. In particular, for the so-called dense testing model we emulate sequential tests for nearly all graph properties having @math -sided tests, while in the general and bounded-degree models we obtain faster tests for triangle-freeness and bipartiteness respectively. In most cases, aided by parallelism, the distributed algorithms have a much shorter running time as compared to their counterparts from the sequential querying model of traditional property testing. The simplest property testing algorithms allow a relatively smooth transitioning to the distributed model. For the more complex tasks we develop new machinery that is of independent interest. This includes a method for distributed maintenance of multiple random walks. | Finding induced subgraphs is a crucial task and has been studied in several different distributed models (see, e.g., @cite_25 @cite_12 @cite_3 @cite_7 ). Notice that for subgraphs, having many instances of the desired subgraph can help speed up the computation, as in @cite_7 . This is in contrast to algorithms that perform faster if there are no or only few instances, as explained above, which is why we test for, e.g., the property of being triangle-free, rather than for the property of containing triangles. (Notice that these are not the same, and in fact every graph with @math or more vertices is @math -close to having a triangle.) | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_25",
"@cite_12"
],
"mid": [
"2949944845",
"2950813619",
"2295809089",
"2045760791"
],
"abstract": [
"Let G = (V,E) be an n-vertex graph and M_d a d-vertex graph, for some constant d. Is M_d a subgraph of G? We consider this problem in a model where all n processes are connected to all other processes, and each message contains up to O(log n) bits. A simple deterministic algorithm that requires O(n^((d-2)/d) log n) communication rounds is presented. For the special case that M_d is a triangle, we present a probabilistic algorithm that requires an expected O(ceil(n^(1/3) / (t^(2/3) + 1))) rounds of communication, where t is the number of triangles in the graph, and O(min{n^(1/3) log^(2/3) n / (t^(2/3) + 1), n^(1/3)}) with high probability. We also present deterministic algorithms specially suited for sparse graphs. In any graph of maximum degree Delta, we can test for arbitrary subgraphs of diameter D in O(ceil(Delta^(D+1) / n)) rounds. For triangles, we devise an algorithm featuring a round complexity of O(A^2 / n + log_(2+n/A^2) n), where A denotes the arboricity of G.",
"In this work, we use algebraic methods for studying distance computation and subgraph detection tasks in the congested clique model. Specifically, we adapt parallel matrix multiplication implementations to the congested clique, obtaining an @math round matrix multiplication algorithm, where @math is the exponent of matrix multiplication. In conjunction with known techniques from centralised algorithmics, this gives significant improvements over previous best upper bounds in the congested clique model. The highlight results include: -- triangle and 4-cycle counting in @math rounds, improving upon the @math triangle detection algorithm of [DISC 2012], -- a @math -approximation of all-pairs shortest paths in @math rounds, improving upon the @math -round @math -approximation algorithm of Nanongkai [STOC 2014], and -- computing the girth in @math rounds, which is the first non-trivial solution in this model. In addition, we present a novel constant-round combinatorial algorithm for detecting 4-cycles.",
"We study the message size complexity of recognizing, under the broadcast congested clique model, whether a fixed graph H appears in a given graph G as a minor, as a subgraph or as an induced subgraph. The n nodes of the input graph G are the players, and each player only knows the identities of its immediate neighbors. We are mostly interested in the one-round, simultaneous setup where each player sends a message of size @math to a referee that should be able then to determine whether H appears in G. We consider randomized protocols where the players have access to a common random sequence. We completely characterize which graphs H admit such a protocol. For the particular case where H is the path of 4 nodes, we present a new notion called twin ordering, which may be of independent interest.",
"We consider a distributed task allocation problem in which m players must divide a set of n tasks between them. Each player i receives as input a set Xi of tasks such that the union of all input sets covers the task set. The goal is for each player to output a subset Yi ⊆ Xi, such that the outputs (Y1,...,Ym) form a partition of the set of tasks. The problem can be viewed as a distributed one-shot variant of the well-known k-server problem, and we also show that it is closely related to the problem of finding a rooted spanning tree in directed broadcast networks. We study the communication complexity and round complexity of the task allocation problem. We begin with the classical two-player communication model, and show that the randomized communication complexity of task allocation is Ω(n), even when the set of tasks is known to the players in advance. For the multi-player setting with m = O(n) we give two upper bounds in the shared-blackboard model of communication. We show that the problem can be solved in O(log n) rounds and O(n log n) total bits for arbitrary inputs; moreover, if for any set X of tasks, there are at least α|X| players that have at least one task from X in their inputs, then O((1/α + log m) log n) rounds suffice even if each player can only write O(log n) bits on the blackboard in each round. Finally, we extend our results to the case where the players communicate over an arbitrary directed communication graph instead of a shared blackboard. As an application of these results, we also consider the related problem of constructing a directed spanning tree in strongly-connected directed networks and we show lower and upper bounds for that problem."
]
} |
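The random-walk machinery mentioned in the abstract above is used, among other things, for bipartiteness testing; the classical sequential idea behind it (à la Goldreich-Ron) can be sketched as follows. This is a hedged toy illustration with arbitrary parameters, not the distributed algorithm: run several lazy random walks from a start vertex and record each endpoint together with the parity of the number of steps taken; reaching the same endpoint with both parities witnesses an odd closed walk, hence an odd cycle.

```python
import random

def odd_cycle_witness(adj, n_starts=10, n_walks=50, walk_len=20, seed=0):
    """Random-walk heuristic for non-bipartiteness.

    adj: adjacency dict {vertex: list of neighbors}.
    Returns True if some endpoint is reached with both step parities
    (an odd closed walk, so the graph is not bipartite). One-sided:
    never reports a witness on a bipartite graph, but with these toy
    parameters it may miss odd cycles in large graphs.
    """
    rng = random.Random(seed)
    vertices = list(adj)
    for _ in range(n_starts):
        s = rng.choice(vertices)
        seen = {}  # endpoint -> set of parities with which it was reached
        for _ in range(n_walks):
            v, parity = s, 0
            for _ in range(walk_len):
                if adj[v] and rng.random() < 0.5:  # lazy step
                    v = rng.choice(adj[v])
                    parity ^= 1
            seen.setdefault(v, set()).add(parity)
            if len(seen[v]) == 2:
                return True  # odd closed walk through s and v
    return False

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}          # odd cycle
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # bipartite
```

One-sidedness follows because, in a bipartite graph, the side of the endpoint determines the walk's parity, so no endpoint can ever be seen with both parities; the distributed contribution of the paper is maintaining many such walks concurrently under CONGEST bandwidth limits.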
1602.03481 | 2553923866 | Crowdsourcing platforms provide marketplaces where task requesters can pay to get labels on their data. Such markets have emerged recently as popular venues for collecting annotations that are crucial in training machine learning models in various applications. However, as jobs are tedious and payments are low, errors are common in such crowdsourced labels. A common strategy to overcome such noise in the answers is to add redundancy by getting multiple answers for each task and aggregating them using some methods such as majority voting. For such a system, there is a fundamental question of interest: how can we maximize the accuracy given a fixed budget on how many responses we can collect on the crowdsourcing system. We characterize this fundamental trade-off between the budget (how many answers the requester can collect in total) and the accuracy in the estimated labels. In particular, we ask whether adaptive task assignment schemes lead to a more efficient trade-off between the accuracy and the budget. Adaptive schemes, where tasks are assigned adaptively based on the data collected thus far, are widely used in practical crowdsourcing systems to efficiently use a given fixed budget. However, existing theoretical analyses of crowdsourcing systems suggest that the gain of adaptive task assignments is minimal. To bridge this gap, we investigate this question under a strictly more general probabilistic model, which has been recently introduced to model practical crowdsourced annotations. Under this generalized Dawid-Skene model, we characterize the fundamental trade-off between budget and accuracy. We introduce a novel adaptive scheme that matches this fundamental limit. We further quantify the fundamental gap between adaptive and non-adaptive schemes, by comparing the trade-off with the one for non-adaptive schemes. Our analyses confirm that the gap is significant. | The generalized Dawid-Skene model studied in this paper allows the tasks to be heterogeneous (having different difficulties) and the workers to be heterogeneous (having different reliabilities). The original Dawid-Skene (DS) model introduced in @cite_14 and analyzed in @cite_19 is a special case, when only workers are allowed to be heterogeneous. All tasks have the same difficulty with @math for all @math and @math can be either zero or one depending on the true label. Most of the existing work on the DS model assumes that tasks are randomly assigned and focuses only on the inference problem of finding the true labels. Several inference algorithms have been proposed @cite_14 @cite_3 @cite_7 @cite_31 @cite_33 @cite_18 @cite_34 @cite_35 @cite_30 @cite_32 @cite_5 @cite_27 @cite_6 @cite_4 @cite_26 @cite_17 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_26",
"@cite_32",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_34",
"@cite_17"
],
"mid": [
"1570705485",
"2152009989",
"2140890285",
"9014458",
"2952140632",
"",
"2144372981",
"2284172031",
"2949312134",
"2144660879",
"",
"2163522723",
"",
"108763474",
"2125943921",
"2129345386",
"2636291236"
],
"abstract": [
"Crowdsourcing has become an effective and popular tool for human-powered computation to label large datasets. Since the workers can be unreliable, it is common in crowdsourcing to assign multiple workers to one task, and to aggregate the labels in order to obtain results of high quality. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of general aggregation rules under the Dawid-Skene crowdsourcing model. The bounds are derived for multi-class labeling, and can be used to analyze many aggregation methods, including majority voting, weighted majority voting and the oracle Maximum A Posteriori (MAP) rule. We show that the oracle MAP rule approximately optimizes our upper bound on the mean error rate of weighted majority voting in certain setting. We propose an iterative weighted majority voting (IWMV) method that optimizes the error rate bound and approximates the oracle MAP rule. Its one step version has a provable theoretical guarantee on the error rate. The IWMV method is intuitive and computationally simple. Experimental results on simulated and real data show that IWMV performs at least on par with the state-of-the-art methods, and it has a much lower computational cost (around one hundred times faster) than the state-of-the-art methods.",
"An important way to make large training sets is to gather noisy labels from crowds of nonexperts. We propose a minimax entropy principle to improve the quality of these labels. Our method assumes that labels are generated by a probability distribution over workers, items, and labels. By maximizing the entropy of this distribution, the method naturally infers item confusability and worker expertise. We infer the ground truth by minimizing the entropy of this distribution, which we show minimizes the Kullback-Leibler (KL) divergence between the probability distribution and the unknown truth. We show that a simple coordinate descent scheme can optimize minimax entropy. Empirically, our results are substantially better than previously published methods for the same problem.",
"Crowdsourcing systems, in which tasks are electronically distributed to numerous \"information piece-workers\", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.",
"In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual error-rates to be estimated for polytomous facets even when the patient's \"true\" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.",
"We consider the problem of accurately estimating the reliability of workers based on noisy labels they provide, which is a fundamental question in crowdsourcing. We propose a novel lower bound on the minimax estimation error which applies to any estimation procedure. We further propose Triangular Estimation (TE), an algorithm for estimating the reliability of workers. TE has low complexity, may be implemented in a streaming setting when labels are provided by workers in real time, and does not rely on an iterative procedure. We further prove that TE is minimax optimal and matches our lower bound. We conclude by assessing the performance of TE and other state-of-the-art algorithms on both synthetic and real-world data sets.",
"",
"In this paper, we study a special kind of learning problem in which each training instance is given a set of (or distribution over) candidate class labels and only one of the candidate labels is the correct one. Such a problem can occur, e.g., in an information retrieval setting where a set of words is associated with an image, or if classes labels are organized hierarchically. We propose a novel discriminative approach for handling the ambiguity of class labels in the training examples. The experiments with the proposed approach over five different UCI datasets show that our approach is able to find the correct label among the set of candidate labels and actually achieve performance close to the case when each training instance is given a single correct label. In contrast, naive methods degrade rapidly as more ambiguity is introduced into the labels.",
"We propose a streaming algorithm for the binary classification of data based on crowdsourcing. The algorithm learns the competence of each labeller by comparing her labels to those of other labellers on the same tasks and uses this information to minimize the prediction error rate on each task. We provide performance guarantees of our algorithm for a fixed population of independent labellers. In particular, we show that our algorithm is optimal in the sense that the cumulative regret compared to the optimal decision with known labeller error probabilities is finite, independently of the number of tasks to label. The complexity of the algorithm is linear in the number of labellers and the number of tasks, up to some logarithmic factors. Numerical experiments illustrate the performance of our algorithm compared to existing algorithms, including simple majority voting and expectation-maximization algorithms, on both synthetic and real datasets.",
"Crowdsourcing is a popular paradigm for effectively collecting labels at low cost. The Dawid-Skene estimator has been widely used for inferring the true labels from the noisy labels provided by non-expert crowdsourcing workers. However, since the estimator maximizes a non-convex log-likelihood function, it is hard to theoretically justify its performance. In this paper, we propose a two-stage efficient algorithm for multi-class crowd labeling problems. The first stage uses the spectral method to obtain an initial estimate of parameters. Then the second stage refines the estimation by optimizing the objective function of the Dawid-Skene estimator via the EM algorithm. We show that our algorithm achieves the optimal convergence rate up to a logarithmic factor. We conduct extensive experiments on synthetic and real datasets. Experimental results demonstrate that the proposed algorithm is comparable to the most accurate empirical approach, while outperforming several other recently proposed methods.",
"In remote sensing applications \"ground-truth\" data is often used as the basis for training pattern recognition algorithms to generate thematic maps or to detect objects of interest. In practical situations, experts may visually examine the images and provide a subjective noisy estimate of the truth. Calibrating the reliability and bias of expert labellers is a non-trivial problem. In this paper we discuss some of our recent work on this topic in the context of detecting small volcanoes in Magellan SAR images of Venus. Empirical results (using the Expectation-Maximization procedure) suggest that accounting for subjective noise can be quite significant in terms of quantifying both human and algorithm detection performance.",
"",
"Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information pieceworkers,” have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all such systems must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in an appropriate manner, e.g., majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price i.e., number of task assignments that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, significantly outperforms majority voting and, in fact, is optimal through comparison to an oracle that knows the reliability of every worker. Further, we compare our approach with a more general class of algorithms that can dynamically assign tasks. By adaptively deciding which questions to ask to the next set of arriving workers, one might hope to reduce uncertainty more efficiently. We show that, perhaps surprisingly, the minimum price necessary to achieve a target reliability scales in the same manner under both adaptive and nonadaptive scenarios. Hence, our nonadaptive approach is order optimal under both scenarios. This strongly relies on the fact that workers are fleeting and cannot be exploited. Therefore, architecturally, our results suggest that building a reliable worker-reputation system is essential to fully harnessing the potential of adaptive designs.",
"",
"In this paper we analyze a crowdsourcing system consisting of a set of users and a set of binary choice questions. Each user has an unknown, fixed, reliability that determines the user's error rate in answering questions. The problem is to determine the truth values of the questions solely based on the user answers. Although this problem has been studied extensively, theoretical error bounds have been shown only for restricted settings: when the graph between users and questions is either random or complete. In this paper we consider a general setting of the problem where the user--question graph can be arbitrary. We obtain bounds on the error rate of our algorithm and show it is governed by the expansion of the graph. We demonstrate, using several synthetic and real datasets, that our algorithm outperforms the state of the art.",
"This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality cost regimes, the benefit is substantial.",
"Crowdsourcing has become a popular paradigm for labeling large datasets. However, it has given rise to the computational task of aggregating the crowdsourced labels provided by a collection of unreliable annotators. We approach this problem by transforming it into a standard inference problem in graphical models, and applying approximate variational methods, including belief propagation (BP) and mean field (MF). We show that our BP algorithm generalizes both majority voting and a recent algorithm by [1], while our MF method is closely related to a commonly used EM algorithm. In both cases, we find that the performance of the algorithms critically depends on the choice of a prior distribution on the workers' reliability; by choosing the prior properly, both BP and MF (and EM) perform surprisingly well on both simulated and real-world datasets, competitive with state-of-the-art algorithms based on more complicated modeling assumptions.",
"We consider estimation of worker skills from worker-task interaction data (with unknown labels) for the single-coin crowd-sourcing binary classification model in symmetric noise. We define the (worker) interaction graph whose nodes are workers and an edge between two nodes indicates whether or not the two workers participated in a common task. We show that skills are asymptotically identifiable if and only if an appropriate limiting version of the interaction graph is irreducible and has odd-cycles. We then formulate a weighted rank-one optimization problem to estimate skills based on observations on an irreducible, aperiodic interaction graph. We propose a gradient descent scheme and show that for such interaction graphs estimates converge asymptotically to the global minimum. We characterize noise robustness of the gradient scheme in terms of spectral properties of signless Laplacians of the interaction graph. We then demonstrate that a plug-in estimator based on the estimated skills achieves state-of-art performance on a number of real-world datasets. Our results have implications for rank-one matrix completion problem in that gradient descent can provably recover @math rank-one matrices based on @math off-diagonal observations of a connected graph with a single odd-cycle."
]
} |
1602.03481 | 2553923866 | Crowdsourcing platforms provide marketplaces where task requesters can pay to get labels on their data. Such markets have emerged recently as popular venues for collecting annotations that are crucial in training machine learning models in various applications. However, as jobs are tedious and payments are low, errors are common in such crowdsourced labels. A common strategy to overcome such noise in the answers is to add redundancy by getting multiple answers for each task and aggregating them using some methods such as majority voting. For such a system, there is a fundamental question of interest: how can we maximize the accuracy given a fixed budget on how many responses we can collect on the crowdsourcing system. We characterize this fundamental trade-off between the budget (how many answers the requester can collect in total) and the accuracy in the estimated labels. In particular, we ask whether adaptive task assignment schemes lead to a more efficient trade-off between the accuracy and the budget. Adaptive schemes, where tasks are assigned adaptively based on the data collected thus far, are widely used in practical crowdsourcing systems to efficiently use a given fixed budget. However, existing theoretical analyses of crowdsourcing systems suggest that the gain of adaptive task assignments is minimal. To bridge this gap, we investigate this question under a strictly more general probabilistic model, which has been recently introduced to model practical crowdsourced annotations. Under this generalized Dawid-Skene model, we characterize the fundamental trade-off between budget and accuracy. We introduce a novel adaptive scheme that matches this fundamental limit. We further quantify the fundamental gap between adaptive and non-adaptive schemes, by comparing the trade-off with the one for non-adaptive schemes. Our analyses confirm that the gap is significant. 
| This negative result relies crucially on the fact that, under the standard DS model, all tasks are inherently equally difficult. As all tasks have @math 's either zero or one, the individual difficulty of a task is @math , and a worker's probability of making an error on one task is the same as on any other task. Hence, adaptively assigning more workers to relatively more ambiguous tasks has only a marginal gain. However, simple adaptive schemes are widely used in practice, where significant gains are achieved. In real-world systems, tasks are widely heterogeneous. Some images are much more difficult to classify (and to find the true label for) than others. To capture such varying difficulties in the tasks, generalizations of the DS model were proposed in @cite_13 @cite_22 @cite_15 @cite_36 and significant improvements have been reported on real datasets. | {
"cite_N": [
"@cite_36",
"@cite_15",
"@cite_13",
"@cite_22"
],
"mid": [
"2473938289",
"1814633089",
"2142518823",
""
],
"abstract": [
"The aggregation and denoising of crowd labeled data is a task that has gained increased significance with the advent of crowdsourcing platforms and massive datasets. In this paper, we propose a permutation-based model for crowd labeled data that is a significant generalization of the common Dawid-Skene model, and introduce a new error metric by which to compare different estimators. Working in a high-dimensional non-asymptotic framework that allows both the number of workers and tasks to scale, we derive optimal rates of convergence for the permutation-based model. We show that the permutation-based model offers significant robustness in estimation due to its richness, while surprisingly incurring only a small additional statistical penalty as compared to the Dawid-Skene model. Finally, we propose a computationally-efficient method, called the OBI-WAN estimator, that is uniformly optimal over a class intermediate between the permutation-based and the Dawid-Skene models, and is uniformly consistent over the entire permutation-based model class. In contrast, the guarantees for estimators available in prior literature are sub-optimal over the original Dawid-Skene model.",
"There is a rapidly increasing interest in crowdsourcing for data labeling. By crowdsourcing, a large number of labels can be often quickly gathered at low cost. However, the labels provided by the crowdsourcing workers are usually not of high quality. In this paper, we propose a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels. Under this principle, we derive a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty. We also propose an objective measurement principle, and show that our method is the only method which satisfies this objective measurement principle. We validate our method through a variety of real crowdsourcing datasets with binary, multiclass or ordinal labels.",
"Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used \"Majority Vote\" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers.",
""
]
} |
1602.03481 | 2553923866 | Crowdsourcing platforms provide marketplaces where task requesters can pay to get labels on their data. Such markets have emerged recently as popular venues for collecting annotations that are crucial in training machine learning models in various applications. However, as jobs are tedious and payments are low, errors are common in such crowdsourced labels. A common strategy to overcome such noise in the answers is to add redundancy by getting multiple answers for each task and aggregating them using some methods such as majority voting. For such a system, there is a fundamental question of interest: how can we maximize the accuracy given a fixed budget on how many responses we can collect on the crowdsourcing system. We characterize this fundamental trade-off between the budget (how many answers the requester can collect in total) and the accuracy in the estimated labels. In particular, we ask whether adaptive task assignment schemes lead to a more efficient trade-off between the accuracy and the budget. Adaptive schemes, where tasks are assigned adaptively based on the data collected thus far, are widely used in practical crowdsourcing systems to efficiently use a given fixed budget. However, existing theoretical analyses of crowdsourcing systems suggest that the gain of adaptive task assignments is minimal. To bridge this gap, we investigate this question under a strictly more general probabilistic model, which has been recently introduced to model practical crowdsourced annotations. Under this generalized Dawid-Skene model, we characterize the fundamental trade-off between budget and accuracy. We introduce a novel adaptive scheme that matches this fundamental limit. We further quantify the fundamental gap between adaptive and non-adaptive schemes, by comparing the trade-off with the one for non-adaptive schemes. Our analyses confirm that the gap is significant. 
| On the theoretical understanding of the original DS model, the dense regime, where all workers are assigned all tasks, was studied first. A spectral method for finding the true labels was first analyzed in @cite_33 , and an EM approach initialized with a spectral step was analyzed in @cite_32 and shown to achieve near-optimal performance. The minimax error rate of this problem was identified in @cite_11 by analyzing the MAP estimator, which is computationally intractable. | {
"cite_N": [
"@cite_32",
"@cite_33",
"@cite_11"
],
"mid": [
"2949312134",
"",
"1522318289"
],
"abstract": [
"Crowdsourcing is a popular paradigm for effectively collecting labels at low cost. The Dawid-Skene estimator has been widely used for inferring the true labels from the noisy labels provided by non-expert crowdsourcing workers. However, since the estimator maximizes a non-convex log-likelihood function, it is hard to theoretically justify its performance. In this paper, we propose a two-stage efficient algorithm for multi-class crowd labeling problems. The first stage uses the spectral method to obtain an initial estimate of parameters. Then the second stage refines the estimation by optimizing the objective function of the Dawid-Skene estimator via the EM algorithm. We show that our algorithm achieves the optimal convergence rate up to a logarithmic factor. We conduct extensive experiments on synthetic and real datasets. Experimental results demonstrate that the proposed algorithm is comparable to the most accurate empirical approach, while outperforming several other recently proposed methods.",
"",
"Crowdsourcing has become a primary means for label collection in many real-world machine learning applications. A classical method for inferring the true labels from the noisy labels provided by crowdsourcing workers is Dawid-Skene estimator. In this paper, we prove convergence rates of global optimizers of Dawid-Skene estimator. The revealed exponent in the rate of convergence is shown to be optimal via a lower bound argument. A projected EM algorithm is analyzed and is shown to achieve nearly the same exponent as that of the global optimizers. Our work resolves the long standing issue of whether Dawid-Skene estimator has sound theoretical guarantees besides its good performance observed in practice. In addition, a comparative study with majority voting illustrates both advantages and pitfalls of Dawid-Skene estimator."
]
} |
1602.03481 | 2553923866 | Crowdsourcing platforms provide marketplaces where task requesters can pay to get labels on their data. Such markets have emerged recently as popular venues for collecting annotations that are crucial in training machine learning models in various applications. However, as jobs are tedious and payments are low, errors are common in such crowdsourced labels. A common strategy to overcome such noise in the answers is to add redundancy by getting multiple answers for each task and aggregating them using some methods such as majority voting. For such a system, there is a fundamental question of interest: how can we maximize the accuracy given a fixed budget on how many responses we can collect on the crowdsourcing system. We characterize this fundamental trade-off between the budget (how many answers the requester can collect in total) and the accuracy in the estimated labels. In particular, we ask whether adaptive task assignment schemes lead to a more efficient trade-off between the accuracy and the budget. Adaptive schemes, where tasks are assigned adaptively based on the data collected thus far, are widely used in practical crowdsourcing systems to efficiently use a given fixed budget. However, existing theoretical analyses of crowdsourcing systems suggest that the gain of adaptive task assignments is minimal. To bridge this gap, we investigate this question under a strictly more general probabilistic model, which has been recently introduced to model practical crowdsourced annotations. Under this generalized Dawid-Skene model, we characterize the fundamental trade-off between budget and accuracy. We introduce a novel adaptive scheme that matches this fundamental limit. We further quantify the fundamental gap between adaptive and non-adaptive schemes, by comparing the trade-off with the one for non-adaptive schemes. Our analyses confirm that the gap is significant. 
| In this paper, we are interested in a more challenging setting where each task is assigned to only a small number @math of workers. For a non-adaptive task assignment, a novel spectral algorithm based on the non-backtracking operator of the matrix @math was analyzed under the original DS model in @cite_18 , which showed that the proposed spectral approach is near-optimal. Further, @cite_19 showed that any non-adaptive task assignment scheme yields only a marginal improvement in the error rate under the original DS model; hence, there is no significant gain from adaptivity. | {
"cite_N": [
"@cite_19",
"@cite_18"
],
"mid": [
"2163522723",
"2140890285"
],
"abstract": [
"Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information pieceworkers,” have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all such systems must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in an appropriate manner, e.g., majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price i.e., number of task assignments that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, significantly outperforms majority voting and, in fact, is optimal through comparison to an oracle that knows the reliability of every worker. Further, we compare our approach with a more general class of algorithms that can dynamically assign tasks. By adaptively deciding which questions to ask to the next set of arriving workers, one might hope to reduce uncertainty more efficiently. We show that, perhaps surprisingly, the minimum price necessary to achieve a target reliability scales in the same manner under both adaptive and nonadaptive scenarios. Hence, our nonadaptive approach is order optimal under both scenarios. This strongly relies on the fact that workers are fleeting and cannot be exploited. Therefore, architecturally, our results suggest that building a reliable worker-reputation system is essential to fully harnessing the potential of adaptive designs.",
"Crowdsourcing systems, in which tasks are electronically distributed to numerous \"information piece-workers\", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker."
]
} |
1602.03316 | 2615167184 | Decentralized systems can be more resistant to operator mischief than centralized ones, but they are substantially harder to develop, deploy, and maintain. This cost is dramatically reduced if the decentralized part of the system can be made highly generic, and thus incorporated into many different applications. We show how existing anonymization systems can serve this purpose, securing a public database against equivocation by its operator without the need for cooperation by the database owner. We derive bounds on the probability of successful equivocation, and in doing so, we demonstrate that anonymization systems are not only important for user privacy, but that by providing privacy to machines they have a wider value within the internet infrastructure | The problem of obtaining agreement on a value amongst several---possibly malicious---users is an old one, known as the Byzantine Generals problem, and was first analyzed by Lamport in 1982 @cite_0 . Several officers plan for an attack, in which they must act simultaneously in order to be successful. This is complicated by the knowledge that some of the officers may be traitors---including the general in command of all of them---and may therefore send different messages to different units in an effort to induce a doomed attack. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2120510885"
],
"abstract": [
"Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed."
]
} |
1602.03228 | 2951287970 | The busy beaver is a well-known specific example of a non-computable function. Whilst many aspects of this problem have been investigated, it is not always easy to find thorough and convincing evidence for the claims made about the maximality of particular machines, and the phenomenal size of some of the numbers involved means that it is not obvious that the problem can be feasibly addressed at all. In this paper we address both of these issues. We discuss a framework in which the busy beaver problem and similar problems may be addressed, and the appropriate processes for providing evidence of claims made. We also show how a simple heuristic, which we call the observant otter, can be used to evaluate machines with an extremely large number of execution steps required to terminate. We also show empirical results for an implementation of this heuristic which show that it is effective for all known 'monster' machines. | The relationship between @math and @math has been investigated @cite_29 @cite_2 @cite_0 , and it is known that @math for a constant @math @cite_38 . However, this is still rather loose, and does not give us much insight into the relationship between @math and @math . In a similar manner, lower bounds on @math have been known for some time @cite_5 ; however, those given for @math have been far surpassed already. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"1998724425",
"2091219308",
"2067356164",
"2010433167",
"2536096804"
],
"abstract": [
"About the relation between the shift function S(n) and the busybeaver function Σ(n), there have been some results. In 1992,A. Julstrom gave his result: S(n) < Σ(20n); in 1994, WangKewen and Xu Shurun obtained: S(n) < Σ(10n). In thispaper, we shall show that S(n) ≤ Σ(3n -f c)(c is aconstant).",
"The Busy Beaver function S(n) is the maximum number of 1's a halting n-state Turing machine may leave on an initially blank tape. The shift function S(n) is the maximum number of moves such a machine may make before it halts. This paper shows that S(n) l S(20n), then uses this result to prove that both S(n and S(n) are non-computable and their non-computability is equivalent to the undecidability of the halting problem. Demonstrations that several other functions are also non-computable apply a construction used in the proof of the bound on S(n).",
"Consider Turing machines that use a tape infinite in both directions, with the tape alphabet 0,1 . Rado's busy beaver function, ones(n), is the maximum number of 1's such a machine, with n states, started on a blank (all-zero) tape, may leave on its tape when it halts. The function ones(n) is non-computable; in fact, it grows faster than any computable function. Other functions with a similar nature can be defined also. All involve machines of n states, started on a blank tape. The function time(n) is the maximum number of moves such a machine may make before halting. The function num(n) is the largest number of 1's such a machine may leave on its tape in the form of a single run; and the function space(n) is the maximum number of tape squares such a machine may scan before it halts. This paper establishes new bounds on these functions in terms of each other. Specifically, we bound time(n) by num(n+o(n)), improving on the previously known bound num(3n+6) . This result is obtained using a kind of self-interpreting'' Turing machine. We also improve on the trivial relation space(n) ≤ time(n) , using a technique of counting crossing sequences.",
"Consider Turing machines that read and write the symbols 1 and 0 on a one-dimensional tape that is infinite in both directions, and halt when started on a tape containing all O's. Rado'sbusy beaver function ones(n) is the maximum number of 1's such a machine, withn states, may leave on its tape when it halts. The function ones(n) is noncomputable; in fact, it grows faster than any computable function.",
"In this note we show how to construct some simply-configured N-state binary Turing machines that will start on a blank tape and eventually halt after printing a very large number of ones. The number of ones produced by these machines can be expressed analytically in terms of functional difference equation. The latter expression furnishes the best lower bound presently known for Rado's noncomputable function, Σ(N), when N ≫ 5."
]
} |
1602.03346 | 2951597672 | Action parsing in videos with complex scenes is an interesting but challenging task in computer vision. In this paper, we propose a generic 3D convolutional neural network in a multi-task learning manner for effective Deep Action Parsing (DAP3D-Net) in videos. Particularly, in the training phase, action localization, classification and attributes learning can be jointly optimized on our appearance-motion data via DAP3D-Net. For an upcoming test video, we can describe each individual action in the video simultaneously as: Where the action occurs, What the action is and How the action is performed. To well demonstrate the effectiveness of the proposed DAP3D-Net, we also contribute a new Numerous-category Aligned Synthetic Action dataset, i.e., NASA, which consists of 200,000 action clips of more than 300 categories and with 33 pre-defined action attributes in two hierarchical levels (i.e., low-level attributes of basic body part movements and high-level attributes related to action motion). We learn DAP3D-Net using the NASA dataset and then evaluate it on our collected Human Action Understanding (HAU) dataset. Experimental results show that our approach can accurately localize, categorize and describe multiple actions in realistic videos. | Since little work has been done on action parsing in videos for simultaneously solving the problems of action localization, categorization and attributes learning, in this section, we mainly review some related work on action detection and action attributes modeling. Action detection can be regarded as a combination of action localization and categorization. In @cite_29 , a weakly supervised model with multiple instance learning was applied for action detection. In @cite_60 , a dynamic-poselets method was introduced. A branch-and-bound algorithm @cite_48 was proposed to reduce the action detection complexity. There also exist some sub-volume @cite_30 @cite_3 @cite_51 based action detection methods. 
Besides, cross-dataset action detection @cite_31 and action detection based on spatio-temporal deformable part models @cite_49 have also been proposed in previous studies. Additionally, action detection via fast proposals was developed in @cite_11 . | {
"cite_N": [
"@cite_30",
"@cite_60",
"@cite_48",
"@cite_29",
"@cite_3",
"@cite_49",
"@cite_31",
"@cite_51",
"@cite_11"
],
"mid": [
"2131311058",
"410625161",
"2123477621",
"2016208906",
"2137981002",
"2095661305",
"2097342496",
"2018977674",
"1945129080"
],
"abstract": [
"In this paper we develop an algorithm for action recognition and localization in videos. The algorithm uses a figure-centric visual word representation. Different from previous approaches it does not require reliable human detection and tracking as input. Instead, the person location is treated as a latent variable that is inferred simultaneously with action recognition. A spatial model for an action is learned in a discriminative fashion under a figure-centric representation. Temporal smoothness over video sequences is also enforced. We present results on the UCF-Sports dataset, verifying the effectiveness of our model in situations where detection and tracking of individuals is challenging.",
"Action detection is of great importance in understanding human motion from video. Compared with action recognition, it not only recognizes action type, but also localizes its spatiotemporal extent. This paper presents a relational model for action detection, which first decomposes human action into temporal “key poses” and then further into spatial “action parts”. Specifically, we start by clustering cuboids around each human joint into dynamic-poselets using a new descriptor. The cuboids from the same cluster share consistent geometric and dynamic structure, and each cluster acts as a mixture of body parts. We then propose a sequential skeleton model to capture the relations among dynamic-poselets. This model unifies the tasks of learning the composites of mixture dynamic-poselets, the spatiotemporal structures of action parts, and the local model for each action part in a single framework. Our model not only allows to localize the action in a video stream, but also enables a detailed pose estimation of an actor. We formulate the model learning problem in a structured SVM framework and speed up model inference by dynamic programming. We conduct experiments on three challenging action detection datasets: the MSR-II dataset, the UCF Sports dataset, and the JHMDB dataset. The results show that our method achieves superior performance to the state-of-the-art methods on these datasets.",
"Actions are spatiotemporal patterns. Similar to the sliding window-based object detection, action detection finds the reoccurrences of such spatiotemporal patterns through pattern matching, by handling cluttered and dynamic backgrounds and other types of action variations. We address two critical issues in pattern matching-based action detection: 1) the intrapattern variations in actions, and 2) the computational efficiency in performing action pattern search in cluttered scenes. First, we propose a discriminative pattern matching criterion for action classification, called naive Bayes mutual information maximization (NBMIM). Each action is characterized by a collection of spatiotemporal invariant features and we match it with an action class by measuring the mutual information between them. Based on this matching criterion, action detection is to localize a subvolume in the volumetric video space that has the maximum mutual information toward a specific action class. A novel spatiotemporal branch-and-bound (STBB) search algorithm is designed to efficiently find the optimal solution. Our proposed action detection method does not rely on the results of human detection, tracking, or background subtraction. It can handle action variations such as performing speed and style variations as well as scale changes well. It is also insensitive to dynamic and cluttered backgrounds and even to partial occlusions. The cross-data set experiments on action detection, including KTH, CMU action data sets, and another new MSR action data set, demonstrate the effectiveness and efficiency of the proposed multiclass multiple-instance action detection method.",
"The detection of human action in videos of busy natural scenes with dynamic background is of interest for applications such as video surveillance. Taking a conventional fully supervised approach, the spatio-temporal locations of the action of interest have to be manually annotated frame by frame in the training videos, which is tedious and unreliable. In this paper, for the first time, a weakly supervised action detection method is proposed which only requires binary labels of the videos indicating the presence of the action of interest. Given a training set of binary labelled videos, the weakly supervised learning (WSL) problem is recast as a multiple instance learning (MIL) problem. A novel MIL algorithm is developed which differs from the existing MIL algorithms in that it locates the action of interest spatially and temporally by globally optimising both inter- and intra-class distance. We demonstrate through experiments that our WSL approach can achieve comparable detection performance to a fully supervised learning approach, and that the proposed MIL algorithm significantly outperforms the existing ones.",
"Real-world actions occur often in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives.",
"Deformable part models have achieved impressive performance for object detection, even on difficult image datasets. This paper explores the generalization of deformable part models from 2D images to 3D spatiotemporal volumes to better study their effectiveness for action detection in video. Actions are treated as spatiotemporal patterns and a deformable part model is generated for each action from a collection of examples. For each action model, the most discriminative 3D subvolumes are automatically selected as parts and the spatiotemporal relations between their locations are learned. By focusing on the most distinctive parts of each action, our models adapt to intra-class variation and show robustness to clutter. Extensive experiments on several video datasets demonstrate the strength of spatiotemporal DPMs for classifying and localizing actions.",
"In recent years, many research works have been carried out to recognize human actions from video clips. To learn an effective action classifier, most of the previous approaches rely on enough training labels. When being required to recognize the action in a different dataset, these approaches have to re-train the model using new labels. However, labeling video sequences is a very tedious and time-consuming task, especially when detailed spatial locations and time durations are required. In this paper, we propose an adaptive action detection approach which reduces the requirement of training labels and is able to handle the task of cross-dataset action detection with few or no extra training labels. Our approach combines model adaptation and action detection into a Maximum a Posterior (MAP) estimation framework, which explores the spatial-temporal coherence of actions and makes good use of the prior information which can be obtained without supervision. Our approach obtains state-of-the-art results on the KTH action dataset using only 50% of the training labels required by traditional approaches. Furthermore, we show that our approach is effective for the cross-dataset detection which adapts the model trained on KTH to two other challenging datasets.",
"For automated surveillance, it is useful to detect specific actions performed by people in busy natural environments. This differs from and thus more challenging than the intensively studied action recognition problem in that for action detection in crowd an action of interest is often overwhelmed by large number of background activities of other objects in the scene. Motivated by the success of sliding-window based 2D object detection approaches, in this paper, we propose to tackle the problem by learning a discriminative classifier from annotated 3D action cuboids to capture the intra-class variation, and sliding 3D search windows for detection. The key innovation of our method is a novel greedy k nearest neighbour algorithm for automated annotation of positive training data, by which an action detector can be learned with only a single training sequence being annotated thus greatly alleviating the tedious and unreliable 3D manual annotation. Extensive experiments on real-world action detection datasets demonstrate that our detector trained with minimal annotation can achieve comparable results to that learned with full annotation, and outperforms existing methods.",
"In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the actionness of the video tubes. After picking those spatiotemporal paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search."
]
} |
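Several abstracts in the record above cast detection as a search for the highest-scoring spatiotemporal subvolume (the branch-and-bound search in the NBMIM abstract, the "sliding 3D search windows" in the surveillance abstract). A minimal brute-force sketch of that objective, assuming a toy per-voxel score map; all names here are illustrative, not taken from the cited papers:

```python
import numpy as np

def best_subvolume(score, size):
    """Exhaustively find the spatiotemporal window (t, y, x) of fixed
    `size` whose summed per-voxel score is maximal -- a toy stand-in
    for the subvolume search the detectors above perform."""
    T, H, W = score.shape
    dt, dy, dx = size
    best, best_pos = -np.inf, None
    for t in range(T - dt + 1):
        for y in range(H - dy + 1):
            for x in range(W - dx + 1):
                s = score[t:t + dt, y:y + dy, x:x + dx].sum()
                if s > best:
                    best, best_pos = s, (t, y, x)
    return best, best_pos

# Toy score map with one planted high-scoring "action" region.
rng = np.random.default_rng(0)
score = rng.normal(-0.1, 0.05, size=(8, 10, 10))
score[2:5, 3:6, 4:7] += 1.0
val, pos = best_subvolume(score, (3, 3, 3))
print(pos)  # (2, 3, 4): the planted region is recovered
```

Branch-and-bound, as described in the NBMIM abstract, replaces this exhaustive triple loop with upper-bound pruning over sets of candidate windows; the brute-force version only makes the objective concrete.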
1602.03346 | 2951597672 | Action parsing in videos with complex scenes is an interesting but challenging task in computer vision. In this paper, we propose a generic 3D convolutional neural network in a multi-task learning manner for effective Deep Action Parsing (DAP3D-Net) in videos. Particularly, in the training phase, action localization, classification and attributes learning can be jointly optimized on our appearance-motion data via DAP3D-Net. For an upcoming test video, we can describe each individual action in the video simultaneously as: Where the action occurs, What the action is and How the action is performed. To well demonstrate the effectiveness of the proposed DAP3D-Net, we also contribute a new Numerous-category Aligned Synthetic Action dataset, i.e., NASA, which consists of 200,000 action clips of more than 300 categories and with 33 pre-defined action attributes in two hierarchical levels (i.e., low-level attributes of basic body part movements and high-level attributes related to action motion). We learn DAP3D-Net using the NASA dataset and then evaluate it on our collected Human Action Understanding (HAU) dataset. Experimental results show that our approach can accurately localize, categorize and describe multiple actions in realistic videos. | For action attributes modeling, @cite_2 used high-level semantic attributes to represent human actions in videos and further constructed more descriptive models for the action recognition task. A similar idea has also been applied in @cite_12 @cite_34 for improved action categorization. Moreover, a convolutional multi-task learning method @cite_5 has been adopted for action recognition from low-level features with attribute regularization. In @cite_13 , a robust learning framework using relative attributes was developed for human action recognition. Additionally, action attributes and object-parts from images were also used for action recognition in @cite_19 .
However, all of the above studies focus mainly on action recognition by means of attributes rather than on general action-attribute learning tasks. Although the authors of @cite_47 jointly tackled classification and attribute annotation for group activities, their approach is still a separate feature-extraction-and-attribute-learning pipeline rather than an end-to-end framework like our multi-task DAP3D-Net. Besides, DAP3D-Net focuses on simultaneous localization and parsing of multiple actions, while in @cite_47 only global representations of group activities are considered. | {
"cite_N": [
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_47",
"@cite_34",
"@cite_13",
"@cite_12"
],
"mid": [
"2038765747",
"2064851185",
"2020082763",
"2150674759",
"1963135442",
"2042328763",
"2181378550"
],
"abstract": [
"In this work, we propose to use attributes and parts for recognizing human actions in still images. We define action attributes as the verbs that describe the properties of human actions, while the parts of actions are objects and poselets that are closely related to the actions. We jointly model the attributes and parts by learning a set of sparse bases that are shown to carry much semantic meaning. Then, the attributes and parts of an action image can be reconstructed from sparse coefficients with respect to the learned bases. This dual sparsity provides theoretical guarantee of our bases learning and feature reconstruction approach. On the PASCAL action dataset and a new “Stanford 40 Actions” dataset, we show that our method extracts meaningful high-order interactions between attributes and parts in human actions while achieving state-of-the-art classification performance.",
"In this paper we explore the idea of using high-level semantic concepts, also called attributes, to represent human actions from videos and argue that attributes enable the construction of more descriptive models for human action recognition. We propose a unified framework wherein manually specified attributes are: i) selected in a discriminative fashion so as to account for intra-class variability; ii) coherently integrated with data-driven attributes to make the attribute set more descriptive. Data-driven attributes are automatically inferred from the training data using an information theoretic approach. Our framework is built upon a latent SVM formulation where latent variables capture the degree of importance of each attribute for each action class. We also demonstrate that our attribute-based action representation can be effectively used to design a recognition procedure for classifying novel action classes for which no training samples are available. We test our approach on several publicly available datasets and obtain promising results that quantitatively demonstrate our theoretical claims.",
"Recently, attributes have been introduced as a kind of high-level semantic information to help improve the classification accuracy. Multitask learning is an effective methodology to achieve this goal, which shares low-level features between attributes and actions. Yet such methods neglect the constraints that attributes impose on classes, which may fail to constrain the semantic relationship between the attributes and actions. In this paper, we explicitly consider such attribute-action relationship for human action recognition, and correspondingly, we modify the multitask learning model by adding attribute regularization. In this way, the learned model not only shares the low-level features, but also gets regularized according to the semantic constraints. In addition, since attribute and class label contain different amounts of semantic information, we separately treat attribute classifiers and action classifiers in the framework of multitask learning for further performance improvement. Our method is verified on three challenging datasets (KTH, UIUC, and Olympic Sports), and the experimental results demonstrate that our method achieves better results than those of previous methods on human action recognition.",
"The rapid development of social video sharing platforms has created a huge demand for automatic video classification and annotation techniques, in particular for videos containing social activities of a group of people (e.g. YouTube video of a wedding reception). Recently, attribute learning has emerged as a promising paradigm for transferring learning to sparsely labelled classes in object or single-object short action classification. In contrast to existing work, this paper, for the first time, tackles the problem of attribute learning for understanding group social activities with sparse labels. This problem is more challenging because of the complex multi-object nature of social activities, and the unstructured nature of the activity context. To solve this problem, we (1) contribute an unstructured social activity attribute (USAA) dataset with both visual and audio attributes, (2) introduce the concept of semi-latent attribute space and (3) propose a novel model for learning the latent attributes which alleviate the dependence of existing models on exact and exhaustive manual specification of the attribute-space. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multi-media sparse data learning tasks including: multi-task learning, N-shot transfer learning, learning with label noise and importantly zero-shot learning.",
"When learning a new classifier, poor quality training data can significantly degrade performance. Applying selection conditions to the training data can prevent mislabeled, noisy, or damaged data from skewing the classifier. We extend a set of action attributes and apply training case attribute selection conditions to a challenging action recognition dataset.",
"High-level semantic feature is important to recognize human action. Recently, relative attributes, which are used to describe relative relationship, have been proposed as one of high-level semantic features and have shown promising performance. However, the training process is very sensitive to noises and moreover it is not robust to zero-shot learning. In this paper, to overcome these drawbacks, we propose a robust learning framework using relative attributes for human action recognition. We simultaneously add Sigmoid and Gaussian envelops into the loss objective. In this way, the influence of outliers will be greatly reduced in the process of optimization, thus improving the accuracy. In addition, we adopt Gaussian Mixture models for better fitting the distribution of actions in rank score space. Correspondingly, a novel transfer strategy is proposed to evaluate the parameters of Gaussian Mixture models for unseen classes. Our method is verified on three challenging datasets (KTH, UIUC and HOLLYWOOD2), and the experimental results demonstrate that our method achieves better results than previous methods in both zero-shot classification and traditional recognition task for human action recognition.",
"Human action recognition has received much interest in the computer vision community. Most existing methods focus either on constructing robust descriptors from the temporal domain or on computational methods to exploit the discriminative power of the descriptor. In this paper we explore the idea of using local action attributes to form an action descriptor, where an action is no longer characterized by the motion changes in the temporal domain but by the local semantic description of the action. We propose a novel framework that introduces local action attributes to represent an action for the final human action categorization. The local action attributes are defined for each body part and are independent of the global action. The resulting attribute descriptor is used to jointly model human action to achieve robust performance. In addition, we study the impact of using body-local and global low-level features for the aforementioned attributes. Experiments on the KTH dataset and the MV-TJU dataset show that our local-action-attribute-based descriptor improves action recognition performance."
]
} |
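The rows above repeatedly describe multitask learning that shares low-level features between attribute and action classifiers, with an attribute term regularizing the action objective. A minimal numpy sketch of such a joint loss; the architecture, random weights, and the λ weighting are illustrative assumptions, not the cited models:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 32))        # low-level features for 16 clips
W_shared = rng.normal(size=(32, 8))  # representation shared by both tasks
W_act = rng.normal(size=(8, 5))      # action head (5 classes)
W_att = rng.normal(size=(8, 10))     # attribute head (10 binary attributes)
y_act = rng.integers(0, 5, size=16)
y_att = rng.integers(0, 2, size=(16, 10)).astype(float)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

H = np.maximum(X @ W_shared, 0.0)               # shared hidden features
p_act = softmax(H @ W_act)                      # action posteriors
p_att = 1.0 / (1.0 + np.exp(-(H @ W_att)))      # per-attribute probabilities

# Action cross-entropy plus attribute binary cross-entropy as regularizer.
ce_act = -np.log(p_act[np.arange(16), y_act] + 1e-12).mean()
bce_att = -(y_att * np.log(p_att + 1e-12)
            + (1.0 - y_att) * np.log(1.0 - p_att + 1e-12)).mean()
lam = 0.5                                       # attribute-regularization weight
loss = ce_act + lam * bce_att
print(loss > 0.0)  # a finite, positive joint objective
```

Because both heads read the same `H`, gradients from the attribute loss shape the shared representation, which is the "semantic constraint" effect the abstracts describe.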
1602.03264 | 2949457404 | We show that a generative random field model, which we call generative ConvNet, can be derived from the commonly used discriminative ConvNet, by assuming a ConvNet for multi-category classification and assuming one of the categories is a base category generated by a reference distribution. If we further assume that the non-linearity in the ConvNet is Rectified Linear Unit (ReLU) and the reference distribution is Gaussian white noise, then we obtain a generative ConvNet model that is unique among energy-based models: The model is piecewise Gaussian, and the means of the Gaussian pieces are defined by an auto-encoder, where the filters in the bottom-up encoding become the basis functions in the top-down decoding, and the binary activation variables detected by the filters in the bottom-up convolution process become the coefficients of the basis functions in the top-down deconvolution process. The Langevin dynamics for sampling the generative ConvNet is driven by the reconstruction error of this auto-encoder. The contrastive divergence learning of the generative ConvNet reconstructs the training images by the auto-encoder. The maximum likelihood learning algorithm can synthesize realistic natural image patterns. | The model in the form of an exponential tilting of a reference distribution, where the tilting is defined by a ConvNet, was first proposed by @cite_9 . They did not study the internal representational structure of the model. @cite_12 proposed to learn the FRAME (Filters, Random field, And Maximum Entropy) models based on the pre-learned filters of existing ConvNets, but did not learn the models from scratch. Hierarchical energy-based models were studied in the pioneering work of @cite_3 and @cite_10 . Their models do not correspond directly to modern ConvNets and do not possess the internal representational structure of the generative ConvNet. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_10",
"@cite_12"
],
"mid": [
"1936878994",
"2108581046",
"2185528074",
""
],
"abstract": [
"The convolutional neural networks (CNNs) have proven to be a powerful tool for discriminative learning. Recently researchers have also started to show interest in the generative aspects of CNNs in order to gain a deeper understanding of what they have learned and how to further improve them. This paper investigates generative modeling of CNNs. The main contributions include: (1) We construct a generative model for the CNN in the form of exponential tilting of a reference distribution. (2) We propose a generative gradient for pre-training CNNs by a non-parametric importance sampling scheme, which is fundamentally different from the commonly used discriminative gradient, and yet has the same computational architecture and cost as the latter. (3) We propose a generative visualization method for the CNNs by sampling from an explicit parametric image distribution. The proposed visualization method can directly draw synthetic samples for any given node in a trained CNN by the Hamiltonian Monte Carlo (HMC) algorithm, without resorting to any extra hold-out images. Experiments on the challenging ImageNet benchmark show that the proposed generative gradient pre-training consistently helps improve the performances of CNNs, and the proposed generative visualization method generates meaningful and varied samples of synthetic images from a large-scale deep CNN.",
"We describe a way of modeling high-dimensional data vectors by using an unsupervised, nonlinear, multilayer neural network in which the activity of each neuron-like unit makes an additive contribution to a global energy score that indicates how surprised the network is by the data vector. The connection weights that determine how the activity of each unit depends on the activities in earlier layers are learned by minimizing the energy assigned to data vectors that are actually observed and maximizing the energy assigned to “confabulations” that are generated by perturbing an observed data vector in a direction that decreases its energy under the current model.",
"Deep generative models with multiple hidden layers have been shown to be able to learn meaningful and compact representations of data. In this work we propose deep energy models, which use deep feedforward neural networks to model the energy landscapes that define probabilistic models. We are able to efficiently train all layers of our model simultaneously, allowing the lower layers of the model to adapt to the training of the higher layers, and thereby producing better generative models. We evaluate the generative performance of our models on natural images and demonstrate that this joint training of multiple layers yields qualitative and quantitative improvements over greedy layerwise training. We further generalize our models beyond the commonly used sigmoidal neural networks and show how a deep extension of the product of Student-t distributions model achieves good generative performance. Finally, we introduce a discriminative extension of our model and demonstrate that it outperforms other fully-connected models on object recognition on the NORB dataset.",
""
]
} |
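The "exponential tilting of a reference distribution" that this record refers to can be written out explicitly; a sketch using notation common for this model family, not transcribed from the cited papers:

```latex
% Exponential tilting of a reference distribution q(I) by a ConvNet
% scoring function f(I; w):
p(I; w) \;=\; \frac{1}{Z(w)}\,\exp\{ f(I; w) \}\, q(I),
\qquad
Z(w) \;=\; \int \exp\{ f(I; w) \}\, q(I)\, dI .

% With q(I) taken to be Gaussian white noise,
%   q(I) \propto \exp\!\big( -\tfrac{1}{2\sigma^{2}} \lVert I \rVert^{2} \big),
% the energy of p(I; w) becomes
%   E(I; w) \;=\; \tfrac{1}{2\sigma^{2}} \lVert I \rVert^{2} \;-\; f(I; w),
% whose gradient is what the Langevin dynamics in the abstract above uses.
```

The intractable normalizing constant Z(w) is why sampling-based schemes such as Langevin dynamics and contrastive divergence appear throughout these rows.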
1602.03264 | 2949457404 | We show that a generative random field model, which we call generative ConvNet, can be derived from the commonly used discriminative ConvNet, by assuming a ConvNet for multi-category classification and assuming one of the categories is a base category generated by a reference distribution. If we further assume that the non-linearity in the ConvNet is Rectified Linear Unit (ReLU) and the reference distribution is Gaussian white noise, then we obtain a generative ConvNet model that is unique among energy-based models: The model is piecewise Gaussian, and the means of the Gaussian pieces are defined by an auto-encoder, where the filters in the bottom-up encoding become the basis functions in the top-down decoding, and the binary activation variables detected by the filters in the bottom-up convolution process become the coefficients of the basis functions in the top-down deconvolution process. The Langevin dynamics for sampling the generative ConvNet is driven by the reconstruction error of this auto-encoder. The contrastive divergence learning of the generative ConvNet reconstructs the training images by the auto-encoder. The maximum likelihood learning algorithm can synthesize realistic natural image patterns. | The generative ConvNet model can be viewed as a hierarchical version of the FRAME model, as well as of the Product of Experts and Field of Experts models. These models do not have an explicit Gaussian white noise reference distribution. Thus they do not have the internal auto-encoding representation, and the filters in these models do not play the role of basis functions. In fact, a main motivation for this paper is to reconcile the FRAME model, where the Gabor wavelets play the role of bottom-up filters, with the Olshausen-Field model, where the wavelets play the role of top-down basis functions. The generative ConvNet may be seen as one step towards achieving this goal. See also @cite_4 . | {
"cite_N": [
"@cite_4"
],
"mid": [
"2023443487"
],
"abstract": [
"It is well known that natural images admit sparse representations by redundant dictionaries of basis functions such as Gabor-like wavelets. However, it is still an open question as to what the next layer of representational units above the layer of wavelets should be. We address this fundamental question by proposing a sparse FRAME (Filters, Random field, And Maximum Entropy) model for representing natural image patterns. Our sparse FRAME model is an inhomogeneous generalization of the original FRAME model. It is a non-stationary Markov random field model that reproduces the observed statistical properties of filter responses at a subset of selected locations, scales and orientations. Each sparse FRAME model is intended to represent an object pattern and can be considered a deformable template. The sparse FRAME model can be written as a shared sparse coding model, which motivates us to propose a two-stage algorithm for learning the model. The first stage selects the subset of wavelets from the dictionary by a shared matching pursuit algorithm. The second stage then estimates the parameters of the model given the selected wavelets. Our experiments show that the sparse FRAME models are capable of representing a wide variety of object patterns in natural images and that the learned models are useful for object classification."
]
} |
1602.03264 | 2949457404 | We show that a generative random field model, which we call generative ConvNet, can be derived from the commonly used discriminative ConvNet, by assuming a ConvNet for multi-category classification and assuming one of the categories is a base category generated by a reference distribution. If we further assume that the non-linearity in the ConvNet is Rectified Linear Unit (ReLU) and the reference distribution is Gaussian white noise, then we obtain a generative ConvNet model that is unique among energy-based models: The model is piecewise Gaussian, and the means of the Gaussian pieces are defined by an auto-encoder, where the filters in the bottom-up encoding become the basis functions in the top-down decoding, and the binary activation variables detected by the filters in the bottom-up convolution process become the coefficients of the basis functions in the top-down deconvolution process. The Langevin dynamics for sampling the generative ConvNet is driven by the reconstruction error of this auto-encoder. The contrastive divergence learning of the generative ConvNet reconstructs the training images by the auto-encoder. The maximum likelihood learning algorithm can synthesize realistic natural image patterns. | The relationship between energy-based models with latent variables and auto-encoders was discovered by @cite_5 and @cite_13 via the score matching estimator @cite_8 . This connection requires that the free energy can be calculated analytically, i.e., that the latent variables can be integrated out analytically. This is in general not the case for deep energy-based models with multiple layers of latent variables, such as the deep Boltzmann machine with two layers of hidden units @cite_11 . In this case, one cannot obtain an explicit auto-encoder. In fact, for such models, the inference of the latent variables is in general intractable.
In the generative ConvNet, the multiple layers of binary activation variables come from the ReLU units, and the means of the Gaussian pieces are always defined by an explicit hierarchical auto-encoder. Compared to hierarchical models with explicit binary latent variables, such as those based on the Boltzmann machine, the generative ConvNet is directly derived from the discriminative ConvNet. Our work seems to suggest that, in searching for generative models and unsupervised learning machines, we need look no further than the ConvNet. | {
"cite_N": [
"@cite_5",
"@cite_11",
"@cite_13",
"@cite_8"
],
"mid": [
"2013035813",
"189596042",
"2191540403",
"1505878979"
],
"abstract": [
"Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from them or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models.",
"We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.",
"We consider estimation methods for the class of continuous-data energy based models (EBMs). Our main result shows that estimating the parameters of an EBM using score matching when the conditional distribution over the visible units is Gaussian corresponds to training a particular form of regularized autoencoder. We show how different Gaussian EBMs lead to different autoencoder architectures, providing deep links between these two families of models. We compare the score matching estimator for the mPoT model, a particular Gaussian EBM, to several other training methods on a variety of tasks including image denoising and unsupervised feature extraction. We show that the regularization function induced by score matching leads to superior classification performance relative to a standard autoencoder. We also show that score matching yields classification results that are indistinguishable from better-known stochastic approximation maximum likelihood estimators.",
"One often wants to estimate statistical models where the probability density function is known only up to a multiplicative normalization constant. Typically, one then has to resort to Markov Chain Monte Carlo methods, or approximations of the normalization constant. Here, we propose that such models can be estimated by minimizing the expected squared distance between the gradient of the log-density given by the model and the gradient of the log-density of the observed data. While the estimation of the gradient of log-density function is, in principle, a very difficult non-parametric problem, we prove a surprising result that gives a simple formula for this objective function. The density function of the observed data does not appear in this formula, which simplifies to a sample average of a sum of some derivatives of the log-density given by the model. The validity of the method is demonstrated on multivariate Gaussian and independent component analysis models, and by estimating an overcomplete filter set for natural image data."
]
} |
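The first abstract above ties denoising autoencoders to a nonparametric Parzen density estimator of the data. A minimal sketch of such an estimator, with illustrative names and not code from any cited paper, can be written in a few lines:

```python
import math

def parzen_density(x, samples, sigma):
    """Nonparametric Parzen-window estimate of the density at x:
    an average of Gaussian kernels of bandwidth sigma centred at
    the observed samples."""
    norm = math.sqrt(2 * math.pi) * sigma
    return sum(math.exp(-0.5 * ((x - s) / sigma) ** 2) / norm
               for s in samples) / len(samples)

# The estimate is high near the bulk of the samples and decays far away.
samples = [0.0, 0.1, -0.1, 0.05]
p_center = parzen_density(0.0, samples, sigma=0.5)
p_far = parzen_density(5.0, samples, sigma=0.5)
```

Matching the score of an energy-based model to the score of this estimator is what yields the denoising training criterion described in the abstract.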
1602.02899 | 2257956434 | Especially in the Big Data era, the usage of different classification methods is increasing day by day. The success of these classification methods depends on the effectiveness of learning methods. The extreme learning machine (ELM) classification algorithm is a relatively new learning method built on feed-forward neural networks. The ELM classification algorithm is a simple and fast method that can create a model from high-dimensional data sets. The traditional ELM learning algorithm implicitly assumes complete access to the whole data set. This is a major privacy concern in most cases. Sharing of private data (e.g., medical records) is prevented because of security concerns. In this research, we propose an efficient and secure privacy-preserving learning algorithm for ELM classification over data that is vertically partitioned among several parties. The new learning method preserves the privacy of numerical attributes and builds a classification model without disclosing the data of each party to the others. | Recently, there have been significant contributions in privacy-preserving machine learning. @cite_22 presents a probabilistic neural network (PNN) model. The PNN is an approximation of the theoretically optimal classifier, known as the Bayesian optimal classifier. At least three parties are involved in the computation of the secure matrix summation that adds the partial class-conditional probability vectors together. @cite_25 developed a condensation-based learning method. They show that the anonymized data closely match the characteristics of the original data. @cite_10 present new privacy-preserving protocols for both the back-propagation and ELM algorithms among several parties. The protocols are presented for the perceptron learning algorithm and apply only to single-layer models. @cite_11 proposed methods that distort confidential numerical attributes to preserve privacy in clustering analysis.
@cite_7 proposed a privacy-preserving back-propagation algorithm for horizontally partitioned databases in the multi-party case. They use secure sum in their protocols. @cite_6 proposed a privacy-preserving solution for support vector machine classification. Their approach constructs the global SVM classification model from the data distributed at multiple parties, without disclosing the data of each party to others. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_6",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2160509596",
"33126290",
"1964366273",
"1989815317",
"1544579602",
"2111272198"
],
"abstract": [
"In this paper, we present a version of the probabilistic neural network (PNN) that is capable of operating on a distributed database that is horizontally partitioned. It does so in a way that is privacy-preserving: that is, a test point can be evaluated by the algorithm without any party knowing the data owned by the other parties. We present an analysis of this algorithm from the standpoints of security and computational performance. Finally, we provide performance results of an implementation of this privacy preserving, distributed PNN algorithm.",
"",
"Traditional Data Mining and Knowledge Discovery algorithms assume free access to data, either at a centralized location or in federated form. Increasingly, privacy and security concerns restrict this access, thus derailing data mining projects. What we need is distributed knowledge discovery that is sensitive to this problem. The key is to obtain valid results, while providing guarantees on the non-disclosure of data. Support vector machine classification is one of the most widely used classification methodologies in data mining and machine learning. It is based on solid theoretical foundations and has wide practical application. This paper proposes a privacy-preserving solution for support vector machine (SVM) classification, PP-SVM for short. Our solution constructs the global SVM classification model from the data distributed at multiple parties, without disclosing the data of each party to others. We assume that data is horizontally partitioned -- each party collects the same features of information for different data objects. We quantify the security and efficiency of the proposed method, and highlight future challenges.",
"Neural network systems are highly capable of deriving knowledge from complex data, and they are used to extract patterns and trends which are otherwise hidden in many applications. Preserving the privacy of sensitive data and individuals' information is a major challenge in many of these applications. One of the most popular algorithms in neural network learning systems is the back-propagation (BP) algorithm, which is designed for single-layer and multi-layer models and can be applied to continuous data and differentiable activation functions. Another recently introduced learning technique is the extreme learning machine (ELM) algorithm. Although it works only on single-layer models, ELM can out-perform the BP algorithm by reducing the communication required between parties in the learning phase. In this paper, we present new privacy-preserving protocols for both the BP and ELM algorithms when data is horizontally and vertically partitioned among several parties. These new protocols, which preserve the privacy of both the input data and the constructed learning model, can be applied to online incoming records and or batch learning. Furthermore, the final model is securely shared among all parties, who can use it jointly to predict the corresponding output for their target data.",
"In recent years, privacy preserving data mining has become an important problem because of the large amount of personal data which is tracked by many business applications. In many cases, users are unwilling to provide personal information unless the privacy of sensitive information is guaranteed. In this paper, we propose a new framework for privacy preserving data mining of multi-dimensional data. Previous work for privacy preserving data mining uses a perturbation approach which reconstructs data distributions in order to perform the mining. Such an approach treats each dimension independently and therefore ignores the correlations between the different dimensions. In addition, it requires the development of a new distribution based algorithm for each data mining problem, since it does not use the multi-dimensional records, but uses aggregate distributions of the data as input. This leads to a fundamental re-design of data mining algorithms. In this paper, we will develop a new and flexible approach for privacy preserving data mining which does not require new problem-specific algorithms, since it maps the original data set into a new anonymized data set. This anonymized data closely matches the characteristics of the original data including the correlations among the different dimensions. We present empirical results illustrating the effectiveness of the method.",
"Despite its benefit in a wide range of applications, data mining techniques also have raised a number of ethical issues. Some such issues include those of privacy, data security, intellectual property rights, and many others. In this paper, we address the privacy problem against unauthorized secondary use of information. To do so, we introduce a family of geometric data transformation methods (GDTMs) which ensure that the mining process will not violate privacy up to a certain degree of security. We focus primarily on privacy preserving data clustering, notably on partition-based and hierarchical methods. Our proposed methods distort only confidential numerical attributes to meet privacy requirements, while preserving general features for clustering analysis. Our experiments demonstrate that our methods are effective and provide acceptable values in practice for balancing privacy and accuracy. We report the main results of our performance evaluation and discuss some open research issues."
]
} |
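Several of the protocols cited in this row (the secure matrix summation used with the PNN, and the secure sum used in the multi-party back-propagation protocol) rest on a secure-sum primitive. The following is an illustrative masking-based sketch, not the exact protocol of any cited paper; in a real deployment each party would only ever see the masked running total passed around the ring:

```python
import random

def secure_sum(private_values, modulus=2**32):
    """Ring-based secure sum sketch: the initiator adds a random mask
    to the running total, each party adds its private value, and the
    initiator removes the mask at the end. No intermediate total
    reveals any single party's input."""
    mask = random.randrange(modulus)
    running = mask
    for v in private_values:           # each party adds its own value
        running = (running + v) % modulus
    return (running - mask) % modulus  # initiator strips the mask

parts = [12, 7, 30]   # private inputs held by three parties
total = secure_sum(parts)
```

Because the mask cancels exactly, the result equals the true sum while every intermediate value is uniformly distributed modulo the ring size.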
1602.02737 | 2511406830 | We study the problem of estimating a low-rank positive semidefinite (PSD) matrix from a set of rank-one measurements using sensing vectors composed of i.i.d. standard Gaussian entries, which are possibly corrupted by arbitrary outliers. This problem arises in applications such as phase retrieval, covariance sketching, quantum state tomography, and power spectrum estimation. We first propose a convex optimization algorithm that seeks the PSD matrix with the minimum @math -norm of the observation residual. The advantage of our algorithm is that it is free of parameters, therefore eliminating the need for tuning parameters and allowing easy implementation. We establish that with high probability, a low-rank PSD matrix can be exactly recovered as soon as the number of measurements is large enough, even when a fraction of the measurements are corrupted by outliers with arbitrary magnitudes. Moreover, the recovery is also stable against bounded noise. With the additional information of an upper bound on the rank of the PSD matrix, we propose another nonconvex algorithm based on subgradient descent that demonstrates excellent empirical performance in terms of computational efficiency and accuracy. | In the absence of outliers, the PhaseLift algorithm in the following form, where @math denotes the trace of @math , has been proposed to solve the phase retrieval problem @cite_13 @cite_25 @cite_0 . The same algorithm was later employed to recover low-rank PSD matrices in @cite_30 , where on the order of @math measurements obtained from i.i.d. sub-Gaussian sensing vectors are shown to guarantee exact recovery in the noise-free case and stable recovery with bounded noise. One problem with the algorithm is that the noise bound @math is assumed known. Furthermore, it is not amenable to handling outliers, since @math can be arbitrarily large with outliers and consequently the ground truth @math quickly becomes infeasible for . | {
"cite_N": [
"@cite_0",
"@cite_30",
"@cite_13",
"@cite_25"
],
"mid": [
"2102019642",
"2133105246",
"2078397124",
"2169501582"
],
"abstract": [
"This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that a complex-valued object can be recovered from the knowledge of the magnitude of just a few diffracted patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffracted figures uniquely determine the phase of the object we wish to...",
"Statistical inference and information processing of high-dimensional data often require an efficient and accurate estimation of their second-order statistics. With rapidly changing data, limited processing power and storage at the acquisition devices, it is desirable to extract the covariance structure from a single pass over the data and a small number of stored measurements. In this paper, we explore a quadratic (or rank-one) measurement model which imposes minimal memory requirements and low computational complexity during the sampling process, and is shown to be optimal in preserving various low-dimensional covariance structures. Specifically, four popular structural assumptions of covariance matrices, namely, low rank, Toeplitz low rank, sparsity, jointly rank-one and sparse structure, are investigated, while recovery is achieved via convex relaxation paradigms for the respective structure. The proposed quadratic sampling framework has a variety of potential applications, including streaming data processing, high-frequency wireless communication, phase space tomography and phase retrieval in optics, and noncoherent subspace detection. Our method admits universally accurate covariance estimation in the absence of noise, as soon as the number of measurements exceeds the information theoretic limits. We also demonstrate the robustness of this approach against noise and imperfect structural assumptions. Our analysis is established upon a novel notion called the mixed-norm restricted isometry property (RIP- @math ), as well as the conventional RIP- @math for near-isotropic and bounded measurements. In addition, our results improve upon the best-known phase retrieval (including both dense and sparse signals) guarantees using PhaseLift with a significantly simpler approach.",
"Suppose we wish to recover a signal amssym @math from m intensity measurements of the form , ; that is, from data in which phase information is missing. We prove that if the vectors are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program–-a trace-norm minimization problem; this holds with large probability provided that m is on the order of , and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-a-vis additive noise. © 2012 Wiley Periodicals, Inc.",
"This note shows that we can recover any complex vector @math exactly from on the order of n quadratic equations of the form |?a i ,x 0?|2=b i , i=1,?,m, by using a semidefinite program known as PhaseLift. This improves upon earlier bounds in (Commun. Pure Appl. Math. 66:1241---1274, 2013), which required the number of equations to be at least on the order of nlogn. Further, we show that exact recovery holds for all input vectors simultaneously, and also demonstrate optimal recovery results from noisy quadratic measurements; these results are much sharper than previously known results."
]
} |
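The lifting trick behind PhaseLift can be checked numerically: a phaseless measurement |⟨a, x⟩|² equals the linear measurement ⟨aaᵀ, X⟩ = aᵀXa of the lifted matrix X = xxᵀ, which is why phase retrieval becomes a linear PSD-matrix recovery problem. A small real-valued sketch with illustrative vectors:

```python
def outer(x):
    """Lifted matrix X = x x^T for a real vector x."""
    return [[xi * xj for xj in x] for xi in x]

def quad_form(a, X):
    """Linear measurement <a a^T, X> = a^T X a of the lifted matrix."""
    n = len(a)
    return sum(a[i] * X[i][j] * a[j] for i in range(n) for j in range(n))

x = [1.0, -2.0, 0.5]            # unknown signal (illustrative)
a = [0.3, 1.1, -0.7]            # sensing vector (illustrative)
inner = sum(ai * xi for ai, xi in zip(a, x))
lhs = inner ** 2                # intensity-only measurement |<a, x>|^2
rhs = quad_form(a, outer(x))    # same quantity, but linear in X = x x^T
```

The quadratic measurement of x and the linear measurement of X agree, so recovering the rank-one PSD matrix X recovers x up to a global sign (or phase, in the complex case).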
1602.02737 | 2511406830 | We study the problem of estimating a low-rank positive semidefinite (PSD) matrix from a set of rank-one measurements using sensing vectors composed of i.i.d. standard Gaussian entries, which are possibly corrupted by arbitrary outliers. This problem arises in applications such as phase retrieval, covariance sketching, quantum state tomography, and power spectrum estimation. We first propose a convex optimization algorithm that seeks the PSD matrix with the minimum @math -norm of the observation residual. The advantage of our algorithm is that it is free of parameters, therefore eliminating the need for tuning parameters and allowing easy implementation. We establish that with high probability, a low-rank PSD matrix can be exactly recovered as soon as the number of measurements is large enough, even when a fraction of the measurements are corrupted by outliers with arbitrary magnitudes. Moreover, the recovery is also stable against bounded noise. With the additional information of an upper bound on the rank of the PSD matrix, we propose another nonconvex algorithm based on subgradient descent that demonstrates excellent empirical performance in terms of computational efficiency and accuracy. | Broadly speaking, our problem is related to low-rank matrix recovery from an under-determined linear system @cite_27 @cite_17 @cite_19 , where the linear measurements are drawn from inner products with rank-one sensing matrices. It is due to this special structure of the sensing matrices that we can eliminate the trace minimization and only consider the feasibility constraint for PSD matrices. Standard approaches for separating low-rank and sparse components @cite_32 @cite_2 @cite_20 @cite_33 @cite_4 via convex optimization are given as where @math is a regularization parameter that must be tuned properly. In contrast, the formulation is parameter-free. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_32",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_20",
"@cite_17"
],
"mid": [
"1998925295",
"2962909343",
"2145962650",
"2167807229",
"2118550318",
"2003753589",
"2124252039",
"2056405837"
],
"abstract": [
"Principal component analysis (PCA) is widely used for dimensionality reduction, with well-documented merits in various applications involving high-dimensional data, including computer vision, preference measurement, and bioinformatics. In this context, the fresh look advocated here permeates benefits from variable selection and compressive sampling, to robustify PCA against outliers. A least-trimmed squares estimator of a low-rank bilinear factor analysis model is shown closely related to that obtained from an l0-(pseudo)norm-regularized criterion encouraging sparsity in a matrix explicitly modeling the outliers. This connection suggests robust PCA schemes based on convex relaxation, which lead naturally to a family of robust estimators encompassing Huber's optimal M-class as a special case. Outliers are identified by tuning a regularization parameter, which amounts to controlling sparsity of the outlier matrix along the whole robustification path of (group) least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its ties to robust statistics, the developed outlier-aware PCA framework is versatile to accommodate novel and scalable algorithms to: i) track the low-rank signal subspace robustly, as new data are acquired in real time; and ii) determine principal components robustly in (possibly) infinite-dimensional feature spaces. Synthetic and real data tests corroborate the effectiveness of the proposed robust PCA schemes, when used to identify aberrant responses in personality assessment surveys, as well as unveil communities in social networks, and intruders from video surveillance data.",
"In this paper, we improve existing results in the field of compressed sensing and matrix completion when sampled data may be grossly corrupted. We introduce three new theorems. (1) In compressed sensing, we show that if the m×n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable l 1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m (log(n m)+1)). (2) In the very general sensing model introduced in Candes and Plan (IEEE Trans. Inf. Theory 57(11):7235–7254, 2011) and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m (log2 n)) nonzero entries. (3) Finally, we prove that one can recover an n×n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of O(m (nlog2 n)); again, this holds when there is a positive fraction of corrupted samples.",
"This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individuallyq We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the e1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.",
"This paper investigates the uniqueness of a nonnegative vector solution and the uniqueness of a positive semidefinite matrix solution to underdetermined linear systems. A vector solution is the unique solution to an underdetermined linear system only if the measurement matrix has a row-span intersecting the positive orthant. Focusing on two types of binary measurement matrices, Bernoulli 0-1 matrices and adjacency matrices of general expander graphs, we show that, in both cases, the support size of a unique nonnegative solution can grow linearly, namely O(n), with the problem dimension n . We also provide closed-form characterizations of the ratio of this support size to the signal dimension. For the matrix case, we show that under a necessary and sufficient condition for the linear compressed observations operator, there will be a unique positive semidefinite matrix solution to the compressed linear observations. We further show that a randomly generated Gaussian linear compressed observations operator will satisfy this condition with overwhelmingly high probability.",
"The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.",
"Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is intractable to solve in general. In this paper we consider a convex optimization formulation to splitting the specified matrix into its components by minimizing a linear combination of the l1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pat- tern of a matrix and its row and column spaces, and we use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.",
"",
"We describe a new algorithm, termed subspace evolution and transfer (SET), for solving consistent low-rank matrix completion problems. The algorithm takes as its input a subset of entries of a low-rank matrix and outputs one low-rank matrix consistent with the given observations. The completion task is accomplished by searching for a column space in the Grassmann manifold that matches the incomplete observations. The SET algorithm consists of two parts-subspace evolution and subspace transfer. In the evolution part, we use a gradient descent method on the Grassmann manifold to refine our estimate of the column space. Since the gradient descent algorithm is not guaranteed to converge due to the existence of barriers along the search path, we design a new mechanism for detecting barriers and transferring the estimated column space across the barriers. This mechanism constitutes the core of the transfer step of the algorithm. The SET algorithm exhibits excellent empirical performance for a large range of sampling rates."
]
} |
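Why an ℓ1 residual tolerates outliers, as claimed in this row's related work, can be seen on a toy instance: with one grossly corrupted rank-one measurement, the ground-truth PSD matrix still attains a smaller ℓ1 residual than a nearby candidate, because the ground truth pays only for the outlier itself. A synthetic sketch (all vectors and matrices illustrative):

```python
def quad_form(a, X):
    """Rank-one measurement a^T X a of a symmetric matrix X."""
    n = len(a)
    return sum(a[i] * X[i][j] * a[j] for i in range(n) for j in range(n))

def l1_residual(y, A, X):
    """Parameter-free objective: the l1 norm of the observation residual."""
    return sum(abs(yi - quad_form(a, X)) for yi, a in zip(y, A))

A = [[1, 0], [0, 1], [1, 1], [1, -1]]   # sensing vectors (illustrative)
X_true = [[1, 2], [2, 4]]               # ground truth x x^T with x = (1, 2)
y = [quad_form(a, X_true) for a in A]   # clean measurements: 1, 4, 9, 1
y[1] += 100                             # one grossly corrupted measurement

X_pert = [[1, 2], [2, 5]]               # a nearby PSD candidate
# X_true pays only the outlier magnitude; X_pert pays strictly more.
```

This is only a feasibility illustration of the objective, not the recovery algorithm itself; the paper's guarantee is that minimizing this objective over the PSD cone returns the ground truth with high probability.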
1602.02594 | 2269502639 | Hosting platforms for software projects can form collaborative social networks, and a prime example of this is GitHub, which is arguably the most popular platform of this kind. An open source project recommendation system could be a major feature for a platform like GitHub, enabling its users to find relevant projects in a fast and simple manner. We perform network analysis on a constructed graph based on GitHub data and present a recommendation system that uses link prediction. | @cite_12 analyzed two sub-networks of the GitHub network: a project-to-project network, in which a project is linked to another project if the two share any developers, and a developer collaboration network. The authors report that both networks exhibit low average path lengths, with the project-project network appearing to be scale-free @cite_12 . Furthermore, the study reported that the low average path lengths arise because developers connect to other developers without actually knowing them. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2118174895"
],
"abstract": [
"Social coding enables a different experience of software development as the activities and interests of one developer are easily advertised to other developers. Developers can thus track the activities relevant to various projects in one umbrella site. Such a major change in collaborative software development makes an investigation of networkings on social coding sites valuable. Furthermore, project hosting platforms promoting this development paradigm have been thriving, among which GitHub has arguably gained the most momentum. In this paper, we contribute to the body of knowledge on social coding by investigating the network structure of social coding in GitHub. We collect 100,000 projects and 30,000 developers from GitHub, construct developer-developer and project-project relationship graphs, and compute various characteristics of the graphs. We then identify influential developers and projects on this sub network of GitHub by using PageRank. Understanding how developers and projects are actually related to each other on a social coding site is the first step towards building tool supports to aid social programmers in performing their tasks more efficiently."
]
} |
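The low average path lengths reported in this row can be computed with one breadth-first search per node. A minimal sketch on a toy project-project graph (project names hypothetical):

```python
from collections import deque

def avg_path_length(graph):
    """Mean shortest-path length over all reachable ordered pairs,
    computed with one BFS per source node (unweighted graph)."""
    total, pairs = 0, 0
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs

# Toy project-project graph: projects linked by shared developers.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
apl = avg_path_length(g)
```

On real GitHub-scale graphs one would use a library such as NetworkX rather than this sketch, but the metric is the same.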
1602.02594 | 2269502639 | Hosting platforms for software projects can form collaborative social networks, and a prime example of this is GitHub, which is arguably the most popular platform of this kind. An open source project recommendation system could be a major feature for a platform like GitHub, enabling its users to find relevant projects in a fast and simple manner. We perform network analysis on a constructed graph based on GitHub data and present a recommendation system that uses link prediction. | A similar study performed developer collaboration analysis on a similar open source network @cite_10 , namely SourceForge. They obtained similar results: the average distance appears to be very low. They also extracted topological patterns @cite_10 , i.e., sub-graphs of the original network. Through analysis of these patterns they discovered that the clustering coefficient is relatively high, i.e., the collaborators of a developer are likely to be connected to each other. Together with the low average distance, the SourceForge network appears to be small-world; however, we cannot infer the same for the GitHub network. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2030284415"
],
"abstract": [
"In this study, we extract patterns from a large developer collaborations network extracted from Source Forge. Net at high and low level of details. At the high level of details, we extract various network-level statistics from the network. At the low level of details, we extract topological sub-graph patterns that are frequently seen among collaborating developers. Extracting sub graph patterns from large graphs is a hard NP-complete problem. To address this challenge, we employ a novel combination of graph mining and graph matching by leveraging network-level properties of a developer network. With the approach, we successfully analyze a snapshot of Source Forge. Net data taken on September 2009. We present mined patterns and describe interesting observations."
]
} |
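The clustering coefficient that makes the SourceForge network look small-world measures how often a node's neighbours are themselves connected. A minimal local-clustering sketch on a toy undirected developer graph (names hypothetical):

```python
def clustering_coefficient(graph, node):
    """Local clustering coefficient of a node in an undirected graph:
    the fraction of pairs of its neighbours that are connected."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0  # undefined for degree < 2; report 0 by convention
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Toy developer network: d1's collaborators d2 and d3 also collaborate.
g = {"d1": ["d2", "d3"],
     "d2": ["d1", "d3", "d4"],
     "d3": ["d1", "d2"],
     "d4": ["d2"]}
```

Averaging this quantity over all nodes gives the network-level clustering coefficient that, together with a low average distance, characterizes a small-world network.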
1602.02594 | 2269502639 | Hosting platforms for software projects can form collaborative social networks, and a prime example of this is GitHub, which is arguably the most popular platform of this kind. An open source project recommendation system could be a major feature for a platform like GitHub, enabling its users to find relevant projects in a fast and simple manner. We perform network analysis on a constructed graph based on GitHub data and present a recommendation system that uses link prediction. | @cite_1 performed structural analysis of several GitHub networks, ranging from the followers network and the collaboration network to geographically separated networks. They confirm our assumption that the GitHub collaboration graph is also small-world, as they report a high clustering coefficient. Along with high clustering, only a fraction of projects were reported to have a high amount of collaboration. This is intuitively true, as GitHub is not used merely for collaboration and social aspects, but also as a hub for storing source code and for self-promotion; hence, many repositories have only one collaborator. The authors also discovered that users who are geographically close are more likely to collaborate on a project. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1820374484"
],
"abstract": [
"GitHub is the most popular repository for open source code. It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories, as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub, as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study about the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user show a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity of the social connections. We also measure the activity of each user in terms of authored events and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and we investigate how distance influences collaboration."
]
} |
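The high clustering coefficient that the cited study reports for the GitHub collaboration graph is usually computed per node as the fraction of the node's neighbour pairs that are themselves connected. A minimal sketch on a hypothetical undirected collaboration graph (the adjacency dict and node ids below are illustrative, not drawn from the GitHub data):

```python
from itertools import combinations

def local_clustering(adj, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Toy collaboration graph: a triangle (0-1-2) plus a pendant collaborator 3.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
avg = sum(local_clustering(adj, v) for v in adj) / len(adj)
```

A network is "highly clustered" in the small-world sense when this average is much larger than that of a random graph with the same degree sequence.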
1602.02594 | 2269502639 | Hosting platforms for software projects can form collaborative social networks and a prime example of this is GitHub which is arguably the most popular platform of this kind. An open source project recommendation system could be a major feature for a platform like GitHub, enabling its users to find relevant projects in a fast and simple manner. We perform network analysis on a constructed graph based on GitHub data and present a recommendation system that uses link prediction. | The problem of recommendation has been around for quite some time. A popular approach in this area is Collaborative Filtering @cite_11 , which relies on the assumption that if two users perform similar actions, then they are also more likely to perform other actions in the same way. Several such approaches use machine learning methods; however, we are interested in methods that use the network structure. Recommendation systems in network analysis are usually based on link prediction techniques. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2100235918"
],
"abstract": [
"As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, modelbased, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area."
]
} |
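The collaborative-filtering assumption above — users who acted similarly in the past will act similarly again — can be sketched as memory-based (user-based) CF with cosine similarity. The ratings dictionary, user ids, and item ids below are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse rating vectors (item -> rating dicts)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

# Hypothetical user -> item -> rating data.
ratings = {
    "a": {"x": 5, "y": 3},
    "b": {"x": 4, "y": 3, "z": 4},
    "c": {"x": 1, "z": 2},
}
p = predict(ratings, "a", "z")
```

User "a" rates like "b" more than like "c", so the prediction for item "z" lands closer to "b"'s rating of 4 than to "c"'s rating of 2.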
1602.02887 | 2173234358 | Machine learning-based computational intelligence methods are widely used to analyze large-scale data sets in this age of big data. Extracting useful predictive modeling from these types of data sets is a challenging problem due to their high complexity. Analyzing large amount of streaming data that can be leveraged to derive business value is another complex problem to solve. With high levels of data availability (i.e., Big Data), automatic classification of them has become an important and complex task. Hence, we explore the power of applying MapReduce-based distributed AdaBoosting of extreme learning machine (ELM) to build a predictive bag of classification models. Accordingly, (1) data set ensembles are created; (2) ELM algorithm is used to build weak learners (classifier functions); and (3) builds a strong learner from a set of weak learners. We applied this training model to the benchmark knowledge discovery and data mining data sets. | MapReduce-based learning algorithms that train from distributed data chunks have been studied by many researchers, and many MapReduce-based learning solutions over arbitrarily partitioned data have been proposed recently. Popular MapReduce-based approaches to training machine learning algorithms in the literature include the following. A tree learning model was proposed that expresses learning as a series of distributed computations and implements each one using the MapReduce model of distributed computation @cite_50 . Algorithms using MapReduce to perform parallel data joins on large-scale data sets have been developed @cite_7 . Batch-updating-based hierarchical clustering has been used to reduce computational time and data communication @cite_3 ; this approach uses co-occurrence-based feature selection to remove noisy features and decrease the dimension of the feature vectors. A parallel density-based clustering algorithm (DBSCAN) was also proposed, with a partitioning strategy for large-scale non-indexed data implemented as a 4-stage MapReduce paradigm @cite_48 . Finally, parallel k-means clustering based on MapReduce has been proposed @cite_9 , focusing on implementing k-means with the read-only convergence heuristic in the MapReduce pattern. | {
"cite_N": [
"@cite_7",
"@cite_48",
"@cite_9",
"@cite_3",
"@cite_50"
],
"mid": [
"2154879298",
"2061051181",
"2116762767",
"2109380209",
"2125816831"
],
"abstract": [
"In data mining applications and spatial and multimedia databases, a useful tool is the kNN join, which is to produce the k nearest neighbors (NN), from a dataset S, of every point in a dataset R. Since it involves both the join and the NN search, performing kNN joins efficiently is a challenging task. Meanwhile, applications continue to witness a quick (exponential in some cases) increase in the amount of data to be processed. A popular model nowadays for large-scale data processing is the shared-nothing cluster on a number of commodity machines using MapReduce [6]. Hence, how to execute kNN joins efficiently on large data that are stored in a MapReduce cluster is an intriguing problem that meets many practical needs. This work proposes novel (exact and approximate) algorithms in MapReduce to perform efficient parallel kNN joins on large data. We demonstrate our ideas using Hadoop. Extensive experiments in large real and synthetic datasets, with tens or hundreds of millions of records in both R and S and up to 30 dimensions, have demonstrated the efficiency, effectiveness, and scalability of our methods.",
"Data clustering is an important data mining technology that plays a crucial role in numerous scientific applications. However, it is challenging due to the size of datasets has been growing rapidly to extra-large scale in the real world. Meanwhile, MapReduce is a desirable parallel programming platform that is widely applied in kinds of data process fields. In this paper, we propose an efficient parallel density-based clustering algorithm and implement it by a 4-stages MapReduce paradigm. Furthermore, we adopt a quick partitioning strategy for large scale non-indexed data. We study the metric of merge among bordering partitions and make optimizations on it. At last, we evaluate our work on real large scale datasets using Hadoop platform. Results reveal that the speedup and scale up of our work are very efficient.",
"Data clustering has been received considerable attention in many applications, such as data mining, document retrieval, image segmentation and pattern classification. The enlarging volumes of information emerging by the progress of technology, makes clustering of very large scale of data a challenging task. In order to deal with the problem, many researchers try to design efficient parallel clustering algorithms. In this paper, we propose a parallel k -means clustering algorithm based on MapReduce, which is a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm can scale well and efficiently process large datasets on commodity hardware.",
"Large datasets become common in applications like Internet services, genomic sequence analysis and astronomical telescope. The demanding requirements of memory and computation power force data mining algorithms to be parallelized in order to efficiently deal with the large datasets. This paper introduces our experience of grouping internet users by mining a huge volume of web access log of up to 100 gigabytes. The application is realized using hierarchical clustering algorithms with Map-Reduce, a parallel processing framework over clusters. However, the immediate implementation of the algorithms suffers from efficiency problem for both inadequate memory and higher execution time. This paper present an efficient hierarchical clustering method of mining large datasets with Map-Reduce. The method includes two optimization techniques: “Batch Updating” to reduce the computational time and communication costs among cluster nodes, and “Co-occurrence based feature selection” to decrease the dimension of feature vectors and eliminate noise features. The empirical study shows the first technique can significantly reduce the IO and distributed communication overhead, reducing the total execution time to nearly 1 15. Experimentally, the second technique efficiently simplifies the features while obtains improved accuracy of hierarchical clustering.",
"Classification and regression tree learning on massive datasets is a common data mining task at Google, yet many state of the art tree learning algorithms require training data to reside in memory on a single machine. While more scalable implementations of tree learning have been proposed, they typically require specialized parallel computing architectures. In contrast, the majority of Google's computing infrastructure is based on commodity hardware. In this paper, we describe PLANET: a scalable distributed framework for learning tree models over large datasets. PLANET defines tree learning as a series of distributed computations, and implements each one using the MapReduce model of distributed computation. We show how this framework supports scalable construction of classification and regression trees, as well as ensembles of such models. We discuss the benefits and challenges of using a MapReduce compute cluster for tree learning, and demonstrate the scalability of this approach by applying it to a real world learning task from the domain of computational advertising."
]
} |
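The parallel k-means idea cited above — map tasks assign each point to its nearest centroid, reduce tasks re-average each cluster — can be sketched in a single-process form; the toy points and centroids below are illustrative, not the algorithm of any one cited paper:

```python
def kmeans_step(points, centroids):
    """One MapReduce-style k-means iteration: map = assign, reduce = re-average."""
    # Map phase: emit (centroid_id, (point, 1)) for each point's nearest centroid.
    emitted = []
    for p in points:
        cid = min(range(len(centroids)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
        emitted.append((cid, (p, 1)))
    # Shuffle/reduce phase: group by centroid id, sum component-wise, then average.
    groups = {}
    for cid, (p, n) in emitted:
        s, cnt = groups.get(cid, ([0.0] * len(p), 0))
        groups[cid] = ([a + b for a, b in zip(s, p)], cnt + n)
    return [tuple(x / cnt for x in s) for _, (s, cnt) in sorted(groups.items())]

# Two well-separated toy clusters; one step moves each centroid to its cluster mean.
points = [(0, 0), (0, 1), (10, 10), (10, 11)]
new_centroids = kmeans_step(points, [(0.0, 0.0), (10.0, 10.0)])
```

In a real MapReduce job the map and reduce phases run on separate workers and the emitted `(point, 1)` pairs are combined locally before the shuffle to cut communication.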
1602.02660 | 2951770173 | Many classes of images exhibit rotational symmetry. Convolutional neural networks are sometimes trained using data augmentation to exploit this, but they are still required to learn the rotation equivariance properties from the data. Encoding these properties into the network architecture, as we are already used to doing for translation equivariance by using convolutional layers, could result in a more efficient use of the parameter budget by relieving the model from learning them. We introduce four operations which can be inserted into neural network models as layers, and which can be combined to make these models partially equivariant to rotations. They also enable parameter sharing across different orientations. We evaluate the effect of these architectural modifications on three datasets which exhibit rotational symmetry and demonstrate improved performance with smaller models. | We can also modify the architecture to facilitate learning of equivariance properties from data, rather than directly encode them. This approach is more flexible, but it requires more training data. One model learns local invariance to arbitrary transformations by grouping filters into overlapping neighbourhoods whose activations are pooled together. A template-based approach has been described that successfully learns representations invariant to both affine and non-affine transformations (e.g. out-of-plane rotation). A probabilistic framework has also been proposed to model the transformation group to which a given dataset exhibits equivariance. Tiled CNNs @cite_9 , in which weight sharing is reduced, are able to approximate more complex local invariances than regular CNNs. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2147860648"
],
"abstract": [
"Convolutional neural networks (CNNs) have been successfully applied to many tasks such as digit and object recognition. Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on hard-coding. We propose tiled convolution neural networks (Tiled CNNs), which use a regular \"tiled\" pattern of tied weights that does not require that adjacent hidden units share identical weights, but instead requires only that hidden units k steps away from each other to have tied weights. By pooling over neighboring units, this architecture is able to learn complex invariances (such as scale and rotational invariance) beyond translational invariance. Further, it also enjoys much of CNNs' advantage of having a relatively small number of learned parameters (such as ease of learning and greater scalability). We provide an efficient learning algorithm for Tiled CNNs based on Topographic ICA, and show that learning complex invariant features allows us to achieve highly competitive results for both the NORB and CIFAR-10 datasets."
]
} |
1602.02123 | 2271148307 | The proliferation of sensor devices monitoring human activity generates voluminous amount of temporal sequences needing to be interpreted and categorized. Moreover, complex behavior detection requires the personalization of multi-sensor fusion algorithms. Conditional random fields (CRFs) are commonly used in structured prediction tasks such as part-of-speech tagging in natural language processing. Conditional probabilities guide the choice of each tag label in the sequence conflating the structured prediction task with the sequence classification task where different models provide different categorization of the same sequence. The claim of this paper is that CRF models also provide discriminative models to distinguish between types of sequence regardless of the accuracy of the labels obtained if we calibrate the class membership estimate of the sequence. We introduce and compare different neural network based linear-chain CRFs and we present experiments on two complex sequence classification and structured prediction tasks to support this claim. | Similar to HMMs, Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) @cite_3 learn a sequence of labels from unsegmented data such as that found in handwriting or speech recognition. This capability, called temporal classification, is distinguished from framewise classification, where the training data is a sequence of pairwise input and output labels, suitable for supervised learning, and where the length of the sequence is known. LSTM RNNs' architecture consists of a hidden-layer recurrent neural network with generative capabilities, adapted for deep learning with skip connections between hidden nodes at different levels. Unlike HMMs, there are no direct connections between the output nodes of the neural network (i.e., the labels of the sequence), but there are indirect connections through a prediction network from an output node to the next input. Consequently, LSTM RNNs can do sequence labeling as well as sequence generation through their predictive capability. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1810943226"
],
"abstract": [
"This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles."
]
} |
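The gating structure that gives LSTM RNNs their long-range memory can be illustrated with a single cell step on scalar state. The weight layout `W` (gate name -> (input weight, recurrent weight, bias)) is a simplification for illustration, not the exact architecture of @cite_3 :

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM cell step with scalar state; W maps gate -> (w_x, w_h, bias)."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])  # candidate update
    c_new = f * c + i * g        # forget part of the old state, write some new
    h_new = o * math.tanh(c_new)
    return h_new, c_new

# With zero input every gate sits at 0.5, so the old cell state decays halfway.
W = {gate: (1.0, 0.0, 0.0) for gate in "ifog"}
h1, c1 = lstm_step(0.0, 0.0, 1.0, W)
```

Because the cell state `c` is carried forward additively (scaled by the forget gate) rather than squashed through a nonlinearity at every step, gradients survive over much longer spans than in a plain RNN.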
1602.02123 | 2271148307 | The proliferation of sensor devices monitoring human activity generates voluminous amount of temporal sequences needing to be interpreted and categorized. Moreover, complex behavior detection requires the personalization of multi-sensor fusion algorithms. Conditional random fields (CRFs) are commonly used in structured prediction tasks such as part-of-speech tagging in natural language processing. Conditional probabilities guide the choice of each tag label in the sequence conflating the structured prediction task with the sequence classification task where different models provide different categorization of the same sequence. The claim of this paper is that CRF models also provide discriminative models to distinguish between types of sequence regardless of the accuracy of the labels obtained if we calibrate the class membership estimate of the sequence. We introduce and compare different neural network based linear-chain CRFs and we present experiments on two complex sequence classification and structured prediction tasks to support this claim. | In @cite_1 , perceptrons were integrated as discriminative learners in the probabilistic framework of CRFs in the context of part-of-speech tagging. The Viterbi decoding algorithm finds the best tagged sequence under the current weight parameters of feature-tag pairs. As in the perceptron algorithm, weight updates (0/1 loss) are triggered only when discrepancies occur. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2008652694"
],
"abstract": [
"We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger."
]
} |
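The Viterbi decoding step used in the perceptron-trained tagger of @cite_1 can be sketched for a linear-chain model with additive emission and transition scores; the toy score matrices below are illustrative, not learned weights:

```python
def viterbi(emissions, transitions):
    """Best-scoring tag sequence under additive emission + transition scores."""
    n, k = len(emissions), len(emissions[0])
    score = list(emissions[0])  # best score of each tag at position 0
    back = []                   # backpointers, one list of size k per position
    for t in range(1, n):
        new, ptr = [], []
        for j in range(k):
            best = max(range(k), key=lambda i: score[i] + transitions[i][j])
            new.append(score[best] + transitions[best][j] + emissions[t][j])
            ptr.append(best)
        score = new
        back.append(ptr)
    last = max(range(k), key=lambda j: score[j])
    path = [last]
    for ptr in reversed(back):  # follow backpointers to recover the sequence
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Two tags; emissions favour tags 0, 1, 0 in turn, transitions favour staying put.
tags = viterbi([[3, 0], [0, 3], [3, 0]], [[1, 0], [0, 1]])
```

During perceptron training, this decode is run on each training sentence and the feature weights are nudged only when `tags` disagrees with the gold sequence.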
1602.02415 | 2254341021 | This paper considers the use of the anisotropic total variation seminorm to recover a two dimensional vector @math from its partial Fourier coefficients, sampled along Cartesian lines. We prove that if @math has at most @math nonzero coefficients in each column and @math has at most @math nonzero coefficients in each row, then, up to multiplication by @math factors, one can exactly recover @math by sampling along @math horizontal lines of its Fourier coefficients and along @math vertical lines of its Fourier coefficients. Finally, unlike standard compressed sensing estimates, the @math factors involved are dependent on the separation distance between the nonzero entries in each row column of the gradient of @math and not on @math , the ambient dimension of @math . | In @cite_1 , investigated this structure dependency in the case of wavelet regularization with Cartesian line sampling in the Fourier domain. In particular, they proved that one can guarantee stable error bounds provided that the number of horizontal lines within each block of Fourier coefficients is proportional, up to log factors, with the sparsity in each column of the wavelet transform within the corresponding wavelet scale. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2949327578"
],
"abstract": [
"Compressed Sensing (CS) is an appealing framework for applications such as Magnetic Resonance Imaging (MRI). However, up-to-date, the sensing schemes suggested by CS theories are made of random isolated measurements, which are usually incompatible with the physics of acquisition. To reflect the physical constraints of the imaging device, we introduce the notion of blocks of measurements: the sensing scheme is not a set of isolated measurements anymore, but a set of groups of measurements which may represent any arbitrary shape (parallel or radial lines for instance). Structured acquisition with blocks of measurements are easy to implement, and provide good reconstruction results in practice. However, very few results exist on the theoretical guarantees of CS reconstructions in this setting. In this paper, we derive new CS results for structured acquisitions and signals satisfying a prior structured sparsity. The obtained results provide a recovery probability of sparse vectors that explicitly depends on their support. Our results are thus support-dependent and offer the possibility for flexible assumptions on the sparsity structure. Moreover, the results are drawing-dependent, since we highlight an explicit dependency between the probability of reconstructing a sparse vector and the way of choosing the blocks of measurements. Numerical simulations show that the proposed theory is faithful to experimental observations."
]
} |
1602.02358 | 2293931279 | Node similarity is a fundamental problem in graph analytics. However, node similarity between nodes in different graphs (inter-graph nodes) has not received a lot of attention yet. The inter-graph node similarity is important in learning a new graph based on the knowledge of an existing graph (transfer learning on graphs) and has applications in biological, communication, and social networks. In this paper, we propose a novel distance function for measuring inter-graph node similarity with edit distance, called NED. In NED, two nodes are compared according to their local neighborhood structures which are represented as unordered k-adjacent trees, without relying on labels or other assumptions. Since the computation problem of tree edit distance on unordered trees is NP-Complete, we propose a modified tree edit distance, called TED*, for comparing neighborhood trees. TED* is a metric distance, as the original tree edit distance, but more importantly, TED* is polynomially computable. As a metric distance, NED admits efficient indexing, provides interpretable results, and shows to perform better than existing approaches on a number of data analysis tasks, including graph de-anonymization. Finally, the efficiency and effectiveness of NED are empirically demonstrated using real-world graphs. | One major type of node similarity measure is called link-based similarity or transitivity-based similarity and is designed to compare intra-graph nodes. SimRank @cite_12 and a number of SimRank variants, such as SimRank* @cite_15 , SimRank++ @cite_0 , and RoleSim @cite_31 , just to name a few, are typical link-based similarities which have been studied extensively. Other link-based similarities include random walks with restart @cite_23 and path-based similarity @cite_4 . A comparative study of link-based node similarities can be found in @cite_25 . Unfortunately, those link-based node similarities are not suitable for comparing inter-graph nodes, since such nodes are not connected and the distances will always be @math . | {
"cite_N": [
"@cite_31",
"@cite_4",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2140445565",
"103340358",
"2568082647",
"2133299088",
"",
"2107921943",
"2117831564"
],
"abstract": [
"A key task in analyzing social networks and other complex networks is role analysis: describing and categorizing nodes by how they interact with other nodes. Two nodes have the same role if they interact with equivalent sets of neighbors. The most fundamental role equivalence is automorphic equivalence. Unfortunately, the fastest algorithm known for graph automorphism is nonpolynomial. Moreover, since exact equivalence is rare, a more meaningful task is measuring the role similarity between any two nodes. This task is closely related to the link-based similarity problem that SimRank addresses. However, SimRank and other existing simliarity measures are not sufficient because they do not guarantee to recognize automorphically or structurally equivalent nodes. This paper makes two contributions. First, we present and justify several axiomatic properties necessary for a role similarity measure or metric. Second, we present RoleSim, a role similarity metric which satisfies these axioms and which can be computed with a simple iterative algorithm. We rigorously prove that RoleSim satisfies all the axiomatic properties and demonstrate its superior interpretative power on both synthetic and real datasets.",
"Similarity search is a primitive operation in database and Web search engines. With the advent of large-scale heterogeneous information networks that consist of multi-typed, interconnected objects, such as the bibliographic networks and social media networks, it is important to study similarity search in such networks. Intuitively, two objects are similar if they are linked by many paths in the network. However, most existing similarity measures are defined for homogeneous networks. Different semantic meanings behind paths are not taken into consideration. Thus they cannot be directly applied to heterogeneous networks. In this paper, we study similarity search that is defined among the same type of objects in heterogeneous networks. Moreover, by considering different linkage paths in a network, one could derive various similarity semantics. Therefore, we introduce the concept of meta path-based similarity, where a meta path is a path consisting of asequence of relations defined between different object types (i.e., structural paths at the meta level). No matter whether a user would like to explicitly specify a path combination given sufficient domain knowledge, or choose the best path by experimental trials, or simply provide training examples to learn it, meta path forms a common base for a network-based similarity search engine. In particular, under the meta path framework we define a novel similarity measure called PathSim that is able to find peer objects in the network (e.g., find authors in the similar field and with similar reputation), which turns out to be more meaningful in many scenarios compared with random-walk based similarity measures. In order to support fast online query processing for PathSim queries, we develop an efficient solution that partially materializes short meta paths and then concatenates them online to compute top-k results. Experiments on real data sets demonstrate the effectiveness and efficiency of our proposed paradigm.",
"We focus on the problem of query rewriting for sponsored search. We base rewrites on a historical click graph that records the ads that have been clicked on in response to past user queries. Given a query q, we first consider Simrank [7] as a way to identify queries similar to q, i.e., queries whose ads a user may be interested in. We argue that Simrank fails to properly identify query similarities in our application, and we present two enhanced versions of Simrank: one that exploits weights on click graph edges and another that exploits \"evidence.\" We experimentally evaluate our new schemes against Simrank, using actual click graphs and queries from Yahoo!, and using a variety of metrics. Our results show that the enhanced methods can yield more and better query rewrites.",
"How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the \"connection subgraphs\", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block- wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman- Morrison lemma for matrix inversion. Experimental results on the Corel image and the DBLP dabasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speed up with 90 + quality preservation.",
"",
"Measuring similarity between objects is a fundamental task in domains such as data mining, information retrieval, and so on. Link-based similarity measures have attracted the attention of many researchers and have been widely applied in recent years. However, most previous works mainly focus on introducing new link-based measures, and seldom provide theoretical as well as experimental comparisons with other measures. Thus, selecting the suitable measure in different situations and applications is difficult. In this paper, a comprehensive analysis and critical comparison of various link-based similarity measures and algorithms are presented. Their strengths and weaknesses are discussed. Their actual runtime performances are also compared via experiments on benchmark data sets. Some novel and useful guidelines for users to choose the appropriate link-based measure for their applications are discovered.",
"The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach."
]
} |
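The ego-net comparison described above — and its failure mode when two nodes share no neighbour labels — can be sketched with the Jaccard coefficient over direct-neighbour label sets. The toy graphs and labels below are hypothetical:

```python
def jaccard_ego(adj_g, u, adj_h, v):
    """Jaccard coefficient of the direct-neighbour (ego-net) label sets of u and v."""
    a, b = set(adj_g[u]), set(adj_h[v])
    union = a | b
    return len(a & b) / len(union) if union else 0.0

g  = {"u": {"a", "b", "c"}}
h1 = {"v": {"a", "b", "z"}}   # two neighbour labels shared with u's ego-net
h2 = {"v": {"d", "e", "f"}}   # isomorphic star, but entirely disjoint labels

overlap  = jaccard_ego(g, "u", h1, "v")  # 2 shared labels / 4 labels in the union
disjoint = jaccard_ego(g, "u", h2, "v")  # 0 despite identical structure
```

This makes the limitation in the text concrete: the ego-nets of `u` in `g` and `v` in `h2` are isomorphic stars, yet the label-based similarity is zero because no neighbour labels coincide.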
1602.02358 | 2293931279 | Node similarity is a fundamental problem in graph analytics. However, node similarity between nodes in different graphs (inter-graph nodes) has not received a lot of attention yet. The inter-graph node similarity is important in learning a new graph based on the knowledge of an existing graph (transfer learning on graphs) and has applications in biological, communication, and social networks. In this paper, we propose a novel distance function for measuring inter-graph node similarity with edit distance, called NED. In NED, two nodes are compared according to their local neighborhood structures which are represented as unordered k-adjacent trees, without relying on labels or other assumptions. Since the computation problem of tree edit distance on unordered trees is NP-Complete, we propose a modified tree edit distance, called TED*, for comparing neighborhood trees. TED* is a metric distance, as the original tree edit distance, but more importantly, TED* is polynomially computable. As a metric distance, NED admits efficient indexing, provides interpretable results, and shows to perform better than existing approaches on a number of data analysis tasks, including graph de-anonymization. Finally, the efficiency and effectiveness of NED are empirically demonstrated using real-world graphs. | To compare inter-graph nodes, neighborhood-based similarities have been used. Some primitive methods directly compare the ego-nets (direct neighbors) of two nodes using the Jaccard coefficient, the Sørensen–Dice coefficient, or the Ochiai coefficient @cite_13 @cite_7 @cite_19 . Ness @cite_20 and NeMa @cite_27 expand on this idea by using the structure of the @math -hop neighborhood of each node. However, for all these methods, if two nodes do not share common neighbors (or neighbors with the same labels), the distance will always be @math , even if the neighborhoods are isomorphic to each other. | {
"cite_N": [
"@cite_7",
"@cite_19",
"@cite_27",
"@cite_13",
"@cite_20"
],
"mid": [
"2005207065",
"2134008243",
"1509240356",
"2017099446",
"2020657191"
],
"abstract": [
"A new form of document coupling called co-citation is defined as the frequency with which two documents are cited together. The co-citation frequency of two scientific papers can be determined by comparing lists of citing documents in the Science Citation Index and counting identical entries. Networks of co-cited papers can be generated for specific scientific specialties, and an example is drawn from the literature of particle physics. Co-citation patterns are found to differ significantly from bibliographic coupling patterns, but to agree generally with patterns of direct citation. Clusters of co-cited papers provide a new way to study the specialty structure of science. They may provide a new approach to indexing and to the creation of SDI profiles.",
"Network clustering (or graph partitioning) is an important task for the discovery of underlying structures in networks. Many algorithms find clusters by maximizing the number of intra-cluster edges. While such algorithms find useful and interesting structures, they tend to fail to identify and isolate two kinds of vertices that play special roles - vertices that bridge clusters (hubs) and vertices that are marginally connected to clusters (outliers). Identifying hubs is useful for applications such as viral marketing and epidemiology since hubs are responsible for spreading ideas or disease. In contrast, outliers have little or no influence, and may be isolated as noise in the data. In this paper, we proposed a novel algorithm called SCAN (Structural Clustering Algorithm for Networks), which detects clusters, hubs and outliers in networks. It clusters vertices based on a structural similarity measure. The algorithm is fast and efficient, visiting each vertex only once. An empirical evaluation of the method using both synthetic and real datasets demonstrates superior performance over other methods such as the modularity-based algorithms.",
"It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.",
"The aim of this paper is to understand the interrelations among relations within concrete social groups. Social structure is sought, not ideal types, although the latter are relevant to interrelations among relations. From a detailed social network, patterns of global relations can be extracted, within which classes of equivalently positioned individuals are delineated. The global patterns are derived algebraically through a ‘functorial’ mapping of the original pattern. Such a mapping (essentially a generalized homomorphism) allows systematically for concatenation of effects through the network. The notion of functorial mapping is of central importance in the ‘theory of categories,’ a branch of modern algebra with numerous applications to algebra, topology, logic. The paper contains analyses of two social networks, exemplifying this approach.",
"Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches. In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost."
]
} |
1602.02358 | 2293931279 | Node similarity is a fundamental problem in graph analytics. However, node similarity between nodes in different graphs (inter-graph nodes) has not received a lot of attention yet. The inter-graph node similarity is important in learning a new graph based on the knowledge of an existing graph (transfer learning on graphs) and has applications in biological, communication, and social networks. In this paper, we propose a novel distance function for measuring inter-graph node similarity with edit distance, called NED. In NED, two nodes are compared according to their local neighborhood structures which are represented as unordered k-adjacent trees, without relying on labels or other assumptions. Since the computation problem of tree edit distance on unordered trees is NP-Complete, we propose a modified tree edit distance, called TED*, for comparing neighborhood trees. TED* is a metric distance, as the original tree edit distance, but more importantly, TED* is polynomially computable. As a metric distance, NED admits efficient indexing, provides interpretable results, and shows to perform better than existing approaches on a number of data analysis tasks, including graph de-anonymization. Finally, the efficiency and effectiveness of NED are empirically demonstrated using real-world graphs. | An approach that can work for inter-graph nodes is to extract features from each node using the neighborhood structure and compare these features. OddBall @cite_29 and NetSimile @cite_24 construct the feature vectors by using the ego-nets (direct neighbors) information such as the degree of the node, the number of edges in the ego-net and so on. ReFeX @cite_5 is a framework to construct the structural features recursively. The main problem with this approach is that the choice of features is ad-hoc and the distance function is not easy to interpret. Furthermore, in many cases, the distance function may be zero even for nodes with different neighborhoods. 
Actually, for the more advanced method, ReFeX, the distance function is not even a metric. | {
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_29"
],
"mid": [
"1980331354",
"2001325956",
"1492581097"
],
"abstract": [
"Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a \"signature\" for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NETSIMILE, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NETSIMILE outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks.",
"Given a graph, how can we extract good features for the nodes? For example, given two large graphs from the same domain, how can we use information in one to do classification in the other (i.e., perform across-network classification or transfer learning on graphs)? Also, if one of the graphs is anonymized, how can we use information in one to de-anonymize the other? The key step in all such graph mining tasks is to find effective node features. We propose ReFeX (Recursive Feature eXtraction), a novel algorithm, that recursively combines local (node-based) features with neighborhood (egonet-based) features; and outputs regional features -- capturing \"behavioral\" information. We demonstrate how these powerful regional features can be used in within-network and across-network classification and de-anonymization tasks -- without relying on homophily, or the availability of class labels. The contributions of our work are as follows: (a) ReFeX is scalable and (b) it is effective, capturing regional (\"behavioral\") information in large graphs. We report experiments on real graphs from various domains with over 1M edges, where ReFeX outperforms its competitors on typical graph mining tasks like network classification and de-anonymization.",
"Given a large, weighted graph, how can we find anomalies? Which rules should be violated, before we label a node as an anomaly? We propose the oddball algorithm, to find such nodes The contributions are the following: (a) we discover several new rules (power laws) in density, weights, ranks and eigenvalues that seem to govern the so-called “neighborhood sub-graphs” and we show how to use these rules for anomaly detection; (b) we carefully choose features, and design oddball, so that it is scalable and it can work un-supervised (no user-defined constants) and (c) we report experiments on many real graphs with up to 1.6 million nodes, where oddball indeed spots unusual nodes that agree with intuition."
]
} |
1602.02358 | 2293931279 | Node similarity is a fundamental problem in graph analytics. However, node similarity between nodes in different graphs (inter-graph nodes) has not received a lot of attention yet. The inter-graph node similarity is important in learning a new graph based on the knowledge of an existing graph (transfer learning on graphs) and has applications in biological, communication, and social networks. In this paper, we propose a novel distance function for measuring inter-graph node similarity with edit distance, called NED. In NED, two nodes are compared according to their local neighborhood structures which are represented as unordered k-adjacent trees, without relying on labels or other assumptions. Since the computation problem of tree edit distance on unordered trees is NP-Complete, we propose a modified tree edit distance, called TED*, for comparing neighborhood trees. TED* is a metric distance, as the original tree edit distance, but more importantly, TED* is polynomially computable. As a metric distance, NED admits efficient indexing, provides interpretable results, and shows to perform better than existing approaches on a number of data analysis tasks, including graph de-anonymization. Finally, the efficiency and effectiveness of NED are empirically demonstrated using real-world graphs. | Another method that has been used for comparing biological networks, such as protein-protein interaction networks (PPI) and metabolic networks, is to extract a feature vector using graphlets @cite_21 @cite_3 . Graphlets are small connected non-isomorphic induced subgraphs of a large network @cite_10 and generalize the notion of the degree of a node. However, they are also limited to the small neighborhood around each node and as the size of the neighborhood increases the accuracy of this method decreases. | {
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_3"
],
"mid": [
"2141185847",
"2148762636",
""
],
"abstract": [
"Motivation: Discovering and understanding patterns in networks of protein–protein interactions (PPIs) is a central problem in systems biology. Alignments between these networks aid functional understanding as they uncover important information, such as evolutionary conserved pathways, protein complexes and functional orthologs. A few methods have been proposed for global PPI network alignments, but because of NP-completeness of underlying sub-graph isomorphism problem, producing topologically and biologically accurate alignments remains a challenge. Results: We introduce a novel global network alignment tool, Lagrangian GRAphlet-based ALigner (L-GRAAL), which directly optimizes both the protein and the interaction functional conservations, using a novel alignment search heuristic based on integer programming and Lagrangian relaxation. We compare L-GRAAL with the state-of-the-art network aligners on the largest available PPI networks from BioGRID and observe that L-GRAAL uncovers the largest common sub-graphs between the networks, as measured by edge-correctness and symmetric sub-structures scores, which allow transferring more functional information across networks. We assess the biological quality of the protein mappings using the semantic similarity of their Gene Ontology annotations and observe that L-GRAAL best uncovers functionally conserved proteins. Furthermore, we introduce for the first time a measure of the semantic similarity of the mapped interactions and show that L-GRAAL also uncovers best functionally conserved interactions. In addition, we illustrate on the PPI networks of baker's yeast and human the ability of L-GRAAL to predict new PPIs. Finally, L-GRAAL's results are the first to show that topological information is more important than sequence information for uncovering functionally conserved interactions. Availability and implementation: L-GRAAL is coded in C++. Software is available at: http: bio-nets.doc.ic.ac.uk L-GRAAL . 
Contact: n.malod-dognin@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.",
"Motivation: Networks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have as accurate a model as possible. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced. Results: One example of large and complex networks involves protein--protein interaction (PPI) networks. We analyze PPI networks of yeast Saccharomyces cerevisiae and fruitfly Drosophila melanogaster using a newly introduced measure of local network structure as well as the standardly used measures of global network structure. We examine the fit of four different network models, including Erdos-Renyi, scale-free and geometric random network models, to these PPI networks with respect to the measures of local and global network structure. We demonstrate that the currently accepted scale-free model of PPI networks fails to fit the data in several respects and show that a random geometric model provides a much more accurate model of the PPI data. We hypothesize that only the noise in these networks is scale-free. Conclusions: We systematically evaluate how well-different network models fit the PPI networks. We show that the structure of PPI networks is better modeled by a geometric random graph than by a scale-free model. Supplementary information: Supplementary information is available at http: www.cs.utoronto.ca juris data data ppiGRG04",
""
]
} |
1602.02358 | 2293931279 | Node similarity is a fundamental problem in graph analytics. However, node similarity between nodes in different graphs (inter-graph nodes) has not received a lot of attention yet. The inter-graph node similarity is important in learning a new graph based on the knowledge of an existing graph (transfer learning on graphs) and has applications in biological, communication, and social networks. In this paper, we propose a novel distance function for measuring inter-graph node similarity with edit distance, called NED. In NED, two nodes are compared according to their local neighborhood structures which are represented as unordered k-adjacent trees, without relying on labels or other assumptions. Since the computation problem of tree edit distance on unordered trees is NP-Complete, we propose a modified tree edit distance, called TED*, for comparing neighborhood trees. TED* is a metric distance, as the original tree edit distance, but more importantly, TED* is polynomially computable. As a metric distance, NED admits efficient indexing, provides interpretable results, and shows to perform better than existing approaches on a number of data analysis tasks, including graph de-anonymization. Finally, the efficiency and effectiveness of NED are empirically demonstrated using real-world graphs. | Another node similarity for inter-graph nodes based only on the network structure is proposed by @cite_11 which is called HITS-based similarity. In HITS-based similarity, all pairs of nodes between two graphs are virtually connected. The similarity between a pair of inter-graph nodes is calculated using the following similarity matrix: where @math and @math are the adjacency matrices of the two graphs and @math is the similarity matrix in the @math iteration. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2113367658"
],
"abstract": [
"We introduce a concept of similarity between vertices of directed graphs. Let GA and GB be two directed graphs with, respectively, nA and nB vertices. We define an nB nA similarity matrix S whose real entry sij expresses how similar vertex j (in GA) is to vertex i (in GB): we say that sij is their similarity score. The similarity matrix can be obtained as the limit of the normalized even iterates of Sk+1 = BSkAT + BTSkA, where A and B are adjacency matrices of the graphs and S0 is a matrix whose entries are all equal to 1. In the special case where GA = GB = G, the matrix S is square and the score sij is the similarity score between the vertices i and j of G. We point out that Kleinberg's \"hub and authority\" method to identify web-pages relevant to a given query can be viewed as a special case of our definition in the case where one of the graphs has two vertices and a unique directed edge between them. In analogy to Kleinberg, we show that our similarity scores are given by the components of a dominant eigenvector of a nonnegative matrix. Potential applications of our similarity concept are numerous. We illustrate an application for the automatic extraction of synonyms in a monolingual dictionary."
]
} |
1602.01665 | 2296689167 | Query-expansion via pseudo-relevance feedback is a popular method of overcoming the problem of vocabulary mismatch and of increasing average retrieval effectiveness. In this paper, we develop a new method that estimates a query topic model from a set of pseudo-relevant documents using a new language modelling framework. We assume that documents are generated via a mixture of multivariate Polya distributions, and we show that by identifying the topical terms in each document, we can appropriately select terms that are likely to belong to the query topic model. The results of experiments on several TREC collections show that the new approach compares favourably to current state-of-the-art expansion methods. | In the language modelling framework, there has been a number of initial approaches to building query topic models. The idea of a query model was introduced by Zhai @cite_12 and the simple mixture model (SMM) approach to feedback was developed. The SMM approach aims to extract the topical aspects of the top @math documents assuming that the same multinomial mixture has generated each document in @math . By fixing the initial mixture parameter ( @math ), the topical aspects of the top @math documents can be estimated using Expectation-Maximisation (EM). Regularised mixture models @cite_2 have been developed that aim to eliminate some of the free parameters in the SMM. However, this approach has been shown to be inferior to the SMM @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_2"
],
"mid": [
"1966093341",
"1964348731",
"2052088591"
],
"abstract": [
"We systematically compare five representative state-of-the-art methods for estimating query language models with pseudo feedback in ad hoc information retrieval, including two variants of the relevance language model, two variants of the mixture feedback model, and the divergence minimization estimation method. Our experiment results show that a variant of relevance model and a variant of the mixture model tend to outperform other methods. We further propose several heuristics that are intuitively related to the good retrieval performance of an estimation method, and show that the variations in how these heuristics are implemented in different methods provide a good explanation of many empirical observations.",
"The language modeling approach to retrieval has been shown to perform well empirically. One advantage of this new approach is its statistical foundations. However, feedback, as one important component in a retrieval system, has only been dealt with heuristically in this new retrieval approach: the original query is usually literally expanded by adding additional terms to it. Such expansion-based feedback creates an inconsistent interpretation of the original and the expanded query. In this paper, we present a more principled approach to feedback in the language modeling approach. Specifically, we treat feedback as updating the query language model based on the extra evidence carried by the feedback documents. Such a model-based feedback strategy easily fits into an extension of the language modeling approach. We propose and evaluate two different approaches to updating a query language model based on feedback documents, one based on a generative probabilistic model of feedback documents and one based on minimization of the KL-divergence over feedback documents. Experiment results show that both approaches are effective and outperform the Rocchio feedback approach.",
"Pseudo-relevance feedback has proven to be an effective strategy for improving retrieval accuracy in all retrieval models. However the performance of existing pseudo feedback methods is often affected significantly by some parameters, such as the number of feedback documents to use and the relative weight of original query terms; these parameters generally have to be set by trial-and-error without any guidance. In this paper, we present a more robust method for pseudo feedback based on statistical language models. Our main idea is to integrate the original query with feedback documents in a single probabilistic mixture model and regularize the estimation of the language model parameters in the model so that the information in the feedback documents can be gradually added to the original query. Unlike most existing feedback methods, our new method has no parameter to tune. Experiment results on two representative data sets show that the new method is significantly more robust than a state-of-the-art baseline language modeling approach for feedback with comparable or better retrieval accuracy."
]
} |
1602.02023 | 1975716803 | We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods. | Some methods use a combination of silhouette constraints and sparse feature correspondences to estimate, at best, a medium scale non-rigid 4D surface detail @cite_4 . Other approaches use stereo-based photo-consistency constraints in addition to silhouettes to achieve denser estimates of finer scale deformations @cite_18 @cite_7 . It is an involved problem to phrase dense stereo-based surface refinement as a continuous optimization problem, as it is done in variational approaches @cite_12 . 
Thus, stereo-based refinement in performance capture often resorts to discrete surface displacement sampling, which is less efficient, and with which globally smooth and coherent solutions are harder to achieve. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_12",
"@cite_7"
],
"mid": [
"2117888987",
"",
"2006227471",
"2109752307"
],
"abstract": [
"Creating realistic animated models of people is a central task in digital content production. Traditionally, highly skilled artists and animators construct shape and appearance models for digital character. They then define the character's motion at each time frame or specific key-frames in a motion sequence to create a digital performance. Increasingly, producers are using motion capture technology to record animations from an actor's performance. This technology reduces animation production time and captures natural movements to create a more believable production. However, motion capture requires the use of specialist suits and markers and only records skelet al motion. It lacks the detailed secondary surface dynamics of cloth and hair that provide the visual realism of a live performance. Over the last decade, we have investigated studio capture technology with the objective of creating models of real people that accurately reflect the time-varying shape and appearance of the whole body with clothing. Surface capture is a fully automated system for capturing a human's shape and appearance as well as motion from multiple video cameras to create highly realistic animated content from an actor's performance in full wardrobe. Our system solves two key problems in performance capture: scene capture from a limited number of camera views and efficient scene representation for visualization",
"",
"In this article, we introduce a new global optimization method to the field of multiview 3D reconstruction. While global minimization has been proposed in a discrete formulation in form of the maxflow-mincut framework, we suggest the use of a continuous convex relaxation scheme. Specifically, we propose to cast the problem of 3D shape reconstruction as one of minimizing a spatially continuous convex functional. In qualitative and quantitative evaluation we demonstrate several advantages of the proposed continuous formulation over the discrete graph cut solution. Firstly, geometric properties such as weighted boundary length and surface area are represented in a numerically consistent manner: The continuous convex relaxation assures that the algorithm does not suffer from metrication errors in the sense that the reconstruction converges to the continuous solution as the spatial resolution is increased. Moreover, memory requirements are reduced, allowing for globally optimal reconstructions at higher resolutions. We study three different energy models for multiview reconstruction, which are based on a common variational template unifying regional volumetric terms and on-surface photoconsistency. The three models use data measurements at increasing levels of sophistication. While the first two approaches are based on a classical silhouette-based volume subdivision, the third one relies on stereo information to define regional costs. Furthermore, this scheme is exploited to compute a precise photoconsistency measure as opposed to the classical estimation. All three models are compared on standard data sets demonstrating their advantages and shortcomings. For the third one, which gives the most accurate results, a more exhaustive qualitative and quantitative evaluation is presented.",
"This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm can jointly reconstruct spatio-temporally coherent geometry, motion and textural surface appearance of actors that perform complex and rapid moves. Furthermore, since our algorithm is purely meshbased and makes as few as possible prior assumptions about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To serve this purpose our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser-scan of the tracked subject mimic the recorded performance. Also small-scale time-varying shape detail is recovered by applying model-guided multi-view stereo to refine the model surface. Our method delivers captured performance data at high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques."
]
} |
1602.02023 | 1975716803 | We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods. | An alternative way to recover fine-scale deforming surface detail is to use shading-based methods, shape-from-shading or photometric stereo @cite_21 . Many of these approaches require controlled and calibrated lighting @cite_10 @cite_5 , which reduces their applicability. More recently, shading-based refinement of dynamic scenes captured under more general lighting was shown @cite_19 , but these approaches are computationally challenging as they require to solve an inverse rendering problem to obtain estimates of illumination, appearance and shape at the same time. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_21",
"@cite_10"
],
"mid": [
"2113507517",
"2040436296",
"2121253532",
"2148151066"
],
"abstract": [
"We present an approach to add true fine-scale spatio-temporal shape detail to dynamic scene geometry captured from multi-view video footage. Our approach exploits shading information to recover the millimeter-scale surface structure, but in contrast to related approaches succeeds under general unconstrained lighting conditions. Our method starts off from a set of multi-view video frames and an initial series of reconstructed coarse 3D meshes that lack any surface detail. In a spatio-temporal maximum a posteriori probability (MAP) inference framework, our approach first estimates the incident illumination and the spatially-varying albedo map on the mesh surface for every time instant. Thereafter, albedo and illumination are used to estimate the true geometric detail visible in the images and add it to the coarse reconstructions. The MAP framework uses weak temporal priors on lighting, albedo and geometry which improve reconstruction quality yet allow for temporal variations in the data.",
"We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes, which were impossible to produce with previous methods. The meshes exhibit details on the order of a few millimeters, and represent the performance over human-size working volumes at a temporal resolution of 60Hz.",
"We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the l1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.",
"We present an algorithm and the associated capture methodology to acquire and track the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data and register it over time to generate a single deforming surface. Experiments were performed on video sequences of untextured cloth, filmed under spatially separated red, green, and blue light sources. Our first finding is that using zero-depth-silhouettes as the initial boundary condition already produces rather smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D optical flow, one can register the first frame's reconstruction to every subsequent frame."
]
} |
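The controlled-lighting approaches in the row above build on photometric stereo. Under the Lambertian assumption with known, distant lights, recovering a pixel's albedo and surface normal reduces to a small least-squares problem. A minimal sketch — the function name and the synthetic setup are illustrative, not taken from the cited papers:

```python
import numpy as np

def photometric_stereo(L, I):
    """L: (k, 3) unit light directions; I: (k,) observed intensities.
    Solves I = L @ (albedo * n) in the least-squares sense."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return albedo, normal

# Synthetic check: render a pixel with a known normal, then recover it.
true_n = np.array([0.0, 0.6, 0.8])
true_albedo = 0.5
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 0, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
I = true_albedo * L @ true_n  # all positive, so no shadowed observations

albedo, n = photometric_stereo(L, I)
print(albedo, n)  # ≈ 0.5 and ≈ [0, 0.6, 0.8]
```

Real multispectral or dome setups add shadow handling and per-channel calibration, but the core per-pixel solve is this small.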
1602.02023 | 1975716803 | We present a new effective way for performance capture of deforming meshes with fine-scale time-varying surface detail from multi-view video. Our method builds up on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. As they only capture models of coarse-to-medium scale detail, fine scale deformation detail is often done in a second pass by using stereo constraints, features, or shading-based refinement. In this paper, we propose a new effective and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, and a set of 2D Gaussians for the images. The fine scale deformation of all mesh vertices that maximizes photo-consistency can be efficiently found by densely optimizing a new model-to-image consistency energy on all vertex positions. A principal advantage is that our problem formulation yields a smooth closed form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding, or discrete sampling of surface displacement values are also not needed. We show several reconstructions of human subjects wearing loose clothing, and we qualitatively and quantitatively show that we robustly capture more detail than related methods. | The method we propose has some similarity to the work of Sand et al. @cite_20 who capture skin deformation as a displacement field on a template mesh; however, they require marker-based skeleton capture, and only fit the surface to match the silhouettes in multi-view video. Our problem formulation is inspired by the work of Stoll et al. @cite_22 who used a collection of Gaussian functions in 3D and 2D for marker-less skeletal pose estimation. Estimation of surface detail was not the goal of that work. Our paper extends their basic concept to the different problem of dense stereo-based surface estimation using continuous optimization of a smooth energy that can be formulated in closed form, and that has analytic derivatives. | {
"cite_N": [
"@cite_22",
"@cite_20"
],
"mid": [
"2092146246",
"2156755925"
],
"abstract": [
"We present an approach for modeling the human body by Sums of spatial Gaussians (SoG), allowing us to perform fast and high-quality markerless motion capture from multi-view video sequences. The SoG model is equipped with a color model to represent the shape and appearance of the human and can be reconstructed from a sparse set of images. Similar to the human body, we also represent the image domain as SoG that models color consistent image blobs. Based on the SoG models of the image and the human body, we introduce a novel continuous and differentiable model-to-image similarity measure that can be used to estimate the skeletal motion of a human at 5–15 frames per second even for many camera views. In our experiments, we show that our method, which does not rely on silhouettes or training data, offers a good balance between accuracy and computational cost.",
"We describe a method for the acquisition of deformable human geometry from silhouettes. Our technique uses a commercial tracking system to determine the motion of the skeleton, then estimates geometry for each bone using constraints provided by the silhouettes from one or more cameras. These silhouettes do not give a complete characterization of the geometry for a particular point in time, but when the subject moves, many observations of the same local geometries allow the construction of a complete model. Our reconstruction algorithm provides a simple mechanism for solving the problems of view aggregation, occlusion handling, hole filling, noise removal, and deformation modeling. The resulting model is parameterized to synthesize geometry for new poses of the skeleton. We demonstrate this capability by rendering the geometry for motion sequences that were not included in the original datasets."
]
} |
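The Sums-of-Gaussians similarity measure described in the row above is smooth and differentiable because the overlap integral of two Gaussian blobs has a closed form. A generic sketch of that property — the normalization and pairwise-sum energy here are illustrative, not the exact formulation of Stoll et al.:

```python
import numpy as np

def gaussian_overlap(mu_i, sigma_i, mu_j, sigma_j):
    """Closed-form integral of the product of two isotropic d-D Gaussians."""
    mu_i, mu_j = np.asarray(mu_i, float), np.asarray(mu_j, float)
    d = mu_i.size
    s2 = sigma_i**2 + sigma_j**2
    dist2 = np.sum((mu_i - mu_j)**2)
    return (2 * np.pi * s2) ** (-d / 2) * np.exp(-dist2 / (2 * s2))

def sog_similarity(blobs_a, blobs_b):
    """Sum of pairwise overlaps between two collections of (mu, sigma) blobs."""
    return sum(gaussian_overlap(mi, si, mj, sj)
               for mi, si in blobs_a for mj, sj in blobs_b)

model = [(np.array([0.0, 0.0]), 1.0), (np.array([2.0, 0.0]), 1.0)]
image = [(np.array([0.1, 0.0]), 1.0)]
print(sog_similarity(model, image))  # increases as model and image blobs align
```

Because every term is an exponential of the blob parameters, the similarity has analytic derivatives with respect to the means, which is what makes gradient-based pose or surface optimization cheap.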
1602.01804 | 2295336459 | EU Directive 95/46/EC and the upcoming EU General Data Protection Regulation grant Europeans the right of access to data pertaining to them. Consumers can approach their service providers to obtain all personal data stored and processed there. Furthermore, they can demand erasure (or correction) of their data. We conducted an undercover field study to determine whether these rights can be exerted in practice. We assessed the behaviour of the vendors of 150 smartphone apps and 120 websites that are popular in Germany. Our deletion requests were fulfilled in 52% to 57% of the cases and less than half of the data provision requests were answered satisfactorily. Further, we observed instances of carelessness: About 20% of website owners would have disclosed our personal data to impostors. The results indicate that exerting privacy rights that have been introduced two decades ago is still a frustrating endeavour most of the time. | We are not aware of previous research studying the behaviour of online service providers to determine the effectiveness of the right of access to personal data in practice. @cite_0 analysed the privacy attitudes and the behaviour of app developers. However, in contrast to our undercover study, they conducted interviews and used an online survey. Most research about online privacy focuses on software, e.g., app permissions as well as what kind of data is being collected by apps and whom it is shared with. Another line studies legal and usability aspects, for instance by analysing privacy policies. In the following we will review recent work along these lines. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2069101457"
],
"abstract": [
"Smartphone app developers have to make many privacy-related decisions about what data to collect about end-users, and how that data is used. We explore how app developers make decisions about privacy and security. Additionally, we examine whether any privacy and security behaviors are related to characteristics of the app development companies. We conduct a series of interviews with 13 app developers to obtain rich qualitative information about privacy and security decision-making. We use an online survey of 228 app developers to quantify behaviors and test our hypotheses about the relationship between privacy and security behaviors and company characteristics. We find that smaller companies are less likely to demonstrate positive privacy and security behaviors. Additionally, although third-party tools for ads and analytics are pervasive, developers aren’t aware of the data collected by these tools. We suggest tools and opportunities to reduce the barriers for app developers to implement privacy and security best practices."
]
} |
1602.01804 | 2295336459 | EU Directive 95/46/EC and the upcoming EU General Data Protection Regulation grant Europeans the right of access to data pertaining to them. Consumers can approach their service providers to obtain all personal data stored and processed there. Furthermore, they can demand erasure (or correction) of their data. We conducted an undercover field study to determine whether these rights can be exerted in practice. We assessed the behaviour of the vendors of 150 smartphone apps and 120 websites that are popular in Germany. Our deletion requests were fulfilled in 52% to 57% of the cases and less than half of the data provision requests were answered satisfactorily. Further, we observed instances of carelessness: About 20% of website owners would have disclosed our personal data to impostors. The results indicate that exerting privacy rights that have been introduced two decades ago is still a frustrating endeavour most of the time. | @cite_10 monitored HTTP(S) transmissions of popular apps and found that many apps share information with third-party websites. TaintDroid is an effort to detect privacy violations by means of taint analysis @cite_7 . The second GPEN Privacy Sweep @cite_9 conducted by privacy enforcement authorities found that the majority of apps does not provide sufficient information for the user to understand why it is necessary to grant the requested permissions, which were found to be excessive in relation to the functionality for 31%. Haystack (https://www.haystack.mobi) alerts users about data leaks and collects data for research on privacy in mobile ecosystems. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_7"
],
"mid": [
"1981568085",
"2159779522",
"1963971515"
],
"abstract": [
"In a series of experiments, we examined how the timing impacts the salience of smartphone app privacy notices. In a web survey and a field experiment, we isolated different timing conditions for displaying privacy notices: in the app store, when an app is started, during app use, and after app use. Participants installed and played a history quiz app, either virtually or on their phone. After a distraction or delay they were asked to recall the privacy notice's content. Recall was used as a proxy for the attention paid to and salience of the notice. Showing the notice during app use significantly increased recall rates over showing it in the app store. In a follow-up web survey, we tested alternative app store notices, which improved recall but did not perform as well as notices shown during app use. The results suggest that even if a notice contains information users care about, it is unlikely to be recalled if only shown in the app store.",
"",
"Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their privacy-sensitive data. We address these shortcomings with TaintDroid, an efficient, systemwide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides real-time analysis by leveraging Android's virtualized execution environment. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of misappropriation of users' location and device identification information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications."
]
} |
1602.01345 | 2253887218 | Hearing Aid (HA) algorithms need to be tuned (“fitted”) to match the impairment of each specific patient. The lack of a fundamental HA fitting theory is a strong contributing factor to an unsatisfying sound experience for about 20% of HA patients. This paper proposes a probabilistic modeling approach to the design of HA algorithms. The proposed method relies on a generative probabilistic model for the hearing loss problem and provides for automated inference of the corresponding (1) signal processing algorithm, (2) the fitting solution, as well as (3) a principled performance evaluation metric. All three tasks are realized as message passing algorithms in a factor graph representation of the generative model, which in principle allows for fast implementation on HA or mobile device hardware. The methods are theoretically worked out and simulated with a custom-built factor graph toolbox for a specific hearing loss model. | The state-of-the-art in hearing aid signal processing is well described by Kates @cite_6 and Hamacher @cite_33 . More specifically, the literature on dynamic range compression technology for hearing loss compensation is nicely summarized by @cite_10 and @cite_4 . In both works, DRC circuits are developed through direct design, i.e., the hearing loss problem is not an explicit part of the solution. In contrast, a problem-based signal processing solution for hearing loss compensation was first formulated in @cite_31 , where an optimal compensation gain is computed through Kalman filtering. The current paper extends that work by proposing a fully probabilistic modelling approach for the SP, PE and MC tasks as well as an in-situ executable database collection method. Moreover, the current work presents a factor graph framework for efficient execution of these tasks through message passing on FFGs. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_6",
"@cite_31",
"@cite_10"
],
"mid": [
"2078817191",
"2157252316",
"629565630",
"2113159496",
""
],
"abstract": [
"The purpose of this paper is to study the effects of dynamic-range compression and linear amplification on speech intelligibility and quality for hearing-impaired listeners. The paper focuses on the relative benefit of compression compared to linear amplification and the effect of varying the number of compression channels and the compression time constants. The stimuli are sentences in a background of stationary speech-shaped noise. Speech intelligibility and quality indices are used to predict the listener responses for a mild, moderate sloping, and moderate severe hearing loss. The results show a strong interaction between signal processing, speech intensity, and hearing loss. The results are interpreted in terms of the two major effects of compression on speech: the increase in audibility and the decrease in temporal and spectral envelope contrast.",
"The development of hearing aids incorporates two aspects, namely, the audiological and the technical point of view. The former focuses on items like the recruitment phenomenon, the speech intelligibility of hearing-impaired persons, or just on the question of hearing comfort. Concerning these subjects, different algorithms intending to improve the hearing ability are presented in this paper. These are automatic gain controls, directional microphones, and noise reduction algorithms. Besides the audiological point of view, there are several purely technical problems which have to be solved. An important one is the acoustic feedback. Another instance is the proper automatic control of all hearing aid components by means of a classification unit. In addition to an overview of state-of-the-art algorithms, this paper focuses on future trends.",
"Preface Hearing-Aid Technology Types of Hearing Aids From Analog to Digital Digital Circuit Components Batteries Concluding Remarks References Signal Processing Basics Signal and System Properties Discrete Fourier Transform Filters and Filter Banks Block Processing Digital System Concerns Concluding Remarks References The Electroacoustic System Hearing Aid System Head and Ear Microphone and Receiver Vent Acoustics Occlusion Effect Concluding Remarks References Directional Microphones Hearing-Aid Microphones Directional Response Patterns Frequency Response Magnitude Frequency Response Microphone Mismatch Interaction with Vents Microphone Noise Microphones on the Head Microphone Performance Indices Rooms and Reverberation Benefit in the Real World Concluding Remarks References Adaptive and Multi-Microphone Arrays Two-Microphone Adaptive Array Delay-And-Sum Beamforming Adaptive Arrays Superdirective Arrays Widely-Spaced Arrays Array Benefits Concluding Remarks References Wind Noise Turbulence Hearing-Aid Measurements Signal Characteristics Wind-Noise Reduction Concluding Remarks References Feedback Cancellation The Feedback System Gain-Reduction Solutions Adaptive Feedback Cancellation Processing Limitations Concluding Remarks References Dynamic-Range Compression Does Compression Help? Algorithm Design Concerns Single-Channel Compression Multi-Channel Compression Frequency-Domain Compression Frequency Warping Concluding Remarks References Single-Microphone Noise Suppression Properties of Speech and Noise Signals Low-Level Expansion Envelope Valley Tracking Bandwidth Reduction Envelope Modulation Filters Concluding Remarks References Spectral Subtraction Noise Estimation Wiener Filter Spectral Subtraction Algorithm Effectiveness Concluding Remarks References Spectral Contrast Enhancement Auditory Filters in the Damaged Cochlea Spectral Valley Suppression Spectral Contrast Modification Excess Upward Spread of Masking F2 F1 Ratio Processing Comparison Combining Spectral Contrast Enhancement with Compression Concluding Remarks References Sound Classification The Rationale for Classification Signal Features Feature Selection Classifier Algorithms Classification Examples Concluding Remarks References Binaural Signal Processing The \"Cocktail Party\" Problem Signal Transmission Binaural Compression Binaural Noise Suppression Dichotic Band Splitting Concluding Remarks References Index",
"Modern hearing aids use Dynamic Range Compression (DRC) as the primary solution to combat Hearing Loss (HL). Unfortunately, common DRC based solutions to hearing loss are not directly based on a proper mathematical or algorithmic description of the hearing loss problem. In this paper, we propose a probabilistic model for describing hearing loss, and we use Bayesian inference for deriving optimal HL compensation algorithms. We will show that, for a simple specific generative HL model, the inferred HL compensation algorithm corresponds to the classic DRC solution. An advantage to our approach is that it is readily extensible to more complex hearing loss models, which by automated Bayesian inference would yield complex yet optimal hearing loss compensation algorithms.",
""
]
} |
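The dynamic range compression (DRC) circuits surveyed in the row above can be illustrated with a minimal single-band compressor: a smoothed level estimate drives a static gain curve that attenuates signal above a threshold by the compression ratio. The parameter values and one-pole envelope follower below are illustrative choices, not a clinical fitting rule:

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, tau=0.01):
    """Single-band feed-forward compressor with a one-pole level detector."""
    alpha = np.exp(-1.0 / (tau * fs))            # envelope smoothing coefficient
    env, y = 1e-9, np.empty_like(x)
    for n, s in enumerate(x):
        env = alpha * env + (1 - alpha) * abs(s)  # rectified, smoothed level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(level_db - threshold_db, 0.0)
        gain_db = -over * (1 - 1 / ratio)         # static compression curve
        y[n] = s * 10 ** (gain_db / 20)
    return y

fs = 16000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)            # loud tone, well above threshold
y = compress(x, fs)
print(np.max(np.abs(y[fs // 2:])) < np.max(np.abs(x)))  # True: steady state is attenuated
```

Multi-channel HA compressors apply this per frequency band with band-specific thresholds and ratios, which is exactly where the fitting problem discussed in the paper arises.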
1602.01345 | 2253887218 | Hearing Aid (HA) algorithms need to be tuned (“fitted”) to match the impairment of each specific patient. The lack of a fundamental HA fitting theory is a strong contributing factor to an unsatisfying sound experience for about 20% of HA patients. This paper proposes a probabilistic modeling approach to the design of HA algorithms. The proposed method relies on a generative probabilistic model for the hearing loss problem and provides for automated inference of the corresponding (1) signal processing algorithm, (2) the fitting solution, as well as (3) a principled performance evaluation metric. All three tasks are realized as message passing algorithms in a factor graph representation of the generative model, which in principle allows for fast implementation on HA or mobile device hardware. The methods are theoretically worked out and simulated with a custom-built factor graph toolbox for a specific hearing loss model. | The fully probabilistic treatment of hearing loss compensation that is proposed in this work is, to our knowledge, new to the hearing aid literature. However, the idea of inferring audio processing algorithms through inference in a generative probabilistic model goes back at least to Roweis @cite_19 . More recently, Rennie and colleagues have described several audio processing algorithms for speech recognition and source separation based on probabilistic inference through message passing in a graphical model @cite_35 . More generally, algorithm design based on inference in generative probabilistic models is an increasingly popular technique in the Bayesian machine learning literature, e.g. @cite_8 , @cite_15 . | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_35",
"@cite_8"
],
"mid": [
"2103139809",
"1503398984",
"2109886546",
"1663973292"
],
"abstract": [
"Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model.We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.",
"Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.",
"We address the problem of single-channel speech separation and recognition using loopy belief propagation in a way that enables efficient inference for an arbitrary number of speech sources. The graphical model consists of a set of N Markov chains, each of which represents a language model or grammar for a given speaker. A Gaussian mixture model with shared states is used to model the hidden acoustic signal for each grammar state of each source. The combination of sources is modeled in the log spectrum domain using non-linear interaction functions. Previously, temporal inference in such a model has been performed using an N-dimensional Viterbi algorithm that scales exponentially with the number of sources. In this paper, we describe a loopy message passing algorithm that scales linearly with language model size. The algorithm achieves human levels of performance, and is an order of magnitude faster than competitive systems for two speakers.",
"Christopher M. Bishop, Information Science and Statistics, Springer 2006, 738 pages. As the author writes in the preface of the book, pattern recognition has its origin in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and they have undergone substantial development over the past years. Bayesian methods are widely used, while graphical models have emerged as a general framework for describing and applying probabilistic models. Similarly new models based on kernels have had significant impact on both algorithms and applications. This textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduate or first year PhD students, as well as researchers and practitioners. It can be considered as an introductory course to the subject. The first four chapters are devoted to the concepts of Probability and Statistics that are needed for reading the rest of the book, so we can imagine that the speed is high in order to get from zero to infinity. I believe that it is better to study the book after a previous course on Probability and Statistics. On the other hand, a basic knowledge of linear algebra and multivariate calculus is assumed. The other chapters give to a classic probabilist or statistician a point of view on some applications that are very interesting but far from his usual world. In all the text the mathematical aspects are at the second level in relation with the ideas and intuitions that the author wants to communicate. The book is supported by a great deal of additional material, including lecture slides as well as the complete set of figures used in it, and the reader is encouraged to visit the book web site for the latest information. So it can be very useful for a course or a talk about the subject."
]
} |
1602.01345 | 2253887218 | Hearing Aid (HA) algorithms need to be tuned (“fitted”) to match the impairment of each specific patient. The lack of a fundamental HA fitting theory is a strong contributing factor to an unsatisfying sound experience for about 20% of HA patients. This paper proposes a probabilistic modeling approach to the design of HA algorithms. The proposed method relies on a generative probabilistic model for the hearing loss problem and provides for automated inference of the corresponding (1) signal processing algorithm, (2) the fitting solution, as well as (3) a principled performance evaluation metric. All three tasks are realized as message passing algorithms in a factor graph representation of the generative model, which in principle allows for fast implementation on HA or mobile device hardware. The methods are theoretically worked out and simulated with a custom-built factor graph toolbox for a specific hearing loss model. | The encompassing prior approach to nested model comparison is described in @cite_36 . More discussion on Bayesian methods for comparing constrained models is available in @cite_5 , @cite_27 and @cite_26 . | {
"cite_N": [
"@cite_36",
"@cite_5",
"@cite_27",
"@cite_26"
],
"mid": [
"2143933215",
"2068887647",
"2151312804",
"2072837245"
],
"abstract": [
"This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of encompassing priors is introduced, that is, a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing and a constrained model has a very nice interpretation: it is the ratio of the proportion of the prior and posterior distribution of the encompassing model in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors will render a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means; a contingency table analysis with ordered odds-ratios; and a multilevel model with ordered slopes. 1 Inequality constrained statistical models Researchers often have one or more (competing) theories about their field of research. Consider, for example, theories about the effect of behavioral therapy versus medication for children with an attention deficit disorder (ADD). Some researchers in this area believe medication is the only effective treatment for ADD, some believe strongly in behavioral therapy, and others may expect an additive effect of both therapies. To test or compare the plausibility of these theories they need to be translated into statistical models. Subsequently, empirical data can be used to determine which model is best. Inequality constraints on model parameters can be useful in the specification of statistical models. This paper deals with competing models that have the same parameter vector, but in one or more of the models parameters are subjected to inequality constraints. To continue the example, consider an experiment where children with ADD are randomly assigned to one of four conditions: no treatment (1), behavioral therapy (2), medication (3), and behavioral therapy plus medication (4). Let the outcome",
"Abstract Constrained parameter problems arise in a wide variety of applications, including bioassay, actuarial graduation, ordinal categorical data, response surfaces, reliability development testing, and variance component models. Truncated data problems arise naturally in survival and failure time studies, ordinal data models, and categorical data studies aimed at uncovering underlying continuous distributions. In many applications both parameter constraints and data truncation are present. The statistical literature on such problems is very extensive, reflecting both the problems’ widespread occurrence in applications and the methodological challenges that they pose. However, it is striking that so little of this applied and theoretical literature involves a parametric Bayesian perspective. From a technical viewpoint, this perhaps is not difficult to understand. The fundamental tool for Bayesian calculations in typical realistic models is (multidimensional) numerical integration, which often is problem...",
"The Bayes factor is a useful tool for evaluating sets of inequality and about equality constrained models. In the approach described, the Bayes factor for a constrained model with the encompassing model reduces to the ratio of two proportions, namely the proportion of, respectively, the encompassing prior and posterior in agreement with the constraints. This enables easy and straightforward estimation of the Bayes factor and its Monte Carlo Error. In this set-up, the issue of sensitivity to model specific prior distributions reduces to sensitivity to one prior distribution, that is, the prior for the encompassing model. It is shown that for specific classes of inequality constrained models, the Bayes factors for the constrained with the unconstrained model is virtually independent of the encompassing prior, that is, model selection is virtually objective.",
"An encompassing prior (EP) approach to facilitate Bayesian model selection for nested models with inequality constraints has been previously proposed. In this approach, samples are drawn from the prior and posterior distributions of an encompassing model that contains an inequality restricted version as a special case. The Bayes factor in favor of the inequality restriction then simplifies to the ratio of the proportions of posterior and prior samples consistent with the inequality restriction. This formalism has been applied almost exclusively to models with inequality or 'about equality' constraints. It is shown that the EP approach naturally extends to exact equality constraints by considering the ratio of the heights for the posterior and prior distributions at the point that is subject to test (i.e., the Savage-Dickey density ratio). The EP approach generalizes the Savage-Dickey ratio method, and can accommodate both inequality and exact equality constraints. The general EP approach is found to be a computationally efficient procedure to calculate Bayes factors for nested models. However, the EP approach to exact equality constraints is vulnerable to the Borel-Kolmogorov paradox, the consequences of which warrant careful consideration."
]
} |
1602.01226 | 2950327665 | Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project whilst also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. | Manual software testing is associated with problems in practice such as high cost and tediousness and error-proneness @cite_16 @cite_56 @cite_36 @cite_38 @cite_34 @cite_31 . 
Despite these problems, manual testing is still extensively used for system and acceptance testing in industrial practice. One reason is that state-of-practice test automation techniques primarily perform testing on lower levels of system abstraction, e.g. unit testing with JUnit @cite_15 . Attempts to apply the low-level techniques for high-level testing, e.g. system and acceptance tests, have resulted in complex test cases that are costly and difficult to maintain, presenting a need for high-level test automation techniques @cite_6 @cite_44 @cite_55 @cite_56 @cite_2 @cite_26 @cite_45 @cite_36 . Another reason for the lack of automation is presented in research as the inability to automate all testing @cite_6 @cite_14 @cite_46 @cite_47 @cite_37 . This stems from the inability of scripted test cases to identify defects that are not explicitly asserted, which implies a need for, at least some level of, manual or exploratory testing @cite_23 . | {
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_26",
"@cite_14",
"@cite_37",
"@cite_15",
"@cite_36",
"@cite_46",
"@cite_55",
"@cite_34",
"@cite_56",
"@cite_6",
"@cite_44",
"@cite_45",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_16"
],
"mid": [
"",
"",
"",
"1972478306",
"",
"2483314354",
"",
"",
"",
"",
"",
"2114647523",
"",
"",
"1481784587",
"",
"",
"2154516928"
],
"abstract": [
"",
"",
"",
"This report addresses some of our observations made in a dozen projects in the area of software testing, and more specifically, in automated testing. It documents, analyzes and consolidates what we consider to be of interest to the community. The major findings can be summarized in a number of lessons learned, covering test strategy, testability, daily integration, and best practices. The report starts with a brief description of five sample projects. Then, we discuss our observations and experiences and illustrate them with the sample projects. The report concludes with a synopsis of these experiences and with suggestions for future test automation endeavors.",
"",
"Writing unit test code is labor-intensive, hence it is often not done as an integral part of programming. However, unit testing is a practical approach to increasing the correctness and quality of software; for example, the Extreme Programming approach relies on frequent unit testing. In this paper we present a new approach that makes writing unit tests easier. It uses a formal specification language's runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles. These oracles can be easily combined with hand-written test data. Instead of writing testing code, the programmer writes formal specifications (e.g., pre- and postconditions). This makes the programmer's task easier, because specifications are more concise and abstract than the equivalent test code, and hence more readable and maintainable. Furthermore, by using specifications in testing, specification errors are quickly discovered, so the specifications are more likely to provide useful documentation and inputs to other tools. We have implemented this idea using the Java Modeling Language (JML) and the JUnit testing framework, but the approach could be easily implemented with other combinations of formal specification languages and unit test tools.",
"",
"",
"",
"",
"",
"There is a documented gap between academic and practitioner views on software testing. This paper tries to close the gap by investigating both views regarding the benefits and limits of test automation. The academic views are studied with a systematic literature review while the practitioners' views are assessed with a survey, where we received responses from 115 software professionals. The results of the systematic literature review show that the source of evidence regarding benefits and limitations is quite shallow as only 25 papers provide the evidence. Furthermore, it was found that benefits often originated from stronger sources of evidence (experiments and case studies), while limitations often originated from experience reports. We believe that this is caused by publication bias of positive results. The survey showed that benefits of test automation were related to test reusability, repeatability, test coverage and effort saved in test executions. The limitations were high initial investments in automation setup, tool selection and training. Additionally, 45% of the respondents agreed that available tools in the market offer a poor fit for their needs. Finally, it was found that 80% of the practitioners disagreed with the vision that automated testing would fully replace manual testing.",
"",
"",
"Exploratory testing (ET) - simultaneous learning, test design, and test execution - is an applied practice in industry but lacks research. We present the current knowledge of ET based on existing literature and interviews with seven practitioners in three companies. Our interview data shows that the main reasons for using ET in the companies were the difficulties in designing test cases for complicated functionality and the need for testing from the end user's viewpoint. The perceived benefits of ET include the versatility of testing and the ability to quickly form an overall picture of system quality. We found some support for the claimed high defect detection efficiency of ET. The biggest shortcoming of ET was managing test coverage. Further quantitative research on the efficiency and effectiveness of ET is needed. To help focus ET efforts and help control test coverage, we must study planning, controlling and tracking ET.",
"",
"",
"Since manual black-box testing of GUI-based APplications (GAPs) is tedious and laborious, test engineers create test scripts to automate the testing process. These test scripts interact with GAPs by performing actions on their GUI objects. An extra effort that test engineers put in writing test scripts is paid off when these scripts are run repeatedly. Unfortunately, releasing new versions of GAPs with modified GUIs breaks their corresponding test scripts thereby obliterating benefits of test automation. We offer a novel approach for maintaining and evolving test scripts so that they can test new versions of their respective GAPs. We built a tool to implement our approach, and we conducted a case study with forty five professional programmers and test engineers to evaluate this tool. The results show with strong statistical significance that users find more failures and report fewer false positives (p ≪ 0.02) in test scripts with our tool than with a flagship industry product and a baseline manual approach. Our tool is lightweight and it takes less than eight seconds to analyze approximately 1KLOC of test scripts."
]
} |
1602.01226 | 2950327665 | Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported where interviews about, and empirical work with, Visual GUI Testing is performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance but also that frequent maintenance is less costly than infrequent, big bang maintenance. In addition a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project whilst also having positive effects on software quality. However, maintenance costs can still be considerable and the less time a company currently spends on manual testing, the more time is required before positive, economic, ROI is reached after automation. | The third generation, also referred to as Visual GUI Testing (VGT) @cite_21 @cite_52 @cite_12 , instead uses image recognition that allows VGT tools, e.g. 
Sikuli @cite_3 or JAutomate @cite_24 , to interact with any GUI component shown to the user on the computer monitor. As a consequence, VGT has a high degree of flexibility and can be used on any system regardless of programming language or even platform. Combined with scenario-based scripts, the image recognition allows the user to write testware applications that can emulate human user interaction with the SUT. Previous research has shown that VGT is applicable in practice for the automation of manual system tests @cite_21 @cite_52 @cite_12 . However, only limited information has been acquired regarding the maintenance costs associated with the technique @cite_7 @cite_12 @cite_0 . | {
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_52",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_12"
],
"mid": [
"2050930639",
"2165413788",
"",
"2141125339",
"2033288554",
"114331032",
""
],
"abstract": [
"Visual GUI testing (VGT) is an emerging technique that provides software companies with the capability to automate previously time-consuming, tedious, and fault prone manual system and acceptance tests. Previous work on VGT has shown that the technique is industrially applicable, but has not addressed the real-world applicability of the technique when used by practitioners on industrial grade systems. This paper presents a case study performed during an industrial project with the goal to transition from manual to automated system testing using VGT. Results of the study show that the VGT transition was successful and that VGT could be applied in the industrial context when performed by practitioners but that there were several problems that first had to be solved, e.g. testing of a distributed system, tool volatility. These problems and solutions have been presented together with qualitative, and quantitative, data about the benefits of the technique compared to manual testing, e.g. greatly improved execution speed, feasible transition and maintenance costs, improved bug finding ability. The study thereby provides valuable, and previously missing, contributions about VGT to both practitioners and researchers.",
"Software companies are under continuous pressure to shorten time to market, raise quality and lower costs. More automated system testing could be instrumental in achieving these goals and in recent years testing tools have been developed to automate the interaction with software systems at the GUI level. However, there is a lack of knowledge on the usability and applicability of these tools in an industrial setting. This study evaluates two tools for automated visual GUI testing on a real-world, safety-critical software system developed by the company Saab AB. The tools are compared based on their properties as well as how they support automation of system test cases that have previously been conducted manually. The time to develop and the size of the automated test cases as well as their execution times have been evaluated. Results show that there are only minor differences between the two tools, one commercial and one open-source, but, more importantly, that visual GUI testing is an applicable technology for automated system testing with effort gains over manual system test practices. The study results also indicate that the technology has benefits over alternative GUI testing techniques and that it can be used for automated acceptance testing. However, visual GUI testing still has challenges that must be addressed, in particular the script maintenance costs and how to support robust test execution.",
"",
"We present Sikuli, a visual approach to search and automation of graphical user interfaces using screenshots. Sikuli allows users to take a screenshot of a GUI element (such as a toolbar button, icon, or dialog box) and query a help system using the screenshot instead of the element's name. Sikuli also provides a visual scripting API for automating GUI interactions, using screenshot patterns to direct mouse and keyboard events. We report a web-based user study showing that searching by screenshot is easy to learn and faster to specify than keywords. We also demonstrate several automation tasks suitable for visual scripting, such as map navigation and bus tracking, and show how visual scripting can improve interactive help systems previously proposed in the literature.",
"System- and acceptance-testing are primarily performed with manual practices in current software industry. However, these practices have several issues, e.g. they are tedious, error prone and time consuming with costs up towards 40 percent of the total development cost. Automated test techniques have been proposed as a solution to mitigate these issues, but they generally approach testing from a lower level of system abstraction, leaving a gap for a flexible, high system-level test automation technique tool. In this paper we present JAutomate, a Visual GUI Testing (VGT) tool that fills this gap by combining image recognition with record and replay functionality for high system-level test automation performed through the system under test's graphical user interface. We present the tool, its benefits compared to other similar techniques and manual testing. In addition, we compare JAutomate with two other VGT tools based on their static properties. Finally, we present the results from a survey with industrial practitioners that identifies test-related problems that industry is currently facing and discuss how JAutomate can solve or mitigate these problems.",
"Automation in Web testing has been successfully supported by DOM-based tools that allow testers to program the interactions of their test cases with the Web application under test. More recently a new generation of visual tools has been proposed where a test case interacts with the Web application by recognising the images of the widgets that can be actioned upon and by asserting the expected visual appearance of the result.",
""
]
} |
1602.01410 | 2515281507 | In image deconvolution problems, the diagonalization of the underlying operators by means of the fast Fourier transform (FFT) usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally involve non-diagonalizable operators, resulting in rather slow methods or, otherwise, use inexact convolution models, resulting in the occurrence of artifacts in the enhanced images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and therefore is very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing methods of dealing with the image boundaries, such as edge tapering. It can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, using this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a “partial” ADMM, in which not all variables are dualized. We report experimental comparisons with other primal–dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries. | Methods that use exact models have the potential to completely eliminate the occurrence of artifacts. 
One such method is the one proposed in @cite_46 , which implicitly uses model , although that model is not explicitly mentioned in the paper. That method is limited to the use of quadratic regularizers, which allow a fast implementation but yield relatively low-quality deconvolution results. The method has been extended in @cite_44 , allowing the use of more general regularizers. In the latter form, the method achieves a relatively high speed by limiting the estimation of the boundary zone (an estimation that is time-consuming, in that method's formulation) to a few initial iterations of the optimization procedure. This version of the method improves on the results of @cite_46 due to the use of more appropriate regularizers, but the imperfect estimation of the boundary zone gives rise to artifacts in the deblurred images. Similar approaches for astronomical images can be found in @cite_42 @cite_4 . Another method, proposed in @cite_11 , uses model , and is rather slow due to the use of non-diagonalizable BTTB matrices. | {
"cite_N": [
"@cite_4",
"@cite_42",
"@cite_44",
"@cite_46",
"@cite_11"
],
"mid": [
"",
"2121559065",
"1969556086",
"2100476607",
"2164728625"
],
"abstract": [
"",
"In this paper we propose a solution to the problem of reducing the boundary effects (ripples) in the deconvolution of astronomical images. The approach applies to the Richardson-Lucy method (RLM), namely the most frequently used de- convolution method in Astronomy, and is based on the idea of using RLM for attempting a reconstruction of the astronomical target in a domain broader than that of the detected image. Even if, in general, the reconstruction outside the image domain is not reliable, this approach, in a sense, is letting RLM to choose the appropriate boundary conditions and, as a consequence, the reconstruction inside the domain is considerably improved. We propose a simple implementation of this approach, allowing a reduction of its computational burden. Numerical experiments indicate that it is possible to obtain excellent results. Extensions and applications of the method are briefly discussed.",
"We propose a solution to the problem of boundary artifacts appearing in several recently published fast deblurring algorithms based on iterated shrinkage thresholding in a sparse domain and Fourier domain deconvolution. Our approach adapts an idea proposed by Reeves for deconvolution by the Wiener filter. The time of computation less than doubles.",
"Fast Fourier transform (FFT)-based restorations are fast, but at the expense of assuming that the blurring and deblurring are based on circular convolution. Unfortunately, when the opposite sides of the image do not match up well in intensity, this assumption can create significant artifacts across the image. If the pixels outside the measured image window are modeled as unknown values in the restored image, boundary artifacts are avoided. However, this approach destroys the structure that makes the use of the FFT directly applicable, since the unknown image is no longer the same size as the measured image. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT. We propose a new restoration method for the unknown boundary approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from a modified FFT-based approach. The other restoration involves a set of unknowns whose number equals that of the unknown boundary values. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists.",
"We propose a total variation based model for simultaneous image inpainting and blind deconvolution. We demonstrate that the tasks are inherently coupled together and that solving them individually will lead to poor results. The main advantages of our model are that (i) boundary conditions for deconvolution required near the interface between observed and occluded regions are naturally generated through inpainting; (ii) inpainting results are enhanced through deconvolution (as opposed to inpainting blurry images). As a result, ringing effects due to imposing improper boundary conditions and errors due to imperfection of inpainting blurry images are reduced. Moreover, our model can also be used to generate boundary conditions for regular deconvolution problems that yields better results than previous methods. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 92–102, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20041"
]
} |
1602.01410 | 2515281507 | In image deconvolution problems, the diagonalization of the underlying operators by means of the fast Fourier transform (FFT) usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally involve non-diagonalizable operators, resulting in rather slow methods or, otherwise, use inexact convolution models, resulting in the occurrence of artifacts in the enhanced images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and therefore is very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing methods of dealing with the image boundaries, such as edge tapering. It can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, using this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a “partial” ADMM, in which not all variables are dualized. We report experimental comparisons with other primal–dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries. | The works @cite_37 and @cite_18 propose primal-dual methods that do not involve the inversion of large BTTB matrices. In , we experimentally test one of these methods @cite_37 , and find it to be rather slow. 
Another primal-dual method that considers non-periodic boundaries was proposed in @cite_32 . It needs the structure of the boundaries to be known, which normally is not the case in real-life situations. The use of an artificially chosen structure results, once again, in the occurrence of artifacts. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_32"
],
"mid": [
"1998991750",
"1973665846",
"1969128975"
],
"abstract": [
"We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explicitly without any inversion, while the nonsmooth functions are processed individually via their proximity operators. This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.",
"A wide array of image recovery problems can be abstracted into the problem of minimizing a sum of composite convex functions in a Hilbert space. To solve such problems, primal-dual proximal approaches have been developed which provide efficient solutions to large-scale optimization problems. The objective of this paper is to show that a number of existing algorithms can be derived from a general form of the forward-backward algorithm applied in a suitable product space. Our approach also allows us to develop useful extensions of existing algorithms by introducing a variable metric. An illustration to image restoration is provided.",
"We present primal-dual decomposition algorithms for convex optimization problems with cost functions @math , where @math and @math have inexpensive proximal operators and @math can be decomposed as a sum of two structured matrices. The methods are based on the Douglas--Rachford splitting algorithm applied to various splittings of the primal-dual optimality conditions. We discuss applications to image deblurring problems with nonquadratic data fidelity terms, different types of convex regularization, and simple convex constraints. In these applications, the primal-dual splitting approach allows us to handle general boundary conditions for the blurring operator. Numerical results indicate that the primal-dual splitting methods compare favorably with the alternating direction method of multipliers, the Douglas--Rachford algorithm applied to a reformulated primal problem, and the Chambolle--Pock primal-dual algorithm."
]
} |
1602.01410 | 2515281507 | In image deconvolution problems, the diagonalization of the underlying operators by means of the fast Fourier transform (FFT) usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally involve non-diagonalizable operators, resulting in rather slow methods or, otherwise, use inexact convolution models, resulting in the occurrence of artifacts in the enhanced images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and therefore is very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing methods of dealing with the image boundaries, such as edge tapering. It can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, using this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a “partial” ADMM, in which not all variables are dualized. We report experimental comparisons with other primal–dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries. | In @cite_41 , a method that has some resemblance to our proposed deconvolution framework was introduced, in the context of the solution of systems of linear equations with Toeplitz system matrices. | {
"cite_N": [
"@cite_41"
],
"mid": [
"2029301183"
],
"abstract": [
"This paper explores a seemingly counter-intuitive idea: the possibility of accelerating the solution of certain linear equations by adding even more equations to the problem. The basic insight is to trade-off problem size by problem structure. We test this idea on Toeplitz equations, in which case the expense of a larger set of equations easily leads to circulant structure. The idea leads to a very simple iterative algorithm, which works for a certain class of Toeplitz matrices, each iteration requiring only two circular convolutions. In the symmetric definite case, numerical experiments show that the method can compete with the preconditioned conjugate gradient method (PCG), which achieves O(nlogn) performance. Because the iteration does not converge for all Toeplitz matrices, we give necessary and sufficient conditions to ensure convergence (for not necessarily symmetric matrices), and suggest an efficient convergence test. In the positive definite case we determine the value of the free parameter of the circulant that leads to the fastest convergence, as well as the corresponding value for the spectral radius of the iteration matrix. Although the usefulness of the proposed iteration is limited in the case of ill-conditioned matrices, we believe that the results show that the problem size problem structure trade-off deserves further study."
]
} |
1602.01168 | 2271779885 | Deep Convolutional Neural Networks (CNN) enforces supervised information only at the output layer, and hidden layers are trained by back propagating the prediction error from the output layer without explicit supervision. We propose a supervised feature learning approach, Label Consistent Neural Network, which enforces direct supervision in late hidden layers. We associate each neuron in a hidden layer with a particular class label and encourage it to be activated for input signals from the same class. More specifically, we introduce a label consistency regularization called "discriminative representation error" loss for late hidden layers and combine it with classification error loss to build our overall objective function. This label consistency constraint alleviates the common problem of gradient vanishing and tends to faster convergence; it also makes the features derived from late hidden layers discriminative enough for classification even using a simple @math -NN classifier, since input signals from the same class will have very similar representations. Experimental results demonstrate that our approach achieves state-of-the-art performances on several public benchmarks for action and object category recognition. | CNNs have achieved performance improvements over traditional hand-crafted features in image recognition @cite_17 , detection @cite_31 and retrieval @cite_18 . This is due to the availability of large-scale image datasets @cite_24 and recent technical improvements such as ReLU @cite_38 , drop-out @cite_33 , @math convolution @cite_4 @cite_39 , batch normalization @cite_13 and data augmentation based on random flipping, RGB jittering, contrast normalization @cite_17 @cite_4 , which helps speed up convergence while avoiding overfitting. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_39",
"@cite_24",
"@cite_31",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2102605133",
"",
"1904365287",
"2950179405",
"2108598243",
"",
"2949117887",
""
],
"abstract": [
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"",
"When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.",
""
]
} |
1602.01168 | 2271779885 | Deep Convolutional Neural Networks (CNN) enforces supervised information only at the output layer, and hidden layers are trained by back propagating the prediction error from the output layer without explicit supervision. We propose a supervised feature learning approach, Label Consistent Neural Network, which enforces direct supervision in late hidden layers. We associate each neuron in a hidden layer with a particular class label and encourage it to be activated for input signals from the same class. More specifically, we introduce a label consistency regularization called "discriminative representation error" loss for late hidden layers and combine it with classification error loss to build our overall objective function. This label consistency constraint alleviates the common problem of gradient vanishing and tends to faster convergence; it also makes the features derived from late hidden layers discriminative enough for classification even using a simple @math -NN classifier, since input signals from the same class will have very similar representations. Experimental results demonstrate that our approach achieves state-of-the-art performances on several public benchmarks for action and object category recognition. | AlexNet @cite_17 initiated the dramatic performance improvements of CNN in static image recognition and current state-of-the-art performance has been obtained by deeper and more sophisticated network architectures such as VGGNet @cite_35 and GoogLeNet @cite_39 . Very recently, researchers have applied CNNs to action and event recognition in videos. 
While initial approaches use image-trained CNN models to extract frame-level features and aggregate them into video-level descriptors @cite_22 @cite_0 @cite_43 , more recent work trains CNNs using video data and focuses on effectively incorporating the temporal dimension and learning good spatial-temporal features automatically @cite_40 @cite_5 @cite_25 @cite_19 @cite_42 @cite_28 . Two-stream CNNs @cite_25 are perhaps the most successful architecture for action recognition currently. They consist of a spatial net trained with video frames and a temporal net trained with optical flow fields. With the two streams capturing spatial and temporal information separately, the late fusion of the two produces competitive action recognition results. @cite_19 and @cite_42 have obtained further performance gain by exploring deeper two-stream network architectures and refining technical details; @cite_28 achieved state-of-the-art in action recognition by integrating two-stream CNNs, improved trajectories and Fisher Vector encoding. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_28",
"@cite_42",
"@cite_39",
"@cite_0",
"@cite_43",
"@cite_40",
"@cite_19",
"@cite_5",
"@cite_25",
"@cite_17"
],
"mid": [
"1686810756",
"",
"1944615693",
"",
"2950179405",
"2964227963",
"1950136256",
"1983364832",
"787785461",
"2308045930",
"2952186347",
""
],
"abstract": [
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"",
"Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.",
"",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"",
"In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% for the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% for the TRECVID MEDTest 13 dataset.",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.",
"Deep convolutional networks have achieved great success for object recognition in still images. However, for action recognition in videos, the improvement of deep convolutional networks is not so evident. We argue that there are two reasons that could probably explain this result. First the current network architectures (e.g. Two-stream ConvNets) are relatively shallow compared with those very deep models in image domain (e.g. VGGNet, GoogLeNet), and therefore their modeling capacity is constrained by their depth. Second, probably more importantly, the training dataset of action recognition is extremely small compared with the ImageNet dataset, and thus it will be easy to over-fit on the training dataset. To address these issues, this report presents very deep two-stream ConvNets for action recognition, by adapting recent very deep architectures into video domain. However, this extension is not easy as the size of action recognition is quite small. We design several good practices for the training of very deep two-stream ConvNets, namely (i) pre-training for both spatial and temporal nets, (ii) smaller learning rates, (iii) more data augmentation techniques, (iv) high drop out ratio. Meanwhile, we extend the Caffe toolbox into Multi-GPU implementation with high computational efficiency and low memory consumption. We verify the performance of very deep two-stream ConvNets on the dataset of UCF101 and it achieves the recognition accuracy of @math .",
"",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
""
]
} |
1602.01595 | 2344508595 | We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser's performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training. | Our work builds on the model transfer approach, which was pioneered by who trained a parser on a source language treebank then applied it to parse sentences in a target language. and trained unlexicalized parsers on treebanks of multiple source languages and applied the parser to different languages. , , and used language typology to improve model transfer. To add lexical information, used multilingual word clusters, while , , and used multilingual word embeddings. used a neural network based model, sharing most of the parameters between two languages, and used an @math regularizer to tie the lexical embeddings of translationally-equivalent words. We incorporate these ideas in our framework, while proposing a novel neural architecture for embedding language typology (see ), and use a variant of word dropout @cite_6 for consuming noisy structured inputs. We also show how to replace an array of monolingually trained parsers with one multilingually-trained parser without sacrificing accuracy, which is related to . | {
"cite_N": [
"@cite_6"
],
"mid": [
"2250473257"
],
"abstract": [
"Many existing deep learning models for natural language processing tasks focus on learning the compositionality of their inputs, which requires many expensive computations. We present a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time. While our model is syntactically-ignorant, we show significant improvements over previous bag-of-words models by deepening our network and applying a novel variant of dropout. Moreover, our model performs better than syntactic models on datasets with high syntactic variance. We show that our model makes similar errors to syntactically-aware models, indicating that for the tasks we consider, nonlinearly transforming the input is more important than tailoring a network to incorporate word order and syntax."
]
} |
1602.01595 | 2344508595 | We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser's performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training. | Another popular approach for cross-lingual supervision is to project annotations from the source language to the target language via a parallel corpus @cite_23 @cite_19 or via automatically-translated sentences @cite_28 . used entropy regularization to learn from both parallel data (with projected annotations) and unlabeled data in the target language. trained an array of target-language parsers on fully annotated trees, by iteratively decoding sentences in the target language with incomplete annotations. One research direction worth pursuing is to find synergies between the model transfer approach and annotation projection approach. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_23"
],
"mid": [
"2144571677",
"2143954309",
"2016630033"
],
"abstract": [
"The present paper describes an approach to adapting a parser to a new language. Presumably the target language is much poorer in linguistic resources than the source language. The technique has been tested on two European languages due to test data availability; however, it is easily applicable to any pair of sufficiently related languages, including some of the Indic language group. Our adaptation technique using existing annotations in the source language achieves performance equivalent to that obtained by training on 1546 trees in the target language.",
"Broad coverage, high quality parsers are available for only a handful of languages. A prerequisite for developing broad coverage parsers for more languages is the annotation of text with the desired linguistic representations (also known as “treebanking”). However, syntactic annotation is a labor intensive and time-consuming process, and it is difficult to find linguistically annotated text in sufficient quantities. In this article, we explore using parallel text to help solving the problem of creating syntactic annotation in more languages. The central idea is to annotate the English side of a parallel corpus, project the analysis to the second language, and then train a stochastic analyzer on the resulting noisy annotations. We discuss our background assumptions, describe an initial study on the “projectability” of syntactic relations, and then present two experiments in which stochastic parsers are developed with minimal human intervention via projection from English.",
"This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish. Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections. Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96% core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91% F-measure. The induced morphological analyzer achieves over 99% lemmatization accuracy on the complete French verbal system. This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection."
]
} |
1602.01599 | 2951809175 | We present a comparative evaluation of various techniques for action recognition while keeping as many variables as possible controlled. We employ two categories of Riemannian manifolds: symmetric positive definite matrices and linear subspaces. For both categories we use their corresponding nearest neighbour classifiers, kernels, and recent kernelised sparse representations. We compare against traditional action recognition techniques based on Gaussian mixture models and Fisher vectors (FVs). We evaluate these action recognition techniques under ideal conditions, as well as their sensitivity in more challenging conditions (variations in scale and translation). Despite recent advancements for handling manifolds, manifold based techniques obtain the lowest performance and their kernel representations are more unstable in the presence of challenging conditions. The FV approach obtains the highest accuracy under ideal conditions. Moreover, FV best deals with moderate scale and translation changes. | Grassmann manifolds, which are special cases of Riemannian manifolds, represent a set of @math -dimensional linear subspaces and have also been investigated for the action recognition problem @cite_21 @cite_50 @cite_42 @cite_47 . The straightforward way to deal with Riemannian manifolds is via the nearest-neighbour (NN) scheme. For SPD matrices, NN classification using the log-Euclidean metric for covariance matrices is employed in @cite_10 @cite_52 . Canonical or principal angles are used as a metric to measure similarity between two LS and have been employed in conjunction with NN in @cite_10 . | {
"cite_N": [
"@cite_21",
"@cite_42",
"@cite_52",
"@cite_50",
"@cite_47",
"@cite_10"
],
"mid": [
"2161299605",
"2050728860",
"2019245255",
"1997025120",
"2106670658",
"2059378705"
],
"abstract": [
"Action videos are multidimensional data and can be naturally represented as data tensors. While tensor computing is widely used in computer vision, the geometry of tensor space is often ignored. The aim of this paper is to demonstrate the importance of the intrinsic geometry of tensor space which yields a very discriminating structure for action recognition. We characterize data tensors as points on a product manifold and model it statistically using least squares regression. To this aim, we factorize a data tensor relating to each order of the tensor using Higher Order Singular Value Decomposition (HOSVD) and then impose each factorized element on a Grassmann manifold. Furthermore, we account for underlying geometry on manifolds and formulate least squares regression as a composite function. This gives a natural extension from Euclidean space to manifolds. Consequently, classification is performed using geodesic distance on a product manifold where each factor manifold is Grassmannian. Our method exploits appearance and motion without explicitly modeling the shapes and dynamics. We assess the proposed method using three gesture databases, namely the Cambridge hand-gesture, the UMD Keck body-gesture, and the CHALEARN gesture challenge data sets. Experimental results reveal that not only does the proposed method perform well on the standard benchmark data sets, but also it generalizes well on the one-shot-learning gesture challenge. Furthermore, it is based on a simple statistical model and the intrinsic geometry of tensor space.",
"Common human actions are instantly recognizable by people and increasingly machines need to understand this language if they are to engage smoothly with people. Here we introduce a new method for automated human action recognition. The proposed method represents videos as a tangent bundle on a Grassmann manifold. Videos are expressed as third order tensors and factorized to a set of tangent spaces. Tangent vectors are then computed between elements on a Grassmann manifold and exploited for action classification. In particular, logarithmic mapping is applied to map a point from the manifold to tangent vectors centered at a given element. The canonical metric is used to induce the intrinsic distance for a set of tangent spaces. Empirical results show that our method is effective on both uniform and non-uniform backgrounds for action classification. We achieve recognition rates of 91% on the Cambridge gesture dataset, 88% on the UCF sport dataset, and 97% on the KTH human action dataset. Additionally, our method does not require prior training.",
"We propose a general framework for fast and accurate recognition of actions in video using empirical covariance matrices of features. A dense set of spatio-temporal feature vectors are computed from video to provide a localized description of the action, and subsequently aggregated in an empirical covariance matrix to compactly represent the action. Two supervised learning methods for action recognition are developed using feature covariance matrices. Common to both methods is the transformation of the classification problem in the closed convex cone of covariance matrices into an equivalent problem in the vector space of symmetric matrices via the matrix logarithm. The first method applies nearest-neighbor classification using a suitable Riemannian metric for covariance matrices. The second method approximates the logarithm of a query covariance matrix by a sparse linear combination of the logarithms of training covariance matrices. The action label is then determined from the sparse coefficients. Both methods achieve state-of-the-art classification performance on several datasets, and are robust to action variability, viewpoint changes, and low object resolution. The proposed framework is conceptually simple and has low storage and computational requirements making it attractive for real-time implementation.",
"Increasingly, machines are interacting with people through human action recognition from video streams. Video data can naturally be represented as a third-order data tensor. Although many tensor-based approaches have been proposed for action recognition, the geometry of the tensor space is seldom regarded as an important aspect. In this paper, we stress that a data tensor is related to a tangent bundle on a special manifold. Using a manifold charting, we can extract discriminating information between actions. Data tensors are first factorized using high-order singular value decomposition, where each factor is projected onto a tangent space and the intrinsic distance is computed from a tangent bundle for action classification. We examine a standard manifold charting and some alternative chartings on special manifolds, particularly, the special orthogonal group, Stiefel manifolds, and Grassmann manifolds. Because the proposed paradigm frames the classification scheme as a nearest neighbor based on the intrinsic distance, prior training is unnecessary. We evaluate our method on three public action databases including the Cambridge gesture, the UMD Keck body gesture, and the UCF sport datasets. The empirical results reveal that our method is highly competitive with the current state-of-the-art methods, robust to small alignment errors, and yet simpler.",
"We present a novel structure, called a Subspace Forest, designed to provide an efficient approximate nearest neighbor query of subspaces represented as points on Grassmann manifolds. We apply this structure to action recognition by representing actions as subspaces spanning a sequence of thumbnail image tiles extracted from a tracked entity. The Subspace Forest lifts the concept of randomized decision forests from classifying vectors to classifying subspaces, and employs a splitting method that respects the underlying manifold geometry. The Subspace Forest is an inherently parallel structure and is highly scalable due to O(log N) recognition time complexity. Our experimental results demonstrate state-of-the-art classification accuracies on the well-known KTH Actions and UCF Sports benchmarks, and a competitive score on Cambridge Gestures. In addition to being both highly accurate and scalable, the Subspace Forest is built without supervision and requires no extensive validation stage for model selection. Conceptually, the Subspace Forest could be used anywhere set-to-set feature matching is desired.",
"Nearest-neighbor searching is a crucial component in many computer vision applications such as face recognition, object recognition, texture classification, and activity recognition. When large databases are involved in these applications, it is also important to perform these searches in a fast manner. Depending on the problem at hand, nearest neighbor strategies need to be devised over feature and model spaces which in many cases are not Euclidean in nature. Thus, metrics that are tuned to the geometry of this space are required which are also known as geodesics. In this paper, we address the problem of fast nearest neighbor searching in non-Euclidean spaces, where in addition to dealing with the large size of the dataset, the significant computational load involves geodesic computations. We study the applicability of the various classes of nearest neighbor algorithms toward this end. Exact nearest neighbor methods that rely solely on the existence of a metric can be extended, albeit with a huge computational cost. We derive an approximate method of searching via approximate embeddings using the logarithmic map. We study the error incurred in such an embedding and show that it performs well in real experiments."
]
} |
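The matrix-logarithm mapping used by the covariance-based methods above (flattening SPD matrices into the vector space of symmetric matrices, then classifying by nearest neighbour under the induced metric) can be sketched as follows. This is a generic illustration, not the exact pipeline of any cited paper; the function names and toy labels are assumptions:

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of a symmetric positive definite matrix,
    # computed via its eigendecomposition (real-valued for SPD input).
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(A, B):
    # Map both SPD matrices to the vector space of symmetric matrices,
    # then compare with the Frobenius norm (the log-Euclidean metric).
    return np.linalg.norm(spd_log(A) - spd_log(B))

def nearest_neighbour_label(query, train, labels):
    # 1-NN classification of covariance descriptors under the metric above.
    dists = [log_euclidean_dist(query, M) for M in train]
    return labels[int(np.argmin(dists))]
```

For example, with training covariances `2I` and `I` labelled `"run"` and `"walk"`, a query near `2I` is assigned `"run"`.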
1602.01599 | 2951809175 | We present a comparative evaluation of various techniques for action recognition while keeping as many variables as possible controlled. We employ two categories of Riemannian manifolds: symmetric positive definite matrices and linear subspaces. For both categories we use their corresponding nearest neighbour classifiers, kernels, and recent kernelised sparse representations. We compare against traditional action recognition techniques based on Gaussian mixture models and Fisher vectors (FVs). We evaluate these action recognition techniques under ideal conditions, as well as their sensitivity in more challenging conditions (variations in scale and translation). Despite recent advancements for handling manifolds, manifold based techniques obtain the lowest performance and their kernel representations are more unstable in the presence of challenging conditions. The FV approach obtains the highest accuracy under ideal conditions. Moreover, FV best deals with moderate scale and translation changes. | Manifolds can also be mapped to a reproducing kernel Hilbert space (RKHS) by using kernels. Kernel analysis on SPD matrices and LS has been used for gesture and action recognition in @cite_40 @cite_0 @cite_9 . SPD matrices are embedded into RKHS via a pseudo kernel in @cite_40 . With this pseudo kernel it is possible to formulate locality preserving projections over SPD matrices. Positive definite radial kernels are used to solve the action recognition problem in @cite_0 , where an optimisation algorithm is employed to select the best kernel among the class of positive definite radial kernels on the manifold. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_40"
],
"mid": [
"1975951726",
"",
"2030605635"
],
"abstract": [
"We tackle the problem of optimizing over all possible positive definite radial kernels on Riemannian manifolds for classification. Kernel methods on Riemannian manifolds have recently become increasingly popular in computer vision. However, the number of known positive definite kernels on manifolds remain very limited. Furthermore, most kernels typically depend on at least one parameter that needs to be tuned for the problem at hand. A poor choice of kernel, or of parameter value, may yield significant performance drop-off. Here, we show that positive definite radial kernels on the unit n-sphere, the Grassmann manifold and Kendall's shape manifold can be expressed in a simple form whose parameters can be automatically optimized within a support vector machine framework. We demonstrate the benefits of our kernel learning algorithm on object, face, action and shape recognition.",
"",
"A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances between points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy."
]
} |
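One common way to realise such an RKHS embedding of SPD matrices is a Gaussian-type kernel built on the log-Euclidean distance. The sketch below shows this generic construction; it is not necessarily the exact pseudo kernel of @cite_40, and the function names are assumptions:

```python
import numpy as np

def spd_log(M):
    # Eigendecomposition-based matrix logarithm for an SPD matrix.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_rbf(A, B, gamma=1.0):
    # Gaussian-type kernel on SPD matrices: an RBF on the squared
    # log-Euclidean distance. The resulting Gram matrix can be fed to
    # any standard kernel machine (SVM, kernel LPP, ...).
    d2 = np.sum((spd_log(A) - spd_log(B)) ** 2)
    return np.exp(-gamma * d2)
```

As with any kernel, `log_euclidean_rbf(A, A) == 1` and the kernel is symmetric in its arguments.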
1602.01599 | 2951809175 | We present a comparative evaluation of various techniques for action recognition while keeping as many variables as possible controlled. We employ two categories of Riemannian manifolds: symmetric positive definite matrices and linear subspaces. For both categories we use their corresponding nearest neighbour classifiers, kernels, and recent kernelised sparse representations. We compare against traditional action recognition techniques based on Gaussian mixture models and Fisher vectors (FVs). We evaluate these action recognition techniques under ideal conditions, as well as their sensitivity in more challenging conditions (variations in scale and translation). Despite recent advancements for handling manifolds, manifold based techniques obtain the lowest performance and their kernel representations are more unstable in the presence of challenging conditions. The FV approach obtains the highest accuracy under ideal conditions. Moreover, FV best deals with moderate scale and translation changes. | Recently, the traditional sparse representation (SR) on vectors has been generalised to sparse representations in SPD matrices and LS @cite_11 @cite_43 @cite_48 @cite_12 . While the objective of SR is to find a representation that efficiently approximates elements of a signal class with as few atoms as possible, for the Riemannian SR, any given point can be represented as a sparse combination of dictionary elements @cite_43 @cite_48 . In @cite_48 , LS are embedded into the space via isometric mapping, which leads to a closed-form solution for updating an LS representation, atom by atom. Moreover, @cite_48 presents a kernelised version of the dictionary learning algorithm to deal with non-linearity in data. @cite_43 outlines the sparse coding and dictionary learning problem for SPD matrices. To this end, SPD matrices are embedded into the RKHS to perform sparse coding. | {
"cite_N": [
"@cite_43",
"@cite_48",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2950254505",
"1272371277",
"2141696504"
],
"abstract": [
"",
"Recent advances in computer vision and machine learning suggest that a wide range of problems can be addressed more appropriately by considering non-Euclidean geometry. In this paper we explore sparse dictionary learning over the space of linear subspaces, which form Riemannian structures known as Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into the space of symmetric matrices by an isometric mapping, which enables us to devise a closed-form solution for updating a Grassmann dictionary, atom by atom. Furthermore, to handle non-linearity in data, we propose a kernelised version of the dictionary learning algorithm. Experiments on several classification tasks (face recognition, action recognition, dynamic texture classification) show that the proposed approach achieves considerable improvements in discrimination accuracy, in comparison to state-of-the-art methods such as kernelised Affine Hull Method and graph-embedding Grassmann discriminant analysis.",
"Low-rank representation (LRR) has recently attracted great interest due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. One of its successful applications is subspace clustering which means data are clustered according to the subspaces they belong to. In this paper, at a higher level, we intend to cluster subspaces into classes of subspaces. This is naturally described as a clustering problem on Grassmann manifold. The novelty of this paper is to generalize LRR on Euclidean space into the LRR model on Grassmann manifold. The new method has many applications in computer vision tasks. The paper conducts the experiments over two real world examples, clustering handwritten digits and clustering dynamic textures. The experiments show the proposed method outperforms a number of existing methods.",
"A novel framework for action recognition in video using empirical covariance matrices of bags of low-dimensional feature vectors is developed. The feature vectors are extracted from segments of silhouette tunnels of moving objects and coarsely capture their shapes. The matrix logarithm is used to map the segment covariance matrices, which live in a nonlinear Riemannian manifold, to the vector space of symmetric matrices. A recently developed sparse linear representation framework for dictionary-based classification is then applied to the log-covariance matrices. The log-covariance matrix of a query segment is approximated by a sparse linear combination of the log-covariance matrices of training segments and the sparse coefficients are used to determine the action label of the query segment. This approach is tested on the Weizmann and the UT-Tower human action datasets. The new approach attains a segment-level classification rate of 96.74 for the Weizmann dataset and 96.15 for the UT-Tower dataset. Additionally, the proposed method is computationally and memory efficient and easy to implement."
]
} |
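Once the manifold points are embedded in a vector space (e.g. as vectorised log-covariance matrices), the core of these sparse-coding schemes reduces to approximating a query as a sparse combination of dictionary atoms. A minimal greedy sketch via orthogonal matching pursuit follows; the function name and the toy dictionary in the test are illustrative, not from any cited method:

```python
import numpy as np

def omp(D, y, n_nonzero=2):
    # Greedy orthogonal matching pursuit: pick the atom most correlated
    # with the current residual, refit all selected atoms jointly by
    # least squares, and repeat for a fixed sparsity budget.
    residual, idx = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef  # sparse coefficient vector over the dictionary
    return x
```

In a classification setting, the sparse coefficients `x` (or the per-class reconstruction errors they induce) would then determine the action label.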
1602.01599 | 2951809175 | We present a comparative evaluation of various techniques for action recognition while keeping as many variables as possible controlled. We employ two categories of Riemannian manifolds: symmetric positive definite matrices and linear subspaces. For both categories we use their corresponding nearest neighbour classifiers, kernels, and recent kernelised sparse representations. We compare against traditional action recognition techniques based on Gaussian mixture models and Fisher vectors (FVs). We evaluate these action recognition techniques under ideal conditions, as well as their sensitivity in more challenging conditions (variations in scale and translation). Despite recent advancements for handling manifolds, manifold based techniques obtain the lowest performance and their kernel representations are more unstable in the presence of challenging conditions. The FV approach obtains the highest accuracy under ideal conditions. Moreover, FV best deals with moderate scale and translation changes. | Recently, the FV approach has been successfully applied to the action recognition problem @cite_31 @cite_51 @cite_23 . This approach can be thought of as an evolution of the BoF representation, encoding additional information @cite_41 @cite_23 . Rather than encoding the frequency of the descriptors, as for BoF, FV encodes the deviations from a probabilistic version of the visual dictionary. This is done by computing the gradient of the sample log-likelihood with respect to the parameters of the dictionary model. Since more information is extracted, a smaller visual dictionary than for BoF can be used to achieve the same or better performance. | {
"cite_N": [
"@cite_41",
"@cite_31",
"@cite_51",
"@cite_23"
],
"mid": [
"1901041506",
"2283723945",
"2131042978",
"2105101328"
],
"abstract": [
"The Fisher Vector (FV) representation of images can be seen as an extension of the popular bag-of-visual word (BOV). Both of them are based on an intermediate representation, the visual vocabulary built in the low level feature space. If a probability density function (in our case a Gaussian Mixture Model) is used to model the visual vocabulary, we can compute the gradient of the log likelihood with respect to the parameters of the model to represent an image. The Fisher Vector is the concatenation of these partial derivatives and describes in which direction the parameters of the model should be modified to best fit the data. This representation has the advantage to give similar or even better classification performance than BOV obtained with supervised visual vocabularies, being at the same time class independent. This latter property allows its usage both in supervised (categorization, semantic image segmentation) and unsupervised tasks (clustering, retrieval). In this paper we will show how it was successfully applied to these problems achieving state-of-the-art performances.",
"We propose a hierarchical approach to multi-action recognition that performs joint classification and segmentation. A given video containing several consecutive actions is processed via a sequence of overlapping temporal windows. Each frame in a temporal window is represented through selective low-level spatio-temporal features which efficiently capture relevant local dynamics. Features from each window are represented as a Fisher vector, which captures first and second order statistics. Instead of directly classifying each Fisher vector, it is converted into a vector of class probabilities. The final classification decision for each frame is then obtained by integrating the class probabilities at the frame level, which exploits the overlapping of the temporal windows. Experiments were performed on two datasets: s-KTH, a stitched version of the KTH dataset to simulate multi-actions, and the challenging CMU-MMAC dataset. On s-KTH, the proposed approach achieves an accuracy of 85.0%, significantly outperforming two recent approaches based on GMMs and HMMs which obtained 78.3% and 71.2%, respectively. On CMU-MMAC, the proposed approach achieves an accuracy of 40.9%, outperforming the GMM and HMM approaches which obtained 33.7% and 38.4%, respectively. Furthermore, the proposed system is on average 40 times faster than the GMM based approach.",
"Action recognition in uncontrolled video is an important and challenging computer vision problem. Recent progress in this area is due to new local features and models that capture spatio-temporal structure between local features, or human-object interactions. Instead of working towards more complex models, we focus on the low-level features and their encoding. We evaluate the use of Fisher vectors as an alternative to bag-of-word histograms to aggregate a small set of state-of-the-art low-level descriptors, in combination with linear classifiers. We present a large and varied set of evaluations, considering (i) classification of short actions in five datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that for basic action recognition and localization MBH features alone are enough for state-of-the-art performance. For complex events we find that SIFT and MFCC features provide complementary cues. On all three problems we obtain state-of-the-art results, while using fewer features and less complex models.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art."
]
} |
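The FV construction described above — gradients of the sample log-likelihood with respect to the parameters of a probabilistic visual dictionary — can be sketched for the mean parameters of a diagonal-covariance GMM. This is a simplified illustration (the full encoding also includes weight and variance gradients, and typically power/L2 normalisation); the function name and toy data are assumptions:

```python
import numpy as np

def fisher_vector(X, weights, means, sigmas):
    # First-order Fisher vector: gradient of the average log-likelihood
    # of descriptors X (N x D) w.r.t. the means of a K-component
    # diagonal-covariance GMM (weights: K, means/sigmas: K x D).
    N, D = X.shape
    K = len(weights)
    # Posterior responsibilities gamma[n, k], computed in log space.
    log_p = np.stack([
        -0.5 * np.sum(((X - means[k]) / sigmas[k]) ** 2, axis=1)
        - np.sum(np.log(sigmas[k])) + np.log(weights[k])
        for k in range(K)], axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    fv = []
    for k in range(K):
        diff = (X - means[k]) / sigmas[k]
        fv.append((gamma[:, k:k + 1] * diff).sum(axis=0)
                  / (N * np.sqrt(weights[k])))
    return np.concatenate(fv)  # length K * D
```

Descriptors drawn exactly at the component means yield a near-zero encoding, since no deviation from the dictionary model needs to be explained.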
1602.01125 | 2261888986 | In this paper we explore the problem of fitting a 3D morphable model to single face images using only sparse geometric features (edges and landmark points). Previous approaches to this problem are based on nonlinear optimisation of an edge-derived cost that can be viewed as forming soft correspondences between model and image edges. We propose a novel approach that explicitly computes hard correspondences. The resulting objective function is non-convex but we show that a good initialisation can be obtained efficiently using alternating linear least squares in a manner similar to the iterated closest point algorithm. We present experimental results on both synthetic and real images and show that our approach outperforms methods that use soft correspondence and other recent methods that rely solely on geometric features. | Landmark fitting: 2D landmarks have long been used as a way to initialize a morphable model fit @cite_24 . Breuer et al. @cite_10 obtained this initialisation using a landmark detector, providing a fully automatic system. More recently, landmarks have been shown to be sufficient for obtaining useful shape estimates in their own right @cite_28 . Furthermore, noisily detected landmarks can be filtered using a model @cite_11 and automatic landmark detection can be integrated into a fitting algorithm @cite_0 . In a similar manner to landmarks, local features can be used to aid the fitting process @cite_16 . | {
"cite_N": [
"@cite_28",
"@cite_24",
"@cite_0",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2003706019",
"2237250383",
"236111108",
"1487748529",
"2156119076",
"2142848890"
],
"abstract": [
"In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database.",
"In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.",
"We present a novel probabilistic approach for fitting a statistical model to an image. A 3D Morphable Model (3DMM) of faces is interpreted as a generative (Top-Down) Bayesian model. Random Forests are used as noisy detectors (Bottom-Up) for the face and facial landmark positions. The Top-Down and Bottom-Up parts are then combined using a Data-Driven Markov Chain Monte Carlo Method (DDMCMC). As core of the integration, we use the Metropolis-Hastings algorithm which has two main advantages. First, the algorithm can handle unreliable detections and therefore does not need the detectors to take an early and possible wrong hard decision before fitting. Second, it is open for integration of various cues to guide the fitting process. Based on the proposed approach, we implemented a completely automatic, pose and illumination invariant face recognition application. We are able to train and test the building blocks of our application on different databases. The system is evaluated on the Multi-PIE database and reaches state of the art performance.",
"In this paper, we propose a novel fitting method that uses local image features to fit a 3D Morphable Face Model to 2D images. To overcome the obstacle of optimising a cost function that contains a non-differentiable feature extraction operator, we use a learning-based cascaded regression method that learns the gradient direction from data. The method allows to simultaneously solve for shape and pose parameters. Our method is thoroughly evaluated on Morphable Model generated data and first results on real data are presented. Compared to traditional fitting methods, which use simple raw features like pixel colour or edge maps, local features have been shown to be much more robust against variations in imaging conditions. Our approach is unique in that we are the first to use local features to fit a 3D Morphable Model. Because of the speed of our method, it is applicable for real-time applications. Our cascaded regression framework is available as an open source library at github.com patrikhuber superviseddescent.",
"This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. To make the algorithm robust with respect to head orientation, this process is iterated while the estimate of pose is refined. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high resolution 3D surface model.",
"Fitting statistical 2D and 3D shape models to images is necessary for a variety of tasks, such as video editing and face recognition. Much progress has been made on local fitting from an initial guess, but determining a close enough initial guess is still an open problem. One approach is to detect distinct landmarks in the image and initalize the model fit from these correspondences. This is difficult, because detection of landmarks based only on the local appearance is inherently ambiguous. This makes it necessary to use global shape information for the detections. We propose a method to solve the combinatorial problem of selecting out of a large number of candidate landmark detections the configuration which is best supported by a shape model. Our method, as opposed to previous approaches, always finds the globally optimal configuration. The algorithm can be applied to a very general class of shape models and is independent of the underlying feature point detector. Its theoretic optimality is shown, and it is evaluated on a large face dataset."
]
} |
1602.01125 | 2261888986 | In this paper we explore the problem of fitting a 3D morphable model to single face images using only sparse geometric features (edges and landmark points). Previous approaches to this problem are based on nonlinear optimisation of an edge-derived cost that can be viewed as forming soft correspondences between model and image edges. We propose a novel approach that explicitly computes hard correspondences. The resulting objective function is non-convex but we show that a good initialisation can be obtained efficiently using alternating linear least squares in a manner similar to the iterated closest point algorithm. We present experimental results on both synthetic and real images and show that our approach outperforms methods that use soft correspondence and other recent methods that rely solely on geometric features. | Edge fitting: An early example of using image edges for face model fitting is the Active Shape Model (ASM) @cite_20 where a 2D boundary model is aligned to image edges. In 3D, contours have been used directly for 3D face shape estimation @cite_26 and indirectly as a feature for fitting a 3DMM. The earliest work in this direction was due to Moghaddam et al. @cite_4 , who fitted a 3DMM to silhouettes extracted from multiple views. From a theoretical standpoint, Lüthi et al. @cite_15 explored to what degree face shape is constrained when contours are fixed. | {
"cite_N": [
"@cite_15",
"@cite_26",
"@cite_4",
"@cite_20"
],
"mid": [
"1711351391",
"1526895196",
"2141661257",
"2038952578"
],
"abstract": [
"Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis task, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility, when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.",
"This paper presents a novel method for estimating the three-dimensional shape of faces, facilitating the possibility of enhanced face recognition. The method involves a combined use of photometric stereo and profile view information. It can be divided into three principal stages: (1) An initial estimate of the face is obtained using four-source high-speed photometric stereo. (2) The profile is determined from a side-view camera. (3) The facial shape estimation is iteratively refined using the profile until an energy functional is minimised. This final stage, which is the most important contribution of the paper, works by continually deforming the shape estimate so that its profile is exact. An energy is then calculated based on the difference between the raw images and synthetic images generated using the new shape estimate. The surface normals are then adjusted according to energy until convergence. Several real face reconstructions are presented and compared to ground truth. The results clearly demonstrate a significant improvement in accuracy compared to standard photometric stereo.",
"We present a method for 3D face acquisition using a set or sequence of 2D binary silhouettes. Since silhouette images depend only on the shape and pose of an object, they are immune to lighting and or texture variations (unlike feature or texture-based shape-from-correspondence). Our prior 3D face model is a linear combination of \"eigenheads\" obtained by applying PCA to a training set of laser-scanned 3D faces. These shape coefficients are the parameters for a near-automatic system for capturing the 3D shape as well as the 2D texture-map of a novel input face. Specifically, we use back-projection and a boundary-weighted XOR-based cost function for binary silhouette matching, coupled with a probabilistic \"downhill-simplex\" optimization for shape estimation and refinement. Experiments with a multicamera rig as well as monocular video sequences demonstrate the advantages of our 3D modeling framework and ultimately, its utility for robust face recognition with built-in invariance to pose and illumination.",
"Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images."
]
} |
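The boundary-weighted XOR silhouette cost mentioned in the first abstract above can be sketched in a few lines. This is a minimal illustration assuming numpy/scipy and synthetic square silhouettes; the weighting used here (distance from the observed silhouette boundary) is one plausible choice, not necessarily the paper's exact scheme:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def silhouette_cost(observed, rendered):
    """Boundary-weighted XOR cost between two binary silhouettes.

    Each mismatched pixel is weighted by how far it lies from the
    observed silhouette boundary (an illustrative weighting; the
    paper's exact scheme may differ).
    """
    mismatch = observed ^ rendered
    # At foreground pixels this is the distance to the background,
    # at background pixels the distance to the foreground, i.e. the
    # distance from the observed silhouette boundary.
    boundary_dist = np.maximum(distance_transform_edt(observed),
                               distance_transform_edt(~observed))
    return float(np.sum(mismatch * boundary_dist))

obs = np.zeros((64, 64), dtype=bool)
obs[16:48, 16:48] = True              # observed square silhouette
ren = np.zeros((64, 64), dtype=bool)
ren[16:48, 20:52] = True              # rendered silhouette, shifted right

print(silhouette_cost(obs, obs))      # perfect overlap -> 0.0
print(silhouette_cost(obs, ren) > 0)  # mismatch is penalised
```

Minimising such a cost over shape coefficients (e.g. with downhill simplex, as in the abstract) drives the rendered silhouette toward the observed one.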
1602.01125 | 2261888986 | In this paper we explore the problem of fitting a 3D morphable model to single face images using only sparse geometric features (edges and landmark points). Previous approaches to this problem are based on nonlinear optimisation of an edge-derived cost that can be viewed as forming soft correspondences between model and image edges. We propose a novel approach that explicitly computes hard correspondences. The resulting objective function is non-convex but we show that a good initialisation can be obtained efficiently using alternating linear least squares in a manner similar to the iterated closest point algorithm. We present experimental results on both synthetic and real images and show that our approach outperforms methods that use soft correspondence and other recent methods that rely solely on geometric features. | Romdhani et al. @cite_32 include an edge distance cost as part of a hybrid energy function. Texture and outer (silhouette) contours are used in a similar way to LM-ICP @cite_25 where correspondence between image edges and model contours is "soft". This is achieved by applying a distance transform to an edge image. This provides a smoothly varying cost surface whose value at a pixel indicates the distance (and its gradient, the direction) to the closest edge. This idea was extended by Amberg et al. @cite_8 who use it in a multi-view setting and smooth the edge distance cost by averaging results with different parameters. In this way, the cost surface also encodes the saliency of an edge. Keller et al. @cite_18 showed that such approaches lead to a cost function that is neither continuous nor differentiable. This suggests the optimisation method must be carefully chosen. | {
"cite_N": [
"@cite_18",
"@cite_25",
"@cite_32",
"@cite_8"
],
"mid": [
"1491962147",
"2004312117",
"2155211928",
"2156736396"
],
"abstract": [
"In this paper we take a fresh look at the problem of extracting shape from contours of human faces. We focus on two key questions: how can we robustly fit a 3D face model to a given input contour; and, how much information about shape does a single contour image convey. Our system matches silhouettes and inner contours of a PCA based Morphable Model to an input contour image. We discuss different types of contours in terms of their effect on the continuity and differentiability of related error functions and justify our choices of error function (modified Euclidean Distance Transform) and optimization algorithm (Downhill Simplex). In a synthetic test setting we explore the limits of accuracy when recovering shape and pose from a single correct input contour and find that pose is much better captured by contours than is shape. In a semi-synthetic test setting - the input images are edges extracted from photorealistic renderings of the PCA model - we investigate the robustness of our method and argue that not all discrepancies between edges and contours can be solved by the fitting process alone.",
"This paper introduces a new method of registering point sets. The registration error is directly minimized using general-purpose non-linear optimization (the Levenberg–Marquardt algorithm). The surprising conclusion of the paper is that this technique is comparable in speed to the special-purpose Iterated Closest Point algorithm, which is most commonly used for this task. Because the routine directly minimizes an energy function, it is easy to extend it to incorporate robust estimation via a Huber kernel, yielding a basin of convergence that is many times wider than existing techniques. Finally, we introduce a data structure for the minimization based on the chamfer distance transform, which yields an algorithm that is both faster and more robust than previously described methods.",
"We present a novel algorithm aiming to estimate the 3D shape, the texture of a human face, along with the 3D pose and the light direction from a single photograph by recovering the parameters of a 3D morphable model. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the pixel intensity as input to drive the estimation process. This was previously achieved using either a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm. Alternatively, this problem was addressed using a more precise model and minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm. However, the convergence properties (such as the radius of convergence) of such algorithms are limited. Here, as well as the pixel intensity, we use various image features such as the edges or the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the multi-features fitting algorithm that has a wider radius of convergence and a higher level of precision. This is shown on some example photographs, and on a recognition experiment performed on the CMU-PIE image database.",
"We present a novel model based stereo system, which accurately extracts the 3D shape and pose of faces from multiple images taken simultaneously. Extracting the 3D shape from images is important in areas such as pose-invariant face recognition and image manipulation. The method is based on a 3D morphable face model learned from a database of facial scans. The use of a strong face prior allows us to extract high precision surfaces from stereo data of faces, where traditional correlation based stereo methods fail because of the mostly textureless input images. The method uses two or more uncalibrated images of arbitrary baseline, estimating calibration and shape simultaneously. Results using two and three input images are presented. We replace the lighting and albedo estimation of a monocular method with the use of stereo information, making the system more accurate and robust. We evaluate the method using ground truth data and the standard PIE image dataset. A comparison with the state of the art monocular system shows that the new method has a significantly higher accuracy."
]
} |
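The "soft correspondence" edge cost that this row's related-work passage describes (distance transform of an edge image, sampled at projected model-contour points, as in LM-ICP) can be illustrated with a small sketch. The edge image and contour points below are synthetic, and scipy's `distance_transform_edt` stands in for the chamfer distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary edge image: True where an image edge pixel lies.
edges = np.zeros((100, 100), dtype=bool)
edges[50, 10:90] = True  # a single horizontal edge segment

# Distance transform of the complement: every pixel stores its
# Euclidean distance to the nearest edge pixel. This is the smoothly
# varying cost surface described in the passage above.
dist = distance_transform_edt(~edges)

# Hypothetical projected model-contour points, as (row, col).
contour_pts = np.array([[48.0, 20.0], [52.0, 40.0], [55.0, 60.0]])
rows = np.round(contour_pts[:, 0]).astype(int)
cols = np.round(contour_pts[:, 1]).astype(int)

# Soft-correspondence cost: sum of squared distances to nearest edges.
cost = float(np.sum(dist[rows, cols] ** 2))
print(cost)  # 2^2 + 2^2 + 5^2 = 33.0
```

In a real fitter the contour points would come from projecting the model's occluding boundary, and the gradient of `dist` would steer the shape and pose parameters toward the image edges.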
1602.01125 | 2261888986 | In this paper we explore the problem of fitting a 3D morphable model to single face images using only sparse geometric features (edges and landmark points). Previous approaches to this problem are based on nonlinear optimisation of an edge-derived cost that can be viewed as forming soft correspondences between model and image edges. We propose a novel approach that explicitly computes hard correspondences. The resulting objective function is non-convex but we show that a good initialisation can be obtained efficiently using alternating linear least squares in a manner similar to the iterated closest point algorithm. We present experimental results on both synthetic and real images and show that our approach outperforms methods that use soft correspondence and other recent methods that rely solely on geometric features. | Edge features have also been used in other ways. Cashman and Fitzgibbon @cite_6 learn a 3DMM from 2D images by fitting to silhouettes. Zhu et al. @cite_23 present a method that can be seen as a hybrid of landmark and edge fitting. Landmarks that define boundaries are allowed to slide over the 3D face surface during fitting. A recent alternative to optimisation-based approaches is to learn a regressor from extracted face contours to 3DMM shape parameters @cite_9. | {
"cite_N": [
"@cite_9",
"@cite_23",
"@cite_6"
],
"mid": [
"1565195577",
"1935685005",
"2066090933"
],
"abstract": [
"A novel 3D face estimation method based on a regression matrix and occluding contours. 3D vertices around occluding boundaries and their corresponding 2D pixel projections are highly correlated. The 3D face estimation method resembles dense surface shape recovery from missing data. This paper addresses the problem of 3D face shape approximation from occluding contours, i.e., the boundaries between the facial region and the background. To this end, a linear regression process that models the relationship between a set of 2D occluding contours and a set of 3D vertices is applied onto the corresponding training sets using Partial Least Squares. The result of this step is a regression matrix which is capable of estimating new 3D face point clouds from the out-of-training 2D Cartesian pixel positions of the selected contours. Our approach benefits from the highly correlated spaces spanned by the 3D vertices around the occluding boundaries of a face and their corresponding 2D pixel projections. As a result, the proposed method resembles dense surface shape recovery from missing data. Our technique is evaluated over four scenarios designed to investigate both the influence of the contours included in the training set and the considered number of contours. Qualitative and quantitative experiments demonstrate that using contours outperforms the state of the art on the database used in this article. Even using a limited number of contours provides a useful approximation to the 3D face surface.",
"Pose and expression normalization is a crucial step to recover the canonical view of faces under arbitrary conditions, so as to improve the face recognition performance. An ideal normalization method is desired to be automatic, database independent and high-fidelity, where the face appearance should be preserved with little artifact and information loss. However, most normalization methods fail to satisfy one or more of the goals. In this paper, we propose a High-fidelity Pose and Expression Normalization (HPEN) method with 3D Morphable Model (3DMM) which can automatically generate a natural face image in frontal pose and neutral expression. Specifically, we firstly make a landmark marching assumption to describe the non-correspondence between 2D and 3D landmarks caused by pose variations and propose a pose adaptive 3DMM fitting algorithm. Secondly, we mesh the whole image into a 3D object and eliminate the pose and expression variations using an identity preserving 3D transformation. Finally, we propose an inpainting method based on Poisson Editing to fill the invisible region caused by self occlusion. Extensive experiments on Multi-PIE and LFW demonstrate that the proposed method significantly improves face recognition performance and outperforms state-of-the-art methods in both constrained and unconstrained environments.",
"3D morphable models are low-dimensional parameterizations of 3D object classes which provide a powerful means of associating 3D geometry to 2D images. However, morphable models are currently generated from 3D scans, so for general object classes such as animals they are economically and practically infeasible. We show that, given a small amount of user interaction (little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restriction is that the object class should not be strongly articulated, and that a very rough rigid model should be provided as an initial estimate of the “mean shape.” The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object classes, and show that models of rather high quality can be obtained from this limited information."
]
} |
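The contour-to-shape regression described in the first abstract of this row (a regression matrix mapping 2D occluding-contour coordinates to 3DMM shape coefficients) can be sketched as follows. The data here are synthetic, and ordinary least squares stands in for the Partial Least Squares fit used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each row of X flattens the 2D pixel positions
# of a face's occluding contours; each row of Y holds the 3DMM shape
# coefficients of the same face. All sizes are illustrative.
n_samples, n_contour_coords, n_shape_params = 200, 40, 10
true_W = rng.standard_normal((n_contour_coords, n_shape_params))
X = rng.standard_normal((n_samples, n_contour_coords))
Y = X @ true_W  # noiseless linear relation, for illustration only

# Fit the regression matrix (the paper uses Partial Least Squares;
# ordinary least squares stands in here).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Estimate shape parameters for a new, unseen contour observation.
x_new = rng.standard_normal(n_contour_coords)
estimated_params = x_new @ W
print(np.allclose(estimated_params, x_new @ true_W))  # recovers the map
```

With noisy, correlated contour coordinates (the realistic case), PLS is preferred over plain least squares precisely because it projects onto a small number of latent directions before regressing.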