aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1610.01394 | 2529709694 | We describe an end-to-end framework for learning the parameters of a min-cost flow multi-target tracking problem with quadratic trajectory interactions, including suppression of overlapping tracks and contextual cues about co-occurrence of different objects. Our approach utilizes structured prediction with a tracking-specific loss function to learn the complete set of model parameters. In this learning framework, we evaluate two different approaches to finding an optimal set of tracks under a quadratic model objective, one based on a linear program (LP) relaxation and the other based on novel greedy variants of dynamic programming that handle pairwise interactions. We find the greedy algorithms achieve almost equivalent accuracy to the LP relaxation while being up to 10 @math faster than a commercial LP solver. We evaluate trained models on three challenging benchmarks. Surprisingly, we find that with proper parameter learning, our simple data association model without explicit appearance or motion reasoning is able to achieve comparable or better accuracy than many state-of-the-art methods that use far more complex motion features or appearance affinity metric learning. | Finally, recent work has also pursued detectors which are specifically optimized for tracking scenarios. @cite_34 propose to learn a multi-person detector using hard negatives acquired from a tracker's output, in order to let the detector solve ambiguities that the tracker cannot handle. @cite_15 propose to use a target identity-aware network-flow model to process videos in batches of frames, and learn people detectors for each individual person in an online fashion. | {
"cite_N": [
"@cite_15",
"@cite_34"
],
"mid": [
"1921895346",
"2136579260"
],
"abstract": [
"In this paper we show that multiple object tracking (MOT) can be formulated in a framework, where the detection and data-association are performed simultaneously. Our method allows us to overcome the confinements of data association based MOT approaches; where the performance is dependent on the object detection results provided at input level. At the core of our method lies structured learning which learns a model for each target and infers the best location of all targets simultaneously in a video clip. The inference of our structured learning is done through a new Target Identity-aware Network Flow (TINF), where each node in the network encodes the probability of each target identity belonging to that node. The proposed Lagrangian relaxation optimization finds the high quality solution to the network. During optimization a soft spatial constraint is enforced between the nodes of the graph which helps reducing the ambiguity caused by nearby targets with similar appearance in crowded scenarios. We show that automatically detecting and tracking targets in a single framework can help resolve the ambiguities due to frequent occlusion and heavy articulation of targets. Our experiments involve challenging yet distinct datasets and show that our method can achieve results better than the state-of-art.",
"People tracking in crowded real-world scenes is challenging due to frequent and long-term occlusions. Recent tracking methods obtain the image evidence from object (people) detectors, but typically use off-the-shelf detectors and treat them as black box components. In this paper we argue that for best performance one should explicitly train people detectors on failure cases of the overall tracker instead. To that end, we first propose a novel joint people detector that combines a state-of-the-art single person detector with a detector for pairs of people, which explicitly exploits common patterns of person-person occlusions across multiple viewpoints that are a frequent failure case for tracking in crowded scenes. To explicitly address remaining failure modes of the tracker we explore two methods. First, we analyze typical failures of trackers and train a detector explicitly on these cases. And second, we train the detector with the people tracker in the loop, focusing on the most common tracker failures. We show that our joint multi-person detector significantly improves both detection accuracy as well as tracker performance, improving the state-of-the-art on standard benchmarks."
]
} |
1610.01238 | 2528537661 | We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving. | Traditional methods of camera-based drivable path estimation for road vehicles involve preprocessing steps to remove shadow and exposure artefacts @cite_26 @cite_11 , extraction of low-level road and lane features @cite_14 @cite_37 , fitting road and lane models to feature detections @cite_4 @cite_18 , and temporal fusion of road and lane hypotheses between successive frames @cite_7 @cite_20 . While effective in well-maintained road environments, these approaches suffer in the presence of occlusions, shadows and changing lighting conditions, unstructured roads, and areas with few or no markings @cite_19 . Robustness can be significantly increased by combining images with radar @cite_12 or LIDAR @cite_13 but at an increased sensor cost. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_4",
"@cite_18",
"@cite_7",
"@cite_19",
"@cite_13",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2131076267",
"1856128652",
"",
"",
"1732557381",
"",
"2157076480",
"2124802015",
"",
""
],
"abstract": [
"",
"Driver-assistance systems that monitor driver intent, warn drivers of lane departures, or assist in vehicle guidance are all being actively considered. It is therefore important to take a critical look at key aspects of these systems, one of which is lane-position tracking. It is for these driver-assistance objectives that motivate the development of the novel \"video-based lane estimation and tracking\" (VioLET) system. The system is designed using steerable filters for robust and accurate lane-marking detection. Steerable filters provide an efficient method for detecting circular-reflector markings, solid-line markings, and segmented-line markings under varying lighting and road conditions. They help in providing robustness to complex shadowing, lighting changes from overpasses and tunnels, and road-surface variations. They are efficient for lane-marking extraction because by computing only three separable convolutions, we can extract a wide variety of lane markings. Curvature detection is made more robust by incorporating both visual cues (lane markings and lane texture) and vehicle-state information. The experiment design and evaluation of the VioLET system is shown using multiple quantitative metrics over a wide variety of test conditions on a large test path using a unique instrumented vehicle. A justification for the choice of metrics based on a previous study with human-factors applications as well as extensive ground-truth testing from different times of day, road conditions, weather, and driving scenarios is also presented. In order to design the VioLET system, an up-to-date and comprehensive analysis of the current state of the art in lane-detection research was first performed. In doing so, a comparison of a wide variety of methods, pointing out the similarities and differences between methods as well as when and where various methods are most useful, is presented",
"An essential functionality for advanced driver assistance systems (ADAS) is road segmentation, which directly supports ADAS applications like road departure warning and is an invaluable background segmentation stage for other functionalities as vehicle detection. Unfortunately, road segmentation is far from being trivial since the road is in an outdoor scenario imaged from a mobile platform. For instance, shadows are a relevant problem for segmentation. The usual approaches are ad hoc mechanisms, applied after an initial segmentation step, that try to recover road patches not included as segmented road for being in shadow. In this paper we argue that by using a different feature space to perform the segmentation we can minimize the problem of shadows from the very beginning. Rather than the usual segmentation in a color space we propose segmentation in a shadowless image which is computable in real---time using a color camera. The paper presents comparative results for both asphalted and non---asphalted roads, showing the benefits of the proposal in presence of shadows and vehicles.",
"",
"",
"Particle filtering of boundary points is a robust way to estimate lanes. This paper introduces a new lane model in correspondence to this particle filter-based approach, which is flexible to detect all kinds of lanes. A modified version of an Euclidean distance transform is applied to an edge map of a road image from a birds-eye view to provide information for boundary point detection. An efficient lane tracking method is also discussed. The use of this distance transform exploits useful information in lane detection situations, and greatly facilitates the initialization of the particle filter, as well as lane tracking. Finally, the paper validates the algorithm with experimental evidence for lane detection and tracking.",
"",
"This paper describes a system for detecting and estimating the properties of multiple travel lanes in an urban road network from calibrated video imagery and laser range data acquired by a moving vehicle. The system operates in real-time in several stages on multiple processors, fusing detected road markings, obstacles, and curbs into a stable non-parametric estimate of nearby travel lanes. The system incorporates elements of a provided piecewise-linear road network as a weak prior. Our method is notable in several respects: it detects and estimates multiple travel lanes; it fuses asynchronous, heterogeneous sensor streams; it handles high-curvature roads; and it makes no assumption about the position or orientation of the vehicle with respect to the road. We analyze the system's performance in the context of the 2007 DARPA Urban Challenge. With five cameras and thirteen lidars, our method was incorporated into a closed-loop controller to successfully guide an autonomous vehicle through a 90 km urban course at speeds up to 40 km h amidst moving traffic.",
"Treats a problem arising in the design of intelligent vehicles: automated detection of lane and pavement boundaries using forward-looking optical and radar imaging sensors mounted on an automobile. In previous work, lane and pavement boundaries have always been located separately. This separate detection strategy is problematic in situations when either the optical or the radar image is too noisy. We propose a Bayesian multisensor image fusion method to solve our boundary detection problem. This method makes use of a deformable template model to globally describe the boundaries of interest. The optical and radar imaging processes are described with random field likelihoods. The multisensor fusion boundary detection problem is reformulated as a joint MAP estimation problem. However, the joint MAP estimate is intractable, as it involves the computation of a notoriously difficult normalization constant, also known as the partition function. Therefore, we settle for the so-called empirical MAP estimate, as an approximation to the true MAP estimate. Several experimental results are provided to demonstrate the efficacy of the empirical MAP estimation method in simultaneously detecting lane and pavement boundaries. Fusion of multi-modal images is not only of interest to the intelligent vehicles community, but to others as well, such as biomedicine, remote sensing, target recognition. The method presented in the paper is also applicable to image fusion problems in these other areas.",
"",
""
]
} |
1610.01238 | 2528537661 | We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving. | The challenges in building large-scale labelled datasets have led some researchers to consider virtual environments, for which ground-truth semantic labels can be rendered in parallel with synthetic camera images. Customised video game engines have been used to produce hundreds of thousands of synthetic images with corresponding ground-truth labels @cite_3 @cite_8 . While virtual environments allow large-scale generation of ground-truth semantic labels, they present two problems: firstly, rendering pipelines are typically optimised for speed and may not accurately reflect real-world images (both of the above approaches suggest rendered images be used only for augmenting real-world datasets, and hence manual labelling is still necessary); secondly, the actions of the vehicle and all other agents in the virtual world must be pre-programmed and may not resemble real-world traffic scenarios. 
A recent method uses sparse 3D prior information to transfer labels to real-world 2D images @cite_31 but requires sophisticated 3D reconstructions and manual 3D annotations. | {
"cite_N": [
"@cite_31",
"@cite_3",
"@cite_8"
],
"mid": [
"2140262469",
"2431874326",
""
],
"abstract": [
"Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.",
"Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images, thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.",
""
]
} |
1610.01238 | 2528537661 | We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving. | Some approaches have proposed bypassing segmentation entirely and learning a direct mapping from input images to vehicle behaviour @cite_16 @cite_36 . These methods also use the driver of the data collection vehicle to generate the supervised labels for the network (e.g. steering angle) and have recently demonstrated impressive results in real-world driving tests @cite_29 , but it is not clear how this approach generalises to scenarios where there are multiple possible drivable paths to consider (e.g. intersections). Our approach instead uses the data collection vehicle driver to implicitly label proposed paths in the image, but still allows a planning algorithm to choose the best path for the current route. | {
"cite_N": [
"@cite_36",
"@cite_29",
"@cite_16"
],
"mid": [
"",
"2342840547",
"2167224731"
],
"abstract": [
"",
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand."
]
} |
1610.01376 | 2949967462 | This paper presents a novel approach for temporal and semantic segmentation of edited videos into meaningful segments, from the point of view of the storytelling structure. The objective is to decompose a long video into more manageable sequences, which can in turn be used to retrieve the most significant parts of it given a textual query and to provide an effective summarization. Previous video decomposition methods mainly employed perceptual cues, tackling the problem either as a story change detection, or as a similarity grouping task, and the lack of semantics limited their ability to identify story boundaries. Our proposal connects together perceptual, audio and semantic cues in a specialized deep network architecture designed with a combination of CNNs which generate an appropriate embedding, and clusters shots into connected sequences of semantic scenes, i.e. stories. A retrieval presentation strategy is also proposed, by selecting the semantically and aesthetically "most valuable" thumbnails to present, considering the query in order to improve the storytelling presentation. Finally, the subjective nature of the task is considered, by conducting experiments with different annotators and by proposing an algorithm to maximize the agreement between automatic results and human annotators. | Existing works in the field of automatic story detection can be roughly categorized into three groups @cite_37 : rule-based methods, that consider the way a video is structured in professional movie production; graph-based methods, where shots are arranged in a graph representation; and clustering-based methods. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2114762973"
],
"abstract": [
"We present a newly developed strategy for automatically segmenting movies into logical story units. A logical story unit can be understood as an approximation of a movie episode, which is a high-level temporal movie segment, characterized either by a single event (dialog, action scene, etc.) or by several events taking place in parallel. Since we consider a whole event and not a single shot to be the most natural retrieval unit for the movie category of video programs, the proposed segmentation is the crucial first step toward a concise and comprehensive content-based movie representation for browsing and retrieval purposes. The automation aspect is becoming increasingly important with the rising amount of information to be processed in video archives of the future. The segmentation process is designed to work on MPEG-DC sequences, where we have taken into account that at least a partial decoding is required for performing content-based operations on MPEG compressed video streams. The proposed technique allows for carrying out the segmentation procedure in a single pass through a video sequence."
]
} |
1610.01376 | 2949967462 | This paper presents a novel approach for temporal and semantic segmentation of edited videos into meaningful segments, from the point of view of the storytelling structure. The objective is to decompose a long video into more manageable sequences, which can in turn be used to retrieve the most significant parts of it given a textual query and to provide an effective summarization. Previous video decomposition methods mainly employed perceptual cues, tackling the problem either as a story change detection, or as a similarity grouping task, and the lack of semantics limited their ability to identify story boundaries. Our proposal connects together perceptual, audio and semantic cues in a specialized deep network architecture designed with a combination of CNNs which generate an appropriate embedding, and clusters shots into connected sequences of semantic scenes, i.e. stories. A retrieval presentation strategy is also proposed, by selecting the semantically and aesthetically "most valuable" thumbnails to present, considering the query in order to improve the storytelling presentation. Finally, the subjective nature of the task is considered, by conducting experiments with different annotators and by proposing an algorithm to maximize the agreement between automatic results and human annotators. | The drawback of rule-based methods is that they tend to fail in videos where film-editing rules are not followed strictly, or when two adjacent stories are similar and follow the same rules. The method proposed by Liu et al. in @cite_22 falls in this category: they propose a visual-based probabilistic framework that imitates the authoring process. In @cite_24 , shots are represented by means of key-frames, clustered using spectral clustering and low-level color features, and then labeled according to the clusters they belong to. 
Since video editing tends to follow repetitive patterns, boundaries are detected from the alignment score of the symbolic sequences, using the Needleman-Wunsch algorithm. | {
"cite_N": [
"@cite_24",
"@cite_22"
],
"mid": [
"2171186743",
"2030445831"
],
"abstract": [
"Video indexing requires the efficient segmentation of video into scenes. The video is first segmented into shots and a set of key-frames is extracted for each shot. Typical scene detection algorithms incorporate time distance in a shot similarity metric. In the method we propose, to overcome the difficulty of having prior knowledge of the scene duration, the shots are clustered into groups based only on their visual similarity and a label is assigned to each shot according to the group that it belongs to. Then, a sequence alignment algorithm is applied to detect when the pattern of shot labels changes, providing the final scene segmentation result. In this way shot similarity is computed based only on visual features, while ordering of shots is taken into account during sequence alignment. To cluster the shots into groups we propose an improved spectral clustering method that both estimates the number of clusters and employs the fast global k-means algorithm in the clustering stage after the eigenvector computation of the similarity matrix. The same spectral clustering method is applied to extract the key-frames of each shot and numerical experiments indicate that the content of each shot is efficiently summarized using the method we propose herein. Experiments on TV-series and movies also indicate that the proposed scene detection method accurately detects most of the scene boundaries while preserving a good tradeoff between recall and precision.",
"Compared with general videos, movies and TV shows attract a significantly larger portion of people across time and contain very rich and interesting narrative patterns of shots and scenes. In this paper, we aim to recover the inherent structure of scenes and shots in such video narratives. The obtained structure could be useful for subsequent video analysis tasks such as tracking objects across cuts, action retrieval, as well as enriching user browsing and video editing interfaces. Recent research on this problem has mainly focused on combining multiple cues such as scripts, subtitles, sound, or human faces. However, considering that visual information is sufficient for human to identify scene boundaries and some cues are not always available, we are motivated to design a purely visual approach. Observing that dialog patterns occur frequently in a movie TV show to form a scene, we propose a probabilistic framework to imitate the authoring process. The multi-thread shot model and contextual visual dynamics are embedded into a unified framework to capture the video hierarchy. We devise an efficient algorithm to jointly learn the parameters of the unified model. Experiments on two large datasets containing six movies and 24 episodes of Lost, a popular TV show with complex plot structures, are conducted. Comparative results show that, leveraging only visual cues, our method could successfully recover complicated shot threads and outperform several approaches. Moreover, our method is fast and advantageous for large-scale computation."
]
} |
1610.01376 | 2949967462 | This paper presents a novel approach for temporal and semantic segmentation of edited videos into meaningful segments, from the point of view of the storytelling structure. The objective is to decompose a long video into more manageable sequences, which can in turn be used to retrieve the most significant parts of it given a textual query and to provide an effective summarization. Previous video decomposition methods mainly employed perceptual cues, tackling the problem either as a story change detection, or as a similarity grouping task, and the lack of semantics limited their ability to identify story boundaries. Our proposal connects together perceptual, audio and semantic cues in a specialized deep network architecture designed with a combination of CNNs which generate an appropriate embedding, and clusters shots into connected sequences of semantic scenes, i.e. stories. A retrieval presentation strategy is also proposed, by selecting the semantically and aesthetically "most valuable" thumbnails to present, considering the query in order to improve the storytelling presentation. Finally, the subjective nature of the task is considered, by conducting experiments with different annotators and by proposing an algorithm to maximize the agreement between automatic results and human annotators. | Clustering-based solutions assume that similarity of shots can be used to group them into meaningful clusters, thus directly providing the final temporal boundaries. In @cite_21 , for instance, a Siamese Network is used together with features extracted from a CNN and time features to learn distances between shots. Spectral clustering is then applied to detect coherent sequences. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2132579158"
],
"abstract": [
"We present a model that automatically divides broadcast videos into coherent scenes by learning a distance measure between shots. Experiments are performed to demonstrate the effectiveness of our approach by comparing our algorithm against recent proposals for automatic scene segmentation. We also propose an improved performance measure that aims to reduce the gap between numerical evaluation and expected results, and propose and release a new benchmark dataset."
]
} |
1610.01376 | 2949967462 | This paper presents a novel approach for temporal and semantic segmentation of edited videos into meaningful segments, from the point of view of the storytelling structure. The objective is to decompose a long video into more manageable sequences, which can in turn be used to retrieve the most significant parts of it given a textual query and to provide an effective summarization. Previous video decomposition methods mainly employed perceptual cues, tackling the problem either as a story change detection, or as a similarity grouping task, and the lack of semantics limited their ability to identify story boundaries. Our proposal connects together perceptual, audio and semantic cues in a specialized deep network architecture designed with a combination of CNNs which generate an appropriate embedding, and clusters shots into connected sequences of semantic scenes, i.e. stories. A retrieval presentation strategy is also proposed, by selecting the semantically and aesthetically "most valuable" thumbnails to present, considering the query in order to improve the storytelling presentation. Finally, the subjective nature of the task is considered, by conducting experiments with different annotators and by proposing an algorithm to maximize the agreement between automatic results and human annotators. | A great deal of work has also been devoted to video retrieval: with the explosive growth of online videos, this has become a hot topic in computer vision. In their seminal work, Sivic et al. proposed Video Google @cite_27 , a system that retrieves videos from a database via bag-of-words matching. Lew et al. @cite_16 reviewed earlier efforts in video retrieval, which mostly relied on feature-based relevance feedback or similar methods. | {
"cite_N": [
"@cite_27",
"@cite_16"
],
"mid": [
"2131846894",
"2147069236"
],
"abstract": [
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100p recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future."
]
} |
1610.01376 | 2949967462 | This paper presents a novel approach for temporal and semantic segmentation of edited videos into meaningful segments, from the point of view of the storytelling structure. The objective is to decompose a long video into more manageable sequences, which can in turn be used to retrieve the most significant parts of it given a textual query and to provide an effective summarization. Previous video decomposition methods mainly employed perceptual cues, tackling the problem either as a story change detection, or as a similarity grouping task, and the lack of semantics limited their ability to identify story boundaries. Our proposal connects together perceptual, audio and semantic cues in a specialized deep network architecture designed with a combination of CNNs which generate an appropriate embedding, and clusters shots into connected sequences of semantic scenes, i.e. stories. A retrieval presentation strategy is also proposed, by selecting the semantically and aesthetically "most valuable" thumbnails to present, considering the query in order to improve the storytelling presentation. Finally, the subjective nature of the task is considered, by conducting experiments with different annotators and by proposing an algorithm to maximize the agreement between automatic results and human annotators. | Retrieved results eventually need to be presented to the user, but previewing many videos playing simultaneously is not feasible. The usual approach is to present a set of video thumbnails. Thumbnails are basically surrogates for videos @cite_36 , as they take the place of a video in search results. Therefore, they may not accurately represent the content of the video, and create an intention gap, i.e. a discrepancy between the information sought by the user and the actual content of the video.
Most conventional methods aim at selecting the "best" thumbnail, and have focused on learning visual representativeness purely from visual content @cite_20 @cite_32 . However, more recent research has focused on choosing query-dependent thumbnails to supply specific thumbnails for different queries. To reduce the intention gap, @cite_36 proposes a new kind of animated preview, constructed of frames taken from a full video, and a crowdsourced tagging process which enables the matching between query terms and videos. Their system, while going in the right direction, suffers from the need for manual annotations, which are often expensive and difficult to obtain. | {
"cite_N": [
"@cite_36",
"@cite_32",
"@cite_20"
],
"mid": [
"2054235944",
"2163527813",
"1966451496"
],
"abstract": [
"During online search, the user's expectations often differ from those of the author. This is known as the \"intention gap\" and is particularly problematic when searching for and discriminating between online video content. An author uses description and meta-data tags to label their content, but often cannot predict alternate interpretations or appropriations of their work. To address this intention gap, we present ThumbReels, a concept for query-sensitive video previews generated from crowdsourced, temporally defined semantic tagging. Further, we supply an open-source tool that supports on-the-fly temporal tagging of videos, whose output can be used for later search queries. A first user study validates the tool and concept. We then present a second study that shows participants found ThumbReels to better represent search terms than contemporary preview techniques.",
"The power of video over still images is the ability to represent dynamic activities. But video browsing and retrieval are inconvenient due to inherent spatio-temporal redundancies, where some time intervals may have no activity, or have activities that occur in a small image region. Video synopsis aims to provide a compact video representation, while preserving the essential activities of the original video. We present dynamic video synopsis, where most of the activity in the video is condensed by simultaneously showing several actions, even when they originally occurred at different times. For example, we can create a \"stroboscopic movie\", where multiple dynamic instances of a moving object are played simultaneously. This is an extension of the still stroboscopic picture. Previous approaches for video abstraction addressed mostly the temporal redundancy by selecting representative key-frames or time intervals. In dynamic video synopsis the activity is shifted into a significantly shorter period, in which the activity is much denser. Video examples can be found online in http: www.vision.huji.ac.il synopsis",
"With the rapid explosion of video data, compact representation of videos is becoming more and more desirable for efficient browsing and communication, which leads to a number of research works on video summarization in recent years. Among these works, summaries based on a set of still frames are frequently studied and applied due to its high compactness. However, the representativeness of the selected frames, which are taken as the compact representation of the video or video segment, has not been well studied. It is observed that frame representativeness is highly related to the following elements: image quality, user attention measure, visual details, and displaying duration. It is also observed that users have similar tendency in selecting the most representative frame for a certain video segment. In this paper, we developed a method to examine and evaluate the representativeness of video frames based on learning users' perceptive evaluations."
]
} |
1610.01185 | 2528897335 | For which sets A does there exist a mapping, computed by a total or partial recursive function, such that the mapping, when its domain is restricted to A, is a 1-to-1, onto mapping to @math ? And for which sets A does there exist such a mapping that respects the lexicographical ordering within A? Both cases are types of perfect, minimal hash functions. The complexity-theoretic versions of these notions are known as compression functions and ranking functions. The present paper defines and studies the recursion-theoretic versions of compression and ranking functions, and in particular studies the question of which sets have, or lack, such functions. Thus, this is a case where, in contrast to the usual direction of notion transferal, notions from complexity theory are inspiring notions, and an investigation, in computability theory. We show that the rankable and compressible sets broadly populate the 1-truth-table degrees, and we prove that every nonempty coRE cylinder is recursively compressible. | The most closely related papers are those mentioned in . Given the importance to this paper of mappings that are onto @math , we mention also the line of work, dating back to Brassard, Fortune, and Hopcroft's early paper on one-way functions @cite_21 , that looks at the complexity of inverting functions that map onto @math @cite_21 @cite_1 @cite_5 @cite_29 . However, both that line and the papers mentioned in are about complexity-theoretic functions, while in contrast the current paper is about recursion-theoretic functions. | {
"cite_N": [
"@cite_5",
"@cite_29",
"@cite_21",
"@cite_1"
],
"mid": [
"",
"2410312262",
"1495343332",
"2044637393"
],
"abstract": [
"",
"In this habilitation thesis, the structure and properties of complexity classes such as P and NP are investigated, above all with regard to: certificate complexity, one-way functions, heuristics against NP-completeness, and counting complexity. On the last point, the following are examined in particular: (a) the complexity of counting properties of circuits, (b) separations of counting classes with immunity, and (c) the complexity of counting the solutions of \"tally NP problems\".",
"Diffie and Hellman [2] propose the use of the exponential function in a finite field for cryptographic purposes. The proposal is based on the conjecture that the inverse function, the logarithm, is not feasibly computable. We show that a proof of this conjecture would have important consequences for theoretical computer science, even under the assumption that P @math NP.",
"We look at the hypothesis that all honest onto polynomial-time computable functions have a polynomial-time computable inverse. We show this hypothesis equivalent to several other complexity conjectures including: • In polynomial time, one can find accepting paths of nondeterministic polynomial-time Turing machines that accept Σ*. • Every total multivalued nondeterministic function has a polynomial-time computable refinement. • In polynomial time, one can compute satisfying assignments for any polynomial-time computable set of satisfiable formulae. • In polynomial time, one can convert the accepting computations of any nondeterministic Turing machine that accepts SAT to satisfying assignments.We compare these hypotheses with several other important complexity statements. We also examine the complexity of these statements where we only require a single bit instead of the entire inverse."
]
} |
1610.01185 | 2528897335 | For which sets A does there exist a mapping, computed by a total or partial recursive function, such that the mapping, when its domain is restricted to A, is a 1-to-1, onto mapping to @math ? And for which sets A does there exist such a mapping that respects the lexicographical ordering within A? Both cases are types of perfect, minimal hash functions. The complexity-theoretic versions of these notions are known as compression functions and ranking functions. The present paper defines and studies the recursion-theoretic versions of compression and ranking functions, and in particular studies the question of which sets have, or lack, such functions. Thus, this is a case where, in contrast to the usual direction of notion transferal, notions from complexity theory are inspiring notions, and an investigation, in computability theory. We show that the rankable and compressible sets broadly populate the 1-truth-table degrees, and we prove that every nonempty coRE cylinder is recursively compressible. | In fact, this paper is quite the reverse of the typical direction of inspiration. A large number of the core concepts of complexity theory are defined by direct analogy with notions from recursive function theory. As just a few examples, NP, the polynomial hierarchy @cite_0 @cite_17 , most of complexity theory's reduction notions @cite_27 , (complexity-theoretic) creative/simple/immune/bi-immune sets @cite_20 @cite_26 @cite_2 @cite_11 @cite_3 , and the semi-feasible sets @cite_22 are lifted quite directly from recursive function theory, with, as needed, the appropriate, natural changes to focus on the deterministic and nondeterministic polynomial-time realms. The debt that complexity theory owes to recursive function theory is huge. | {
"cite_N": [
"@cite_26",
"@cite_11",
"@cite_22",
"@cite_3",
"@cite_0",
"@cite_27",
"@cite_2",
"@cite_20",
"@cite_17"
],
"mid": [
"1981282181",
"",
"2007527356",
"2059997023",
"2007464118",
"1986203294",
"2005127553",
"2106436148",
"2065251436"
],
"abstract": [
"An infinite and co-infinite setA is bi-immune for a complexity classC if neitherA nor its complement has an infinite subset inC. We prove various equivalent characterizations of this notion. Also, we introduce a stronger version of bi-immunity and show how both notions relate to density and other properties of sets in EXPTIME.",
"",
"",
"Abstract We consider under the assumption P ≠ NP questions concerning the structure of the lattice of NP sets together with the sublattice P. We show that two questions which are slightly more complex than the known splitting properties of this lattice cannot be settled by arguments which relativize. The two questions which we consider are whether every infinite NP set contains an infinite P subset and whether there exists an NP-simple set. We construct several oracles, all of which make P ≠ NP, and which in addition make the above-mentioned statements either true or false. In particular we give a positive answer to the question, raised by Bennett and Gill (1981), whether an oracle B exists making P B ≠ NP B and such that every infinite set in NP B has an infinite subset in P B . The constructions of the oracles are finite injury priority arguments.",
"",
"Abstract Various forms of polynomial time reducibility are compared. Among the forms examined are many-one, bounded truth table, truth table and Turing reducibility. The effect of introducing nondeterminism into reduction procedures is also examined.",
"Abstract Two structurally defined types of NP-sets are studied. k -Simple sets are defined and shown to exist in NP. Other properties of these sets are investigated. k -Creative sets, as previously defined by Joseph and Young (1985), are next considered. A new condition is given which implies that a set is k -creative. Several previously considered NP-complete sets are proved to be k -creative.",
"The many types of resource-bounded reductions that are both an object of study and a research tool in structural complexity theory have given rise to a large variety of completeness notions. A complete set in a complexity class is a manageable object that represents the structure of the entire class. The study of its structure can reveal properties that are general in that the complexity class, and the study of the structure of complete sets in different classes, can reveal secrets about the relation between these classes. Research into all sorts of aspects and properties of complete sets has been and will be a major topic in structural complexity theory. In this expository paper, we review the progress that has been made in recent years on selected topics in the study of complete sets. >",
"Abstract The polynomial-time hierarchy is that subrecursive analog of the Kleene arithmetical hierarchy in which deterministic (nondeterministic) polynomial time plays the role of recursive (recursively enumerable) time. Known properties of the polynomial-time hierarchy are summarized. A word problem which is complete in the second stage of the hierarchy is exhibited. In the analogy between the polynomial-time hierarchy and the arithmetical hierarchy, the first order theory of equality plays the role of elementary arithmetic (as the ω-jump of the hierarchy). The problem of deciding validity in the theory of equality is shown to be complete in polynomial-space, and close upper and lower bounds on the space complexity of this problem are established."
]
} |
1610.00883 | 2952072734 | This paper makes a simple increment to state-of-the-art in sarcasm detection research. Existing approaches are unable to capture subtle forms of context incongruity which lies at the heart of sarcasm. We explore if prior work can be enhanced using semantic similarity discordance between word embeddings. We augment word embedding-based features to four feature sets reported in the past. We also experiment with four types of word embeddings. We observe an improvement in sarcasm detection, irrespective of the word embedding used or the original feature set to which our features are augmented. For example, this augmentation results in an improvement in F-score of around 4 for three out of these four feature sets, and a minor degradation in case of the fourth, when Word2Vec embeddings are used. Finally, a comparison of the four embeddings shows that Word2Vec and dependency weight-based features outperform LSA and GloVe, in terms of their benefit to sarcasm detection. | Early sarcasm detection research focused on speech @cite_17 and lexical features @cite_15 . Several other features have been proposed @cite_15 @cite_14 @cite_16 @cite_10 @cite_12 @cite_1 @cite_7 @cite_2 @cite_11 @cite_5 @cite_6 . Of particular relevance to our work are papers that aim to first extract patterns relevant to sarcasm detection. use a semi-supervised approach that extracts sentiment-bearing patterns for sarcasm detection. extract phrases corresponding to implicit incongruity i.e. the situation where sentiment is expressed without use of sentiment words. describe a bootstrapping algorithm that iteratively discovers a set of positive verbs and negative situation phrases, which are later used in a sarcasm detection algorithm. also perform semi-supervised extraction of patterns for sarcasm detection. The only prior work which uses word embeddings for a related task of sarcasm detection is by . 
They model sarcasm detection as a word sense disambiguation task, and use embeddings to identify whether a word is used in the sarcastic or non-sarcastic sense. Two sense vectors for every word are created: one for literal sense and one for sarcastic sense. The final sense is determined based on the similarity of these sense vectors with the sentence vector. | {
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2251920663",
"1842080548",
"2251971374",
"2405202646",
"2157961599",
"2142112646",
"2250489604",
"1983578042",
"2250247764",
"2114661483",
"",
"2098437222"
],
"abstract": [
"The relationship between context incongruity and sarcasm has been studied in linguistics. We present a computational system that harnesses context incongruity as a basis for sarcasm detection. Our statistical sarcasm classifiers incorporate two kinds of incongruity features: explicit and implicit. We show the benefit of our incongruity features for two text forms tweets and discussion forum posts. Our system also outperforms two past works (with Fscore improvement of 10-20 ). We also show how our features can capture intersentential incongruity.",
"Irony is an effective but challenging mode of communication that allows a speaker to express sentiment-rich viewpoints with concision, sharpness and humour. Irony is especially common in online documents that express subjective and deeply-felt opinions, and thus represents a significant obstacle to the accurate analysis of sentiment in web texts. In this paper we look at one commonly used framing device for linguistic irony --the simile --to show how irony is often marked in ways that make it computationally feasible to detect. We conduct a very large corpus analysis of web-harvested similes to identify the most interesting characteristics of ironic comparisons, and provide an empirical evaluation of a new algorithm for separating ironic from non-ironic similes.",
"Automatically detecting verbal irony (roughly, sarcasm) in online content is important for many practical applications (e.g., sentiment detection), but it is difficult. Previous approaches have relied predominantly on signal gleaned from word counts and grammatical cues. But such approaches fail to exploit the context in which comments are embedded. We thus propose a novel strategy for verbal irony classification that exploits contextual features, specifically by combining noun phrases and sentiment extracted from comments with the forum type (e.g., conservative or liberal) to which they were posted. We show that this approach improves verbal irony classification performance. Furthermore, because this method generates a very large feature space (and we expect predictive contextual features to be strong but few), we propose a mixed regularization strategy that places a sparsity-inducing 1 penalty on the contextual feature weights on top of the 2 penalty applied to all model coefficients. This increases model sparsity and reduces the variance of model performance.",
"While a fair amount of work has been done on automatically detecting emotion in human speech, there has been little research on sarcasm detection. Although sarcastic speech acts are inherently subjective, humans have relatively clear intuitions as to what constitutes sarcastic speech. In this paper, we present a system for automatic sarcasm detection. Using a new acted speech corpus that is annotated for sarcastic and sincere speech, we examine a number of features that are indicative of sarcasm. The first set of features looks at a baseline of basic acoustic features that have been found to be helpful in human sarcasm identification. We then present an effective way of modeling and applying prosodic contours to the task of automatic sarcasm detection. This approach applies sequential modeling to categorical representations of pitch and intensity contours obtained via k-means clustering. Using a SimpleLogistic (LogitBoost) classifier, we are able to predict sarcasm with 81.57 accuracy. This result suggests that certain pitch and intensity contours are predictive of sarcastic speech.",
"Irony is an important device in human communication, both in everyday spoken conversations as well as in written texts including books, websites, chats, reviews, and Twitter messages among others. Specific cases of irony and sarcasm have been studied in different contexts but, to the best of our knowledge, only recently the first publicly available corpus including annotations about whether a text is ironic or not has been published by Filatova (2012). However, no baseline for classification of ironic or sarcastic reviews has been provided. With this paper, we aim at closing this gap. We formulate the problem as a supervised classification task and evaluate different classifiers, reaching an F1-measure of up to 74 using logistic regression. We analyze the impact of a number of features which have been proposed in previous research as well as combinations of them.",
"Automatically detecting verbal irony (roughly, sarcasm) is a challenging task because ironists say something other than ‐ and often opposite to ‐ what they actually mean. Discerning ironic intent exclusively from the words and syntax comprising texts (e.g., tweets, forum posts) is therefore not always possible: additional contextual information about the speaker and or the topic at hand is often necessary. We introduce a new corpus that provides empirical evidence for this claim. We show that annotators frequently require context to make judgements concerning ironic intent, and that machine learning approaches tend to misclassify those same comments for which annotators required additional context.",
"Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well.",
"How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.",
"Sarcasm understanding may require information beyond the text itself, as in the case of ‘I absolutely love this restaurant!’ which may be sarcastic, depending on the contextual situation. We present the first quantitative evidence to show that historical tweets by an author can provide additional context for sarcasm detection. Our sarcasm detection approach uses two components: a contrast-based predictor (that identifies if there is a sentiment contrast within a target tweet), and a historical tweet-based predictor (that identifies if the sentiment expressed towards an entity in the target tweet agrees with sentiment expressed by the author towards that entity in the past).",
"To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85 of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75 ) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30 of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.",
"",
"The robust understanding of sarcasm in a spoken dialogue system requires a reformulation of the dialogue manager’s basic assumptions behind, for example, user behavior and grounding strategies. But automatically detecting a sarcastic tone of voice is not a simple matter. This paper presents some experiments toward sarcasm recognition using prosodic, spectral, and contextual cues. Our results demonstrate that spectral and contextual features can be used to detect sarcasm as well as a human annotator would, and confirm a long-held claim in the field of psychology – that prosody alone is not sufficient to discern whether a speaker is being sarcastic. Index Terms: dialogue, user modeling, sarcasm, speech acts"
]
} |
1610.00850 | 2952797731 | Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. "Human-Centric" (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. "Robot-Centric" (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. | examined and derived that the error in the worst case from this approach can grow quadratically in the time horizon, @math @cite_3 .
The intuition behind this analysis is that if the distribution induced by the robot's policy is different from the supervisor's, the robot could incur the maximum error. Expressive policies can help manage this term, @math , by achieving very low training error. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2396217537"
],
"abstract": [
"One way to approach end-to-end autonomous driving is to learn a policy function that maps from a sensory input, such as an image frame from a front-facing camera, to a driving action, by imitating an expert driver, or a reference policy. This can be done by supervised learning, where a policy function is tuned to minimize the difference between the predicted and ground-truth actions. A policy function trained in this way however is known to suffer from unexpected behaviours due to the mismatch between the states reachable by the reference policy and trained policy functions. More advanced algorithms for imitation learning, such as DAgger, addresses this issue by iteratively collecting training examples from both reference and trained policies. These algorithms often requires a large number of queries to a reference policy, which is undesirable as the reference policy is often expensive. In this paper, we propose an extension of the DAgger, called SafeDAgger, that is query-efficient and more suitable for end-to-end autonomous driving. We evaluate the proposed SafeDAgger in a car racing simulator and show that it indeed requires less queries to a reference policy. We observe a significant speed up in convergence, which we conjecture to be due to the effect of automated curriculum learning."
]
} |
1610.00850 | 2952797731 | Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. "Human-Centric" (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. "Robot-Centric" (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. | has been used in numerous robotic examples, including flying a quadcopter through a forest where the state space is image data taken from an onboard sensor @cite_6 . 
Other successful examples include teaching a robotic wheelchair to navigate to target positions @cite_21 , teaching a robot to follow verbal instructions to navigate across an office building @cite_14 and teaching a robot to grasp in clutter @cite_2 . | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_6",
"@cite_2"
],
"mid": [
"1991374750",
"2295827790",
"1980969546",
"2551182329"
],
"abstract": [
"The use of spoken instructions in human-robot teams holds the promise of enabling untrained users to effectively control complex robotic systems in a natural and intuitive way. Providing robots with the capability to understand natural language directions would enable effortless coordination in human robot teams that operate in non-specialized unknown environments. However, natural language direction following through unknown environments requires understanding the meaning of language, using a partial semantic world model to generate actions in the world, and reasoning about the environment and landmarks that have not yet been detected. We address the problem of robots following natural language directions through complex unknown environments. By exploiting the structure of spatial language, we can frame direction following as a problem of sequential decision making under uncertainty. We learn a policy which predicts a sequence of actions that follow the directions by exploring the environment and discovering landmarks, backtracking when necessary, and explicitly declaring when it has reached the destination. We use imitation learning to train the policy, using demonstrations of people following directions. By training explicitly in unknown environments, we can generalize to situations that have not been encountered previously.",
"Imitation learning is an efficient method for many robots to acquire complex skills. Some recent approaches to imitation learning provide strong theoretical performance guarantees. However, there remain crucial practical issues, especially during the training phase, where the training strategy may require execution of control policies that are possibly harmful to the robot or its environment. Moreover, these algorithms often require more demonstrations than necessary to achieve good performance in practice. This paper introduces a new approach called Maximum Mean Discrepancy Imitation Learning that uses fewer demonstrations and safer exploration policy than existing methods, while preserving strong theoretical guarantees on performance. We demonstrate empirical performance of this method for effective navigation control of a social robot in a populated environment, where safety and efficiency during learning are primary considerations.",
"Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straight-forward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs) which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5m s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAVs heading. We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors.",
"For applications such as Amazon warehouse order fulfillment, robots must grasp a desired object amid clutter: other objects that block direct access. This can be difficult to program explicitly due to uncertainty in friction and push mechanics and the variety of objects that can be encountered. Deep Learning networks combined with Online Learning from Demonstration (LfD) algorithms such as DAgger and SHIV have potential to learn robot control policies for such tasks where the input is a camera image and system dynamics and the cost function are unknown. To explore this idea, we introduce a version of the grasping in clutter problem where a yellow cylinder must be grasped by a planar robot arm amid extruded objects in a variety of shapes and positions. To reduce the burden on human experts to provide demonstrations, we propose using a hierarchy of three levels of supervisors: a fast motion planner that ignores obstacles, crowd-sourced human workers who provide appropriate robot control values remotely via online videos, and a local human expert. Physical experiments suggest that with 160 expert demonstrations, using the hierarchy of supervisors can increase the probability of a successful grasp (reliability) from 55 to 90 ."
]
} |
1610.00850 | 2952797731 | Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. "Human-Centric" (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. "Robot-Centric" (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. | @cite_1 analyzed RC sampling as online optimization. 
They propose DAgger, an RC sampling algorithm, and show that the error for the robot’s policy is linear in @math for strongly convex losses (e.g. regularized linear or kernelized regression). However, the error also depends on the expected loss on the data collected during RC, which may be high due to observing complex recovery behaviors. We analyze this effect and show how it can prevent RC from converging to the supervisor's policy in Section V-A. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1931877416"
],
"abstract": [
"Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem."
]
} |
1610.00850 | 2952797731 | Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. "Human-Centric" (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. "Robot-Centric" (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. | identified that the RC sampling can force the robot into states that are harder to learn and proposed weighting the samples to correct for this. 
They also proposed forcing the supervisor to guide the robot’s policy to better regions of the state space @cite_22 . This approach is used with an algorithmic supervisor and assumes both that the supervisor can be modified and that the noise distribution is known. Guided Policy Search also assumes a low-dimensional state space, where dynamics are known or can be reasonably estimated. We are interested in maintaining the original assumptions of @cite_1 , in which there is an unknown supervisor whose demonstrations cannot be modified, and potentially high-dimensional image state spaces. Finally, proposed changing the supervisor’s examples to be easier for the robot’s policy to learn @cite_15 . We are interested in examining how changing the expressiveness of the robot’s policy affects the relative performance of RC and HC. | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_22"
],
"mid": [
"2152588577",
"1931877416",
"2098284983"
],
"abstract": [
"Imitation Learning has been shown to be successful in solving many challenging real-world problems. Some recent approaches give strong performance guarantees by training the policy iteratively. However, it is important to note that these guarantees depend on how well the policy we found can imitate the oracle on the training data. When there is a substantial difference between the oracle's ability and the learner's policy space, we may fail to find a policy that has low error on the training set. In such cases, we propose to use a coach that demonstrates easy-to-learn actions for the learner and gradually approaches the oracle. By a reduction of learning by demonstration to online learning, we prove that coaching can yield a lower regret bound than using the oracle. We apply our algorithm to cost-sensitive dynamic feature selection, a hard decision problem that considers a user-specified accuracy-cost trade-off. Experimental results on UCI datasets show that our method outperforms state-of-the-art imitation learning methods in dynamic feature selection and two static feature selection methods.",
"Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.",
"In order to learn effective control policies for dynamical systems, policy search methods must be able to discover successful executions of the desired task. While random exploration can work well in simple domains, complex and high-dimensional tasks present a serious challenge, particularly when combined with high-dimensional policies that make parameter-space exploration infeasible. We present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum likelihood policy objective allows us to use standard trajectory optimization algorithms such as differential dynamic programming, interleaved with standard supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks."
]
} |
1610.00850 | 2952797731 | Motivated by recent advances in Deep Learning for robot control, this paper considers two learning algorithms in terms of how they acquire demonstrations. "Human-Centric" (HC) sampling is the standard supervised learning algorithm, where a human supervisor demonstrates the task by teleoperating the robot to provide trajectories consisting of state-control pairs. "Robot-Centric" (RC) sampling is an increasingly popular alternative used in algorithms such as DAgger, where a human supervisor observes the robot executing a learned policy and provides corrective control labels for each state visited. RC sampling can be challenging for human supervisors and prone to mislabeling. RC sampling can also induce error in policy performance because it repeatedly visits areas of the state space that are harder to learn. Although policies learned with RC sampling can be superior to HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling may be comparable with highly-expressive learning models such as deep learning and hyper-parametric decision trees, which have little model error. We compare HC and RC using a grid world and a physical robot singulation task, where in the latter the input is a binary image of a connected set of objects on a planar worksurface and the policy generates a motion of the gripper to separate one object from the rest. We observe in simulation that for linear SVMs, policies learned with RC outperformed those learned with HC but that with deep models this advantage disappears. We also find that with RC, the corrective control labels provided by humans can be highly inconsistent. We prove there exists a class of examples where in the limit, HC is guaranteed to converge to an optimal policy while RC may fail to converge. | Standard techniques for providing demonstrations to a robot are teleoperation, kinesthetic and waypoint specification @cite_19 @cite_0 @cite_16 . 
Kinesthetic teaching is defined as moving the robot body via a human exerting force on the robot itself. Teleoperation uses an interface such as a joystick or video game controller to control the position of the robot end effector. Waypoint specification, or keyframes, has a human select positions in the workspace that the robot must visit. These methods are forms of HC sampling because the human guides the robot through the task. We look specifically at teleoperation and compare it to RC's form of retroactive feedback. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_16"
],
"mid": [
"2402898455",
"187005120",
""
],
"abstract": [
"The field of robot Learning from Demonstration (LfD) makes use of several input modalities for demonstrations (teleoperation, kinesthetic teaching, marker- and vision-based motion tracking). In this paper we present two experiments aimed at identifying and overcoming challenges associated with using teleoperation as an input modality for LfD. Our first experiment compares kinesthetic teaching and teleoperation and highlights some inherent problems associated with teleoperation; specifically uncomfortable user interactions and inaccurate robot demonstrations. Our second experiment is focused on overcoming these problems and designing the teleoperation interaction to be more suitable for LfD. In previous work we have proposed a novel demonstration strategy using the concept of keyframes, where demonstrations are in the form of a discrete set of robot configurations. Keyframes can be naturally combined with continuous trajectory demonstrations to generate a hybrid strategy. We perform user studies to evaluate each of these demonstration strategies individually and show that keyframes are intuitive to the users and are particularly useful in providing noise-free demonstrations. We find that users prefer the hybrid strategy best for demonstrating tasks to a robot by teleoperation.",
"We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human–Robot Interaction perspective. Our approach—Keyframe-based Learning from Demonstration (KLfD)—takes demonstrations that consist of keyframes; a sparse set of points in the state space that produces the intended skill when visited in sequence. The conventional type of trajectory demonstrations or a hybrid of the two are also handled by KLfD through a conversion to keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D and scooping, pouring and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when demonstration type is suited for the skill.",
""
]
} |
1610.00529 | 2951456741 | We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies. | Policy search methods have been used in robotics for a variety of tasks, such as manipulation @cite_18 @cite_15 @cite_12 , playing table tennis @cite_31 and ball-in-a-cup @cite_7 games, regrasping @cite_10 , and locomotion @cite_24 @cite_30 @cite_8 . Most of these works use carefully designed, specialized policies that either employ domain knowledge, or have a low number of parameters. 
It has been empirically observed that training high-dimensional policies, such as deep neural networks, becomes exceedingly difficult with standard model-free policy search methods @cite_13 . Although deep reinforcement learning methods have made considerable progress in this regard in recent years @cite_5 @cite_6 , their high sample complexity has limited their application to real-world robotic learning problems. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_18",
"@cite_7",
"@cite_15",
"@cite_8",
"@cite_6",
"@cite_24",
"@cite_5",
"@cite_31",
"@cite_10",
"@cite_12"
],
"mid": [
"2109008048",
"2012587148",
"2161395589",
"2108579172",
"1520597402",
"2051620263",
"2173248099",
"2139053308",
"2949608212",
"2591436094",
"2340935928",
"1979709633"
],
"abstract": [
"We present a learning system which is able to quickly and reliably acquire a robust feedback control policy for 3D dynamic walking from a blank-slate using only trials implemented on our physical robot. The robot begins walking within a minute and learning converges in approximately 20 minutes. This success can be attributed to the mechanics of our robot, which are modeled after a passive dynamic walker, and to a dramatic reduction in the dimensionality of the learning problem. We reduce the dimensionality by designing a robot with only 6 internal degrees of freedom and 4 actuators, by decomposing the control system in the frontal and sagittal planes, and by formulating the learning problem on the discrete return map dynamics. We apply a stochastic policy gradient algorithm to this reduced problem and decrease the variance of the update using a state-based estimate of the expected cost. This optimized learning system works quickly enough that the robot is able to continually adapt to the terrain as it walks.",
"Policy search is a subfield in reinforcement learning which focuses on finding good parameters for a given policy parametrization. It is well suited for robotics as it can cope with high-dimensional state and action spaces, one of the main challenges in robot learning. We review recent successes of both model-free and model-based policy search in robot learning.Model-free policy search is a general approach to learn policies based on sampled trajectories. We classify model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy and present a unified view on existing algorithms. Learning a policy is often easier than learning an accurate forward model, and, hence, model-free methods are more frequently used in practice. However, for each sampled trajectory, it is necessary to interact with the robot, which can be time consuming and challenging in practice. Model-based policy search addresses this problem by first learning a simulator of the robot's dynamics from data. Subsequently, the simulator generates trajectories that are used for policy learning. For both model-free and model-based policy search methods, we review their respective properties and their applicability to robotic systems.",
"We provide a general approach for learning robotic motor skills from human demonstration. To represent an observed movement, a non-linear differential equation is learned such that it reproduces this movement. Based on this representation, we build a library of movements by labeling each recorded movement according to task and context (e.g., grasping, placing, and releasing). Our differential equation is formulated such that generalization can be achieved simply by adapting a start and a goal parameter in the equation to the desired position values of a movement. For object manipulation, we present how our framework extends to the control of gripper orientation and finger position. The feasibility of our approach is demonstrated in simulation as well as on the Sarcos dextrous robot arm. The robot learned a pick-and-place operation and a water-serving task and could generalize these tasks to novel situations.",
"Dynamic system-based motor primitives have enabled robots to learn complex tasks ranging from Tennis-swings to locomotion. However, to date there have been only few extensions which have incorporated perceptual coupling to variables of external focus, and, furthermore, these modifications have relied upon handcrafted solutions. Humans learn how to couple their movement primitives with external variables. Clearly, such a solution is needed in robotics. In this paper, we propose an augmented version of the dynamic systems motor primitives which incorporates perceptual coupling to an external variable. The resulting perceptually driven motor primitives include the previous primitives as a special case and can inherit some of their interesting properties. We show that these motor primitives can perform complex tasks such a Ball-in-a-Cup or Kendama task even with large variances in the initial conditions where a skilled human player would be challenged. For doing so, we initialize the motor primitives in the traditional way by imitation learning without perceptual coupling. Subsequently, we improve the motor primitives using a novel reinforcement learning method which is particularly well-suited for motor primitives.",
"Over the last years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to get away from precise, but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials-from scratch. Our manipulator is inaccurate and provides no pose feedback. For learning a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain by exploiting the sequential structure of the stacking task.",
"In this paper we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm with CPG by reducing the dimensionality of the state space used for learning. We demonstrate that an appropriate feedback controller can be acquired within a few thousand trials by numerical simulations and the controller obtained in numerical simulation achieves stable walking with a physical robot in the real world. Numerical simulations and hardware experiments evaluate the walking velocity and stability. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for a hardware robot to improve the controller within 200 iterations.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"This paper presents a machine learning approach to optimizing a quadrupedal trot gait for forward speed. Given a parameterized walk designed for a specific robot, we propose using a form of policy gradient reinforcement learning to automatically search the set of possible parameters with the goal of finding the fastest possible walk. We implement and test our approach on a commercially available quadrupedal robot platform, namely the Sony Aibo robot. After about three hours of learning, all on the physical robots and with no human intervention other than to change the batteries, the robots achieved a gait faster than any previously known gait known for the Aibo, significantly outperforming a variety of existing hand-coded and learned solutions.",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"",
"One of the central tasks for a household robot is searching for specific objects. It does not only require localizing the target object but also identifying promising search locations in the scene if the target is not immediately visible. As computation time and hardware resources are usually limited in robotics, it is desirable to avoid expensive visual processing steps that are exhaustively applied over the entire image. The human visual system can quickly select those image locations that have to be processed in detail for a given task. This allows us to cope with huge amounts of information and to efficiently deploy the limited capacities of our visual system. In this paper, we therefore propose to use human fixation data to train a top-down saliency model that predicts relevant image locations when searching for specific objects. We show that the learned model can successfully prune bounding box proposals without rejecting the ground truth object locations. In this aspect, the proposed model outperforms a model that is trained only on the ground truth segmentations of the target object instead of fixation data.",
"Tactile sensing is a fundamental component of object manipulation and tool handling skills. With robots entering unstructured environments, tactile feedback also becomes an important ability for robot manipulation. In this work, we explore how a robot can learn to use tactile sensing in object manipulation tasks. We first address the problem of in-hand object localization and adapt three pose estimation algorithms from computer vision. Second, we employ dynamic motor primitives to learn robot movements from human demonstrations and record desired tactile signal trajectories. Then, we add tactile feedback to the control loop and apply relative entropy policy search to learn the parameters of the tactile coupling. Additionally, we show how the learning of tactile feedback can be performed more efficiently by reducing the dimensionality of the tactile information through spectral clustering and principal component analysis. Our approach is implemented on a real robot, which learns to perform a scraping task with a spatula in an altered environment."
]
} |
1610.00529 | 2951456741 | We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies. | Guided policy search @cite_23 (GPS) seeks to address this challenge by decomposing policy search into trajectory optimization and supervised learning of a general high-dimensional policy. GPS was applied to various robotic tasks @cite_4 @cite_1 @cite_16 . However, the use of a model-based "teacher" to supervise the policy has placed considerable limitations on such methods. 
Most prior work has used LQR with fitted local time-varying linear models as the teacher @cite_4 , which can handle unknown dynamics, but struggles with problems that are inherently discontinuous, such as door opening: if the robot misses the door handle, it is difficult for a smooth LQR-based optimizer to understand how to improve the behavior. We extend GPS to tasks with highly discontinuous dynamics and non-differentiable costs by replacing the model-based LQR supervisor with PI @math , a model-free reinforcement learning algorithm based on stochastic optimal control @cite_11 . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_23",
"@cite_16",
"@cite_11"
],
"mid": [
"2121103318",
"2952842465",
"2104733512",
"2964161785",
"1925816294"
],
"abstract": [
"We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.",
"Autonomous learning of object manipulation skills can enable robots to acquire rich behavioral repertoires that scale to the variety of objects found in the real world. However, current motion skill learning methods typically restrict the behavior to a compact, low-dimensional representation, limiting its expressiveness and generality. In this paper, we extend a recently developed policy search method la-lnnpg-14 and use it to learn a range of dynamic manipulation behaviors with highly general policy representations, without using known models or example demonstrations. Our approach learns a set of trajectories for the desired motion skill by using iteratively refitted time-varying linear models, and then unifies these trajectories into a single control policy that can generalize to new situations. To enable this method to run on a real robot, we introduce several improvements that reduce the sample count and automate parameter selection. We show that our method can acquire fast, fluent behaviors after only minutes of interaction time, and can learn robust controllers for complex tasks, including putting together a toy airplane, stacking tight-fitting lego blocks, placing wooden rings onto tight-fitting pegs, inserting a shoe tree into a shoe, and screwing bottle caps onto bottles.",
"Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs."
]
} |
1610.00529 | 2951456741 | We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies. | Stochastic policy search methods can be improved by limiting the information loss between updates, by means of a KL-divergence constraint @cite_19 . In this work, we similarly constrain the KL-divergence between PI @math updates, in a framework similar to @cite_33 and @cite_26 . In @cite_22 , the authors propose to learn high-dimensional sensor-based policies through supervised learning using the relative entropy method to reweight state-action samples of the policy. 
While the goal of learning high-dimensional nonlinear policies is similar to our work, we optimize the individual instance trajectories separately, and then combine them into a single policy with supervised learning. As shown in our simulated experimental evaluation, this substantially improves the effectiveness of the method and allows us to tackle more complex tasks. We also extend guided policy search by choosing new random instances at each iteration, based on the on-policy sampling technique proposed in @cite_9 , which substantially improves the generalization of the resulting policy. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_33",
"@cite_9",
"@cite_19"
],
"mid": [
"91905023",
"314779054",
"2001095967",
"2963630259",
"1499669280"
],
"abstract": [
"Path integral (PI) control defines a general class of control problems for which the optimal control computation is equivalent to an inference problem that can be solved by evaluation of a path integral over state trajectories. However, this potential is mostly unused in real-world problems because of two main limitations: first, current approaches can typically only be applied to learn open-loop controllers and second, current sampling procedures are inefficient and not scalable to high dimensional systems. We introduce the efficient Path Integral Relative-Entropy Policy Search (PI-REPS) algorithm for learning feedback policies with PI control. Our algorithm is inspired by information theoretic policy updates that are often used in policy search. We use these updates to approximate the state trajectory distribution that is known to be optimal from the PI control theory. Our approach allows for a principled treatment of different sampling distributions and can be used to estimate many types of parametric or non-parametric feedback controllers. We show that PI-REPS significantly outperforms current methods and is able to solve tasks that are out of reach for current methods.",
"Learning complex control policies from highdimensional sensory input is a challenge for reinforcement learning algorithms. Kernel methods that approximate values functions or transition models can address this problem. Yet, many current approaches rely on instable greedy maximization. In this paper, we develop a policy search algorithm that integrates robust policy updates and kernel embeddings. Our method can learn nonparametric control policies for innite horizon continuous MDPs with high-dimensional sensory representations. We show that our method outperforms related approaches, and that our algorithm can learn an underpowered swing-up task task directly from highdimensional image data.",
"Many Stochastic Optimal Control (SOC) approaches rely on samples to either obtain an estimate of the value function or a linearisation of the underlying system model. However, these approaches typically neglect the fact that the accuracy of the policy update depends on the closeness of the resulting trajectory distribution to these samples. The greedy operator does not consider such closeness constraint to the samples. Hence, the greedy operator can lead to oscillations or even instabilities in the policy updates. Such undesired behaviour is likely to result in an inferior performance of the estimated policy. We reuse inspiration from the reinforcement learning community and relax the greedy operator used in SOC with an information theoretic bound that limits the ‘distance’ of two subsequent trajectory distributions in a policy update. The introduced bound ensures a smooth and stable policy update. Our method is also well suited for model-based reinforcement learning, where we estimate the system dynamics model from data. As this model is likely to be inaccurate, it might be dangerous to exploit the model greedily. Instead, our bound ensures that we generate new data in the vicinity of the current data, such that we can improve our estimate of the system dynamics model. We show that our approach outperforms several state of the art approaches on challenging simulated robot control tasks.",
"Guided policy search algorithms can be used to optimize complex nonlinear policies, such as deep neural networks, without directly computing policy gradients in the high-dimensional parameter space. Instead, these methods use supervised learning to train the policy to mimic a “teacher” algorithm, such as a trajectory optimizer or a trajectory-centric reinforcement learning method. Guided policy search methods provide asymptotic local convergence guarantees by construction, but it is not clear how much the policy improves within a small, finite number of iterations. We show that guided policy search algorithms can be interpreted as an approximate variant of mirror descent, where the projection onto the constraint manifold is not exact. We derive a new guided policy search algorithm that is simpler and provides appealing improvement and convergence guarantees in simplified convex and linear settings, and show that in the more general nonlinear setting, the error in the projection step can be bounded. We provide empirical results on several simulated robotic manipulation tasks that show that our method is stable and achieves similar or better performance when compared to prior guided policy search methods, with a simpler formulation and fewer hyperparameters.",
"Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems."
]
} |
1610.00529 | 2951456741 | We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies. | Our deep neural network policies directly use visual features from the robot's camera to perform the task. The features are learned automatically on a pose detection proxy task, using an improved version of the spatial feature points architecture @cite_16 based on convolutional neural networks (CNNs) @cite_2 @cite_29 . In @cite_0 , visual and control layers of a racing video game policy are learned separately using neuroevolution. Using pre-trained visual features enables efficient learning of smaller controllers with RL. 
In our work, visual features are pre-trained based on object and robot end-effector poses. By combining visual pre-training, initialization from kinesthetic demonstrations, and global policy sampling with PI @math , we are able to learn complex visuomotor behaviors for contact-rich tasks with discontinuous dynamics, such as door opening and pick-and-place. | {
"cite_N": [
"@cite_0",
"@cite_29",
"@cite_16",
"@cite_2"
],
"mid": [
"2165776394",
"2076063813",
"2964161785",
"2101926813"
],
"abstract": [
"Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their positions. This network is given a nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consits of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of “C-cells” similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: We do not need any “teacher” during the process of self-organization, and it is only needed to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has become to elicit an output only from one of the C-cell of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape nor in size of the stimulus pattern."
]
} |
1610.00572 | 2529836050 | We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT3, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged in sentences, for a total of about 3.5M tokens per language. Talks have been partitioned in train, development and test sets similarly in all respects to the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark. | To the best of our knowledge, to date the richest collection of publicly available Arabic-Hebrew parallel corpora is part of the OPUS project (opus.lingfil.uu.se); in total, it provides more than 110M tokens per language subdivided into 5 corpora, OpenSubtitles2016 being by far the largest. The OpenSubtitles2016 collection @cite_3 (opus.lingfil.uu.se/OpenSubtitles2016.php) provides parallel subtitles of movies and TV programs made available by the Open multilanguage subtitle database (www.opensubtitles.org). The size of this corpus makes it outstandingly valuable; nevertheless, the translation of such kinds of subtitles is often less literal than in other domains (even TED), likely affecting the accuracy of the fully automatic processing implemented for parallelizing the Arabic and Hebrew subtitles. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2419539795"
],
"abstract": [
"We present a new major release of the OpenSubtitles collection of parallel corpora. The release is compiled from a large database of movie and TV subtitles and includes a total of 1689 bitexts spanning 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in the preprocessing and alignment of the subtitles, such as the automatic correction of OCR errors and the use of meta-data to estimate the quality of each subtitle and score subtitle pairs."
]
} |
1610.00889 | 2529042507 | The growing popularity of mobile and wearable devices with built-in cameras, the bright prospect of camera related applications such as augmented reality and life-logging system, the increased ease of taking and sharing photos, and advances in computer vision techniques have greatly facilitated people's lives in many aspects, but have also inevitably raised people's concerns about visual privacy at the same time. Motivated by recent user studies that people's privacy concerns are dependent on the context, in this paper, we propose Cardea, a context-aware and interactive visual privacy protection framework that enforces privacy protection according to people's privacy preferences. The framework provides people with fine-grained visual privacy protection using: i) personal privacy profiles, with which people can define their context-dependent privacy preferences; and ii) visual indicators: face features, for devices to automatically locate individuals who request privacy protection; and iii) hand gestures, for people to flexibly interact with cameras to temporarily change their privacy preferences. We design and implement the framework consisting of the client app on Android devices and the cloud server. Our evaluation results confirm this framework is practical and effective with 86% overall accuracy, showing promising future for context-aware visual privacy protection from pervasive cameras. | @cite_24 propose an approach in which nearby devices can encrypt data together during recording, utilizing short-range wireless communication to exchange public keys and negotiate an encryption key. Only by obtaining permissions from all of the people who encrypted the recording can one decrypt it. @cite_5 @cite_2 present methods in which third-party applications, such as perceptual and augmented reality applications, have access only to higher-level objects such as a skeleton or a face instead of raw sensor feeds. 
@cite_16 propose a system that gives users control to mark secure regions that the camera has access to, so that cameras are prevented from capturing sensitive information. Unlike the above solutions, our work focuses on protecting bystanders' privacy by respecting their privacy preferences when they are captured in photos. | {
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_16",
"@cite_2"
],
"mid": [
"2142835102",
"2066588645",
"2461714859",
""
],
"abstract": [
"The growing popularity of inexpensive, portable recording devices, such as cellular phone cameras and compact digital audio recorders, presents a significant new threat to privacy. We propose a set of technologies that can be integrated into recording devices to provide stronger, more accurately targeted privacy protections than other legal and technical measures now under consideration. Our design is based on an informed consent principle, which it supports by the use of novel devices and protocols that automate negotiations over consent and ensure appropriate safeguards on recorded data. We define the protocols needed for this purpose and establish their security. We also describe a working prototype implementation that safeguards audio recorded by laptop PCs in a wireless network.",
"Perceptual, \"context-aware\" applications that observe their environment and interact with users via cameras and other sensors are becoming ubiquitous on personal computers, mobile phones, gaming platforms, household robots, and augmented-reality devices. This raises new privacy risks. We describe the design and implementation of DARKLY, a practical privacy protection system for the increasingly common scenario where an untrusted, third-party perceptual application is running on a trusted device. DARKLY is integrated with OpenCV, a popular computer vision library used by such applications to access visual inputs. It deploys multiple privacy protection mechanisms, including access control, algorithmic privacy transforms, and user audit. We evaluate DARKLY on 20 perceptual applications that perform diverse tasks such as image recognition, object tracking, security surveillance, and face detection. These applications run on DARKLY unmodified or with very few modifications and minimal performance overheads vs. native OpenCV. In most cases, privacy enforcement does not reduce the applications' functionality or accuracy. For the rest, we quantify the tradeoff between privacy and utility and demonstrate that utility remains acceptable even with strong privacy protection.",
"Users are increasingly vulnerable to inadvertently leaking sensitive information through cameras. In this paper, we investigate an approach to mitigating the risk of such inadvertent leaks called privacy markers. Privacy markers give users fine-grained control of what visual information an app can access through a device's camera. We present two examples of this approach: PrivateEye, which allows a user to mark regions of a two-dimensional surface as safe to release to an app, and WaveOff, which does the same for three-dimensional objects. We have integrated both systems with Android's camera subsystem. Experiments with our prototype show that a Nexus 5 smartphone can deliver near realtime frame rates while protecting secret information, and a 26-person user study elicited positive feedback on our prototype's speed and ease-of-use.",
""
]
} |
1610.00889 | 2529042507 | The growing popularity of mobile and wearable devices with built-in cameras, the bright prospect of camera related applications such as augmented reality and life-logging system, the increased ease of taking and sharing photos, and advances in computer vision techniques have greatly facilitated people's lives in many aspects, but have also inevitably raised people's concerns about visual privacy at the same time. Motivated by recent user studies that people's privacy concerns are dependent on the context, in this paper, we propose Cardea, a context-aware and interactive visual privacy protection framework that enforces privacy protection according to people's privacy preferences. The framework provides people with fine-grained visual privacy protection using: i) personal privacy profiles, with which people can define their context-dependent privacy preferences; and ii) visual indicators: face features, for devices to automatically locate individuals who request privacy protection; and iii) hand gestures, for people to flexibly interact with cameras to temporarily change their privacy preferences. We design and implement the framework consisting of the client app on Android devices and the cloud server. Our evaluation results confirm this framework is practical and effective with 86% overall accuracy, showing promising future for context-aware visual privacy protection from pervasive cameras. | A recent work of interest, I-Pic @cite_0, allows people to broadcast their privacy preferences and appearance information to nearby devices using BLE. This work can be incorporated into our framework. Furthermore, we specify context elements that have not been considered before, such as scene and presence of others. 
In addition, we provide a convenient mechanism for people to temporarily change their privacy preferences using hand gestures when facing the camera, whereas the data broadcast in I-Pic may not be received by the people taking the images, or may already be outdated when received. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2467391961"
],
"abstract": [
"The ubiquity of portable mobile devices equipped with built-in cameras have led to a transformation in how and when digital images are captured, shared, and archived. Photographs and videos from social gatherings, public events, and even crime scenes are commonplace online. While the spontaneity afforded by these devices have led to new personal and creative outlets, privacy concerns of bystanders (and indeed, in some cases, unwilling subjects) have remained largely unaddressed. We present I-Pic, a trusted software platform that integrates digital capture with user-defined privacy. In I-Pic, users choose alevel of privacy (e.g., image capture allowed or not) based upon social context (e.g., out in public vs. with friends vs. at workplace). Privacy choices of nearby users are advertised via short-range radio, and I-Pic-compliant capture platforms generate edited media to conform to privacy choices of image subjects. I-Pic uses secure multiparty computation to ensure that users' visual features and privacy choices are not revealed publicly, regardless of whether they are the subjects of an image capture. Just as importantly, I-Pic preserves the ease-of-use and spontaneous nature of capture and sharing between trusted users. Our evaluation of I-Pic shows that a practical, energy-efficient system that conforms to the privacy choices of many users within a scene can be built and deployed using current hardware."
]
} |
1610.00574 | 2797819729 | Given a large dataset of binary codes and a binary query point, we address how to efficiently find @math codes in the dataset that yield the largest cosine similarities to the query. The straightforward answer to this problem is to compare the query with all items in the dataset, but this is practical only for small datasets. One potential solution to enhance the search time and achieve sublinear cost is to use a hash table populated with binary codes of the dataset and then look up the nearby buckets to the query to retrieve the nearest neighbors. However, if codes are compared in terms of cosine similarity rather than the Hamming distance, then the main issue is that the order of buckets to probe is not evident. To examine this issue, we first elaborate on the connection between the Hamming distance and the cosine similarity. Doing this allows us to systematically find the probing sequence in the hash table. However, solving the nearest neighbor search with a single table is only practical for short binary codes. To address this issue, we propose the angular multi-index hashing search algorithm which relies on building multiple hash tables on binary code substrings. The proposed search algorithm solves the exact angular @math nearest neighbor problem in a time that is often orders of magnitude faster than the linear scan baseline and even approximation methods. | A classical paradigm to reduce the computational cost of the nearest neighbor search relies on tree-based indexing structures which offer logarithmic query time, @math , on the average. Perhaps the best known example of such techniques is the kd-tree @cite_4 with a worst-case search time of @math . Following @cite_4 , many other tree-based indexing algorithms have been proposed (see @cite_22 for an overview). However, these methods suffer from a phenomenon commonly known as the curse of dimensionality . 
That is, as the number of dimensions increases, these methods quickly degenerate into an exhaustive search. Typically, the kd-tree and its variants are efficient only for fewer than about 20 dimensions @cite_44 @cite_14 . | {
"cite_N": [
"@cite_44",
"@cite_14",
"@cite_4",
"@cite_22"
],
"mid": [
"",
"1541459201",
"2165558283",
"1497953515"
],
"abstract": [
"",
"For similarity search in high-dimensional vector spaces (or ‘HDVSs’), researchers have proposed a number of new methods (or adaptations of existing methods) based, in the main, on data-space partitioning. However, the performance of these methods generally degrades as dimensionality increases. Although this phenomenon-known as the ‘dimensional curse’-is well known, little or no quantitative a.nalysis of the phenomenon is available. In this paper, we provide a detailed analysis of partitioning and clustering techniques for similarity search in HDVSs. We show formally that these methods exhibit linear complexity at high dimensionality, and that existing methods are outperformed on average by a simple sequential scan if the number of dimensions exceeds around 10. Consequently, we come up with an alternative organization based on approximations to make the unavoidable sequential scan as fast as possible. We describe a simple vector approximation scheme, called VA-file, and report on an experimental evaluation of this and of two tree-based index methods (an R*-tree and an X-tree).",
"This paper develops the multidimensional binary search tree (or k -d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k -d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O (log n ); deletion of the root, O ( n ( k -1) k ); deletion of a random node, O (log n ); and optimization (guarantees logarithmic performance of searches), O ( n log n ). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O ( n ( k - t ) k )] and for nearest neighbor queries [empirically observed average running time of O (log n ).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k -d trees could be quite useful in many applications, and examples of potential uses are given.",
"Multidimensional data is data that exists and changes in more than one dimension, by time, or spatially, or both, sometimes dynamically. Think here of tracking hurricane data in order to project the storm's path, for just one example. As spatial and other multidimensional data structures become increasingly important for the applications in game programming, data mining, bioinformatics, and many other areas--including astronomy, geographic information systems, physics, etc., the need for a comprehensive book on the subject is paramount. This book is truly a life's work by the author who is clearly the best person for the job."
]
} |
1610.00574 | 2797819729 | Given a large dataset of binary codes and a binary query point, we address how to efficiently find @math codes in the dataset that yield the largest cosine similarities to the query. The straightforward answer to this problem is to compare the query with all items in the dataset, but this is practical only for small datasets. One potential solution to enhance the search time and achieve sublinear cost is to use a hash table populated with binary codes of the dataset and then look up the nearby buckets to the query to retrieve the nearest neighbors. However, if codes are compared in terms of cosine similarity rather than the Hamming distance, then the main issue is that the order of buckets to probe is not evident. To examine this issue, we first elaborate on the connection between the Hamming distance and the cosine similarity. Doing this allows us to systematically find the probing sequence in the hash table. However, solving the nearest neighbor search with a single table is only practical for short binary codes. To address this issue, we propose the angular multi-index hashing search algorithm which relies on building multiple hash tables on binary code substrings. The proposed search algorithm solves the exact angular @math nearest neighbor problem in a time that is often orders of magnitude faster than the linear scan baseline and even approximation methods. | While Hamming distance is the most popular measure of similarity used to compare binary codes @cite_7 @cite_31 , in some applications, codes are compared in terms of cosine similarities. For example, @cite_10 have developed a binary hashing technique in which the resulting codes are compared in terms of their corresponding angles. Also, in Bag of Words representation, it is common to compare the codes with respect to the cosine similarity measure @cite_32 . 
Researchers have also proposed other distance measures for binary codes such as spherical Hamming distance @cite_34 and Manhattan distance @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_32",
"@cite_31",
"@cite_34",
"@cite_10"
],
"mid": [
"2061441693",
"2071572981",
"2019096529",
"2122196799",
"",
"2221852422"
],
"abstract": [
"Hashing is used to learn binary-code representation for data with expectation of preserving the neighborhood structure in the original feature space. Due to its fast query speed and reduced storage cost, hashing has been widely used for efficient nearest neighbor search in a large variety of applications like text and image retrieval. Most existing hashing methods adopt Hamming distance to measure the similarity (neighborhood) between points in the hashcode space. However, one problem with Hamming distance is that it may destroy the neighborhood structure in the original feature space, which violates the essential goal of hashing. In this paper, Manhattan hashing (MH), which is based on Manhattan distance, is proposed to solve the problem of Hamming distance based hashing. The basic idea of MH is to encode each projected dimension with multiple bits of natural binary code (NBC), based on which the Manhattan distance between points in the hashcode space is calculated for nearest neighbor search. MH can effectively preserve the neighborhood structure in the data to achieve the goal of hashing. To the best of our knowledge, this is the first work to adopt Manhattan distance with NBC for hashing. Experiments on several large-scale image data sets containing up to one million points show that our MH method can significantly outperform other state-of-the-art methods.",
"In this paper we study the problem of finding the approximate nearest neighbor of a query point in the high dimensional space, focusing on the Euclidean space. The earlier approaches use locality-preserving hash functions (that tend to map nearby points to the same value) to construct several hash tables to ensure that the query point hashes to the same bucket as its nearest neighbor in at least one table. Our approach is different - we use one (or a few) hash table and hash several randomly chosen points in the neighborhood of the query point showing that at least one of them will hash to the bucket containing its nearest neighbor. We show that the number of randomly chosen points in the neighborhood of the query point q required depends on the entropy of the hash value h(p) of a random point p at the same distance from q at its nearest neighbor, given q and the locality preserving hash function h chosen randomly from the hash family. Precisely, we show that if the entropy I(h(p)|q, h) = M and g is a bound on the probability that two far-off points will hash to the same bucket, then we can find the approximate nearest neighbor in O(np) time and near linear O(n) space where p = M log(1 g). Alternatively we can build a data structure of size O(n1 (1-p)) to answer queries in O(d) time. By applying this analysis to the locality preserving hash functions in [17, 21, 6] and adjusting the parameters we show that the c nearest neighbor can be computed in time O(np) and near linear space where p a 2.06 c as c becomes large.",
"We constructed a corpus of digitized texts containing about 4 of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of ‘culturomics,’ focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. Culturomics extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities.",
"There has been growing interest in mapping image data onto compact binary codes for fast near neighbor search in vision applications. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used in this way, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact K-nearest neighbor search in Hamming space. The algorithm is straightforward to implement, storage efficient, and it has sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speed-ups over a linear scan baseline and for datasets with up to one billion items, 64- or 128-bit codes, and search radii up to 25 bits.",
"",
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods."
]
} |
1610.00574 | 2797819729 | Given a large dataset of binary codes and a binary query point, we address how to efficiently find @math codes in the dataset that yield the largest cosine similarities to the query. The straightforward answer to this problem is to compare the query with all items in the dataset, but this is practical only for small datasets. One potential solution to enhance the search time and achieve sublinear cost is to use a hash table populated with binary codes of the dataset and then look up the nearby buckets to the query to retrieve the nearest neighbors. However, if codes are compared in terms of cosine similarity rather than the Hamming distance, then the main issue is that the order of buckets to probe is not evident. To examine this issue, we first elaborate on the connection between the Hamming distance and the cosine similarity. Doing this allows us to systematically find the probing sequence in the hash table. However, solving the nearest neighbor search with a single table is only practical for short binary codes. To address this issue, we propose the angular multi-index hashing search algorithm which relies on building multiple hash tables on binary code substrings. The proposed search algorithm solves the exact angular @math nearest neighbor problem in a time that is often orders of magnitude faster than the linear scan baseline and even approximation methods. | The idea of using hash tables to avoid exhaustive search in (ANN) techniques has been studied in recent years. @cite_11 proposed an algorithm for partitioning a pool of hash functions into sets such that the hash functions of each set are independent of each other. Then, they use each set of hash functions to form a separate hash table. @cite_5 have proposed the inverted multi-index technique to solve the ANN problem for problems in which the distance between data points are estimated with codewords of multiple codebooks. 
This technique creates two hash tables by decomposing the data vectors into two disjoint substrings and hashing each substring into one of the tables. The query is similarly decomposed into two substrings, and the search is performed in each hash table to find the corresponding nearest neighbors. More recently, @cite_52 have used multiple hash tables to reduce the search time, where the distances between items are approximated by codewords of multiple codebooks. @cite_41 have also proposed a non-exhaustive search algorithm based on the branch-and-bound technique. | {
"cite_N": [
"@cite_41",
"@cite_5",
"@cite_52",
"@cite_11"
],
"mid": [
"2158578794",
"1854214752",
"",
"118147013"
],
"abstract": [
"Approximate nearest neighbor search (ANNS) is a basic and important technique used in many tasks such as object recognition. It involves two processes: selecting nearest neighbor candidates and performing a brute-force search of these candidates. Only the former though has scope for improvement. In most existing methods, it approximates the space by quantization. It then calculates all the distances between the query and all the quantized values (e.g., clusters or bit sequences), and selects a fixed number of candidates close to the query. The performance of the method is evaluated based on accuracy as a function of the number of candidates. This evaluation seems rational but poses a serious problem; it ignores the computational cost of the process of selection. In this paper, we propose a new ANNS method that takes into account costs in the selection process. Whereas existing methods employ computationally expensive techniques such as comparative sort and heap, the proposed method does not. This realizes a significantly more efficient search. We have succeeded in reducing computation times by one-third compared with the state-of-theart on an experiment using 100 million SIFT features.",
"The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a mathod for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.",
"",
"Recent years have witnessed the success of hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually employed to retrieve more desired results from all hit buckets of each table. However, there are rare works studying the unified approach to constructing multiple informative hash tables except the widely used random way. In this paper, we regard the table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore the reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. The construction method is general and compatible with different types of hashing algorithms using different feature spaces and or parameter settings. Extensive experiments on two large-scale benchmarks demonstrate that the proposed method outperforms both naive construction method and state-of-the-art hashing algorithms, with up to 65.93 accuracy gains."
]
} |
1610.00660 | 2528022998 | Face Recognition (FR) has been the interest to several researchers over the past few decades due to its passive nature of biometric authentication. Despite high accuracy achieved by face recognition algorithms under controlled conditions, achieving the same performance for face images obtained in surveillance scenarios, is a major hurdle. Some attempts have been made to super-resolve the low-resolution face images and improve the contrast, without considerable degree of success. The proposed technique in this paper tries to cope with the very low resolution and low contrast face images obtained from surveillance cameras, for FR under surveillance conditions. For Support Vector Machine classification, the selection of appropriate kernel has been a widely discussed issue in the research community. In this paper, we propose a novel kernel selection technique termed as MFKL (Multi-Feature Kernel Learning) to obtain the best feature-kernel pairing. Our proposed technique employs a effective kernel selection by Multiple Kernel Learning (MKL) method, to choose the optimal kernel to be used along with unsupervised domain adaptation method in the Reproducing Kernel Hilbert Space (RKHS), for a solution to the problem. Rigorous experimentation has been performed on three real-world surveillance face datasets : FR , SCface and ChokePoint. Results have been shown using Rank-1 Recognition Accuracy, ROC and CMC measures. Our proposed method outperforms all other recent state-of-the-art techniques by a considerable margin. | Recently, face recognition research in real-life surveillance has become very popular. For high data transmission speed and easy data storage, surveillance cameras generally produce images in low resolution, and face images captured directly by surveillance cameras are usually very small. Besides, images taken by surveillance cameras are generally with noises and corruptions, due to the uncontrolled circumstances and distances. 
@cite_2 proposed a super-resolution approach to improve recognition performance on very low-resolution face images. They employ a minimum mean square error estimator to learn the relationship between low- and high-resolution training pairs, and add a further discriminative constraint to the learning approach using the class label information. @cite_17 proposed a matching algorithm based on Multidimensional Scaling (MDS). In their approach, both the low- and high-resolution training pairs are projected into a kernel space. The transformation relationship is then learned in the kernel space through an iterative majorization algorithm and used to match low-resolution test faces to high-resolution gallery faces. Similarly, @cite_48 proposed the Coupled Kernel Embedding approach, in which the low- and high-resolution face images are mapped onto different kernel spaces and then transformed into a learned subspace for recognition. | {
"cite_N": [
"@cite_48",
"@cite_17",
"@cite_2"
],
"mid": [
"1998594584",
"2055444136",
"2054515210"
],
"abstract": [
"Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, thus it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Face recognition based on traditional super-resolution (SR) methods usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called coupled kernel embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and the (semi)positively definite properties are preserved for optimization. CKE addresses the problem of comparing multimodal data that are difficult for conventional methods in practice due to the lack of an efficient similarity measure. Particularly, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a uniform optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by their kernel Gram matrices in the LR and HR spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.",
"Face recognition performance degrades considerably when the input images are of Low Resolution (LR), as is often the case for images taken by surveillance cameras or from a large distance. In this paper, we propose a novel approach for matching low-resolution probe images with higher resolution gallery images, which are often available during enrollment, using Multidimensional Scaling (MDS). The ideal scenario is when both the probe and gallery images are of high enough resolution to discriminate across different subjects. The proposed method simultaneously embeds the low-resolution probe images and the high-resolution gallery images in a common space such that the distance between them in the transformed space approximates the distance had both the images been of high resolution. The two mappings are learned simultaneously from high-resolution training images using an iterative majorization algorithm. Extensive evaluation of the proposed approach on the Multi-PIE data set with probe image resolution as low as 8 × 6 pixels illustrates the usefulness of the method. We show that the proposed approach improves the matching performance significantly as compared to performing matching in the low-resolution domain or using super-resolution techniques to obtain a higher resolution test image prior to recognition. Experiments on low-resolution surveillance images from the Surveillance Cameras Face Database further highlight the effectiveness of the approach.",
"This paper addresses the very low resolution (VLR) problem in face recognition in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand of surveillance camera-based applications, the VLR problem happens in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on the VLR face image. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such a VLR face image. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms in public face databases."
]
} |
1610.00660 | 2528022998 | Face Recognition (FR) has been the interest to several researchers over the past few decades due to its passive nature of biometric authentication. Despite high accuracy achieved by face recognition algorithms under controlled conditions, achieving the same performance for face images obtained in surveillance scenarios, is a major hurdle. Some attempts have been made to super-resolve the low-resolution face images and improve the contrast, without considerable degree of success. The proposed technique in this paper tries to cope with the very low resolution and low contrast face images obtained from surveillance cameras, for FR under surveillance conditions. For Support Vector Machine classification, the selection of appropriate kernel has been a widely discussed issue in the research community. In this paper, we propose a novel kernel selection technique termed as MFKL (Multi-Feature Kernel Learning) to obtain the best feature-kernel pairing. Our proposed technique employs a effective kernel selection by Multiple Kernel Learning (MKL) method, to choose the optimal kernel to be used along with unsupervised domain adaptation method in the Reproducing Kernel Hilbert Space (RKHS), for a solution to the problem. Rigorous experimentation has been performed on three real-world surveillance face datasets : FR , SCface and ChokePoint. Results have been shown using Rank-1 Recognition Accuracy, ROC and CMC measures. Our proposed method outperforms all other recent state-of-the-art techniques by a considerable margin. | in @cite_43 proposed an approach with the combination of partial restoration (using super-resolution) of probe samples and degradation of gallery samples. The authors also proposed a outdoor surveillance dataset, FR @cite_43 for evaluating their approach. 
In our previous work @cite_18 , we aimed to bridge the gap in resolution and contrast by applying super-resolution and contrast stretching to the probe samples, and by degrading the gallery samples through downsampling followed by Gaussian blurring. In addition to these measures, we also proposed a DA technique based on an eigen-domain transformation to make the distributions of the features obtained from the gallery and probe samples identical. | {
"cite_N": [
"@cite_43",
"@cite_18"
],
"mid": [
"107366105",
"1964857432"
],
"abstract": [
"Face images obtained by an outdoor surveillance camera, are often confronted with severe degradations (e.g., low-resolution, lowcontrast, blur and noise). This significantly limits the performance of face recognition (FR) systems. This paper presents a framework to overcome the degradation in images obtained by an outdoor surveillance camera, to improve the performance of FR. We have defined a measure that is based on the difference in intensity histograms of face images, to estimate the amount of degradation. In the past, super-resolution techniques have been proposed to increase the image resolution for face recognition. In this work, we attempt a combination of partial restoration (using superresolution, interpolation etc.) of probe samples (long distance shots of outdoor) and simulated degradation of gallery samples (indoor shots). Due to the unavailability of any benchmark face database with gallery and probe images, we have built our own database and conducted experiments on a realistic surveillance face database. PCA and FLDA have been used as baseline face recognition classifiers. The aim is to illustrate the effectiveness of our proposed method of compensating the degradation in surveillance data, rather than designing a specific classifier space suited for degraded test probes. The efficiency of the method is shown by improvement in the face classification accuracy, while comparing results obtained separately using training with acquired indoor gallery samples and then testing with the outdoor probes.",
"Face Recognition (FR) in surveillance scenarios has attracted the attention of researchers over the last few years. The bottleneck as a large gap in both resolution and contrast between training (high-resolution gallery) and testing (degraded, low quality probes) sets, must be overcome using efficient statistical learning methods. In this paper, we propose a Bag-of-Words (BOW) based approach for face recognition combined with Domain Adaptation (DA), to overcome this challenging task of FR in degraded conditions. The dictionary of BOW is formed using dense-SIFT features, using an adaptive spatially varying density. The sampling of the keypoints is denser in the discriminative parts of the face, while it is loosely sampled at some less-interesting (pre-decided) zones of the face. FR using BOW-based face representation is made more efficient using an unsupervised method of DA. Proposed method of DA considers the training set to be the source and the test set to be the target domains. Transformation from source to target is estimated using eigen-analysis of the BOW-based features, which is the novelty and contribution of our proposed work on FR for surveillance applications. Results on the two-real world surveillance face datasets shows the efficiency of the proposed method using ROC and CMC measures."
]
} |
1610.00030 | 2953029562 | This paper presents a number of experiments to model changes in a historical Portuguese corpus composed of literary texts for the purpose of temporal text classification. Algorithms were trained to classify texts with respect to their publication date, taking into account lexical variation represented as word n-grams, and morphosyntactic variation represented by part-of-speech (POS) distribution. We report results of 99.8% accuracy using word unigram features with a Support Vector Machines classifier to predict the publication date of documents in time intervals of both one century and half a century. A feature analysis is performed to investigate the most informative features for this task and how they are linked to language change. | The most recent initiative on temporal text classification is the SemEval 2015 Task 7 'Diachronic Text Evaluation' (DTE). Results and methods are described in detail in the shared task report @cite_0 . In this shared task the organizers proposed three sub-tasks, two of which consisted of temporal text classification, and a third dealing with the recognition of time-specific phrases. For this task, the organizers compiled and released a test set containing English journalistic texts from 1700 to 2010. Texts were labeled with their approximate publication date in coarse, medium and fine-grained intervals representing six, twelve and twenty years respectively. The task proved to be a very challenging one, and the only team to participate in all three sub-tasks was the IXA team @cite_23 , who used external resources such as Google N-grams and Wikipedia Entity Linking to accomplish the task. The best-performing system in the DTE task was the UCD team @cite_16 , who achieved 54.2 | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_23"
],
"mid": [
"2105286768",
"2148763605",
"2150954628"
],
"abstract": [
"In this paper we describe a novel task, namely the Diachronic Text Evaluation task. A corpus of snippets which contain relevant information for the time when the text was created is extracted from a large collection of newspapers published between 1700 and 2010. The task, subdivided into three subtasks, requires the automatic system to identify the time interval when the piece of news was written. The subtasks concern specific types of information that might be available in news. The intervals come in three grades: fine, medium and coarse, according to their length. The systems participating in the tasks have proved that this is a doable task with very interesting possible continuations.",
"In this paper we introduce the novel task of \"word epoch disambiguation,\" defined as the problem of identifying changes in word usage over time. Through experiments run using word usage examples collected from three major periods of time (1800, 1900, 2000), we show that the task is feasible, and significant differences can be observed between occurrences of words in different periods of time.",
"This paper presents our contribution to the SemEval-2015 Task 7. The task was subdivided into three subtasks that consisted of automatically identifying the time period when a piece of news was written (1,2) as well as automatically determining whether a specific phrase in a sentence is relevant or not for a given period of time (3). Our system tackles the resolution of all three subtasks. With this purpose in mind multiple approaches are undertaken that use resources such as Wikipedia or Google NGrams. Final results are obtained by combining the output from all approaches. The texts used for the task are written in English and range from the years 1700 to 2000."
]
} |
1610.00031 | 2949952663 | We present an analysis of the performance of machine learning classifiers on discriminating between similar languages and language varieties. We carried out a number of experiments using the results of the two editions of the Discriminating between Similar Languages (DSL) shared task. We investigate the progress made between the two tasks, estimate an upper bound on possible performance using ensemble and oracle combination, and provide learning curves to help us understand which languages are more challenging. A number of difficult sentences are identified and investigated further with human annotation. | The first edition of the DSL task was organized in 2014 within the scope of the workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial) co-located with COLING. The organizers compiled and released a new dataset for this purpose, which they claim to be the first resource of its kind @cite_32 . The dataset is entitled , or DSLCC, and it includes short excerpts from journalistic texts from previously released corpora and repository. See for a complete list of sources. Texts in the DSLCC v. 1.0 were written in thirteen languages or language varieties and divided into the following six groups: Group A (Bosnian, Croatian, Serbian), Group B (Indonesian, Malay), Group C (Czech, Slovak), Group D (Brazilian Portuguese, European Portuguese), Group E (Peninsular Spanish, Argentine Spanish), and Group F (American English, British English) @cite_17 . There were many cases of republication (e.g. British texts republished by an American newspaper and tagged as American by the original sources) that made the task for this language group unfeasible. | {
"cite_N": [
"@cite_32",
"@cite_17"
],
"mid": [
"244375653",
"2162019804"
],
"abstract": [
"This paper presents the compilation of the DSL corpus collection created for the DSL (Discriminating Similar Languages) shared task to be held at the VarDial workshop at COLING 2014. The DSL corpus collection was merged from three comparable corpora to provide a suitable dataset for automatic classification to discriminate similar languages and language varieties. Along with the description of the DSL corpus collection we also present results of baseline discrimination experiments reporting performance of up to 87.4% accuracy.",
"This paper summarizes the methods, results and findings of the Discriminating between Similar Languages (DSL) shared task 2014. The shared task provided data from 13 different languages and varieties divided into 6 groups. Participants were required to train their systems to discriminate between languages on a training and development set containing 20,000 sentences from each language (closed submission) and/or any other dataset (open submission). One month later, a test set containing 1,000 unidentified instances per language was released for evaluation. The DSL shared task received 22 inscriptions and 8 final submissions. The best system obtained 95.7% average accuracy."
]
} |
1610.00031 | 2949952663 | We present an analysis of the performance of machine learning classifiers on discriminating between similar languages and language varieties. We carried out a number of experiments using the results of the two editions of the Discriminating between Similar Languages (DSL) shared task. We investigate the progress made between the two tasks, estimate an upper bound on possible performance using ensemble and oracle combination, and provide learning curves to help us understand which languages are more challenging. A number of difficult sentences are identified and investigated further with human annotation. | In the closed submission track the best performance was obtained by the NRC-CNRC @cite_8 team, which used a two-step classification approach to predict first the language group of the text, and subsequently the language. Both NRC-CNRC @cite_8 and QMUL @cite_6 (ranked 5th) used linear support vector machines (SVM) classifiers with words and characters as features. | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2252123100",
"2251071163"
],
"abstract": [
"This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http: creativecommons.org licenses by 4.0",
"We describe the system built by the National Research Council Canada for the “Discriminating between similar languages” (DSL) shared task. Our system uses various statistical classifiers and makes predictions based on a two-stage process: we first predict the language group, then discriminate between languages or variants within the group. Language groups are predicted using a generative classifier with 99.99% accuracy on the five target groups. Within each group (except English), we use a voting combination of discriminative classifiers trained on a variety of feature spaces, achieving an average accuracy of 95.71%, with per-group accuracy between 90.95% and 100% depending on the group. This approach turns out to reach the best performance among all systems submitted to the open and closed tasks."
]
} |
1610.00031 | 2949952663 | We present an analysis of the performance of machine learning classifiers on discriminating between similar languages and language varieties. We carried out a number of experiments using the results of the two editions of the Discriminating between Similar Languages (DSL) shared task. We investigate the progress made between the two tasks, estimate an upper bound on possible performance using ensemble and oracle combination, and provide learning curves to help us understand which languages are more challenging. A number of difficult sentences are identified and investigated further with human annotation. | Two teams used information gain to estimate the best features for classification, UMich @cite_24 and UniMelb-NLP @cite_25 . These two teams were also the only teams which compiled and used additional training material to compete in the open submission track. As can be seen in Table 1, the performance of the open submissions was worse than that of the closed submissions. Accuracy dropped from 93.2 The RAE team @cite_13 proposed an approach based on 'white lists' of words used exclusively in a given language or language variety, and their closed submission ranked 2nd. | {
"cite_N": [
"@cite_24",
"@cite_13",
"@cite_25"
],
"mid": [
"2097227294",
"2250662591",
"2250877327"
],
"abstract": [
"Language identification is a simple problem that becomes much more difficult when its usual assumptions are broken. In this paper we consider the task of classifying short segments of text in closely-related languages for the Discriminating Similar Languages shared task, which is broken into six subtasks, (A) Bosnian, Croatian, and Serbian, (B) Indonesian and Malay, (C) Czech and Slovak, (D) Brazilian and European Portuguese, (E) Argentinian and Peninsular Spanish, and (F) American and British English. We consider a number of different methods to boost classification performance, such as feature selection and data filtering, but we ultimately find that",
"DSLRAE is a hierarchical classifier for similar written languages and varieties based on maximum-entropy (maxent) classifiers. In the first level, the text is classified into a language group using a simple token-based maxent classifier. At the second level, a group-specific maxent classifier is applied to classify the text as one of the languages or varieties within the previously identified group. For each group of languages, the classifier uses a different kind and combination of knowledge-poor features: token or character n-grams and ‘white lists’ of tokens. Features were selected according to the results of applying ten-fold cross-validation over the training dataset. The system presented in this article has been ranked second in the Discriminating Similar Language (DSL) shared task co-located within the VarDial Workshop at COLING 2014 (, 2014).",
"The Discriminating between Similar Languages (DSL) shared task at VarDial challenged participants to build an automatic language identification system to discriminate between 13 languages in 6 groups of highly-similar languages (or national varieties of the same language). In this paper, we describe the submissions made by team UniMelb-NLP, which took part in both the closed and open categories. We present the text representations and modeling techniques used, including cross-lingual POS tagging as well as fine-grained tags extracted from a deep grammar of English, and discuss additional data we collected for the open submissions, utilizing custombuilt web corpora based on top-level domains as well as existing corpora."
]
} |
1609.09756 | 2529461036 | Individual neighborhoods within large cities can benefit from independent analysis of public data in the context of ongoing efforts to improve the community. Yet existing tools for public data analysis and visualization are often mismatched to community needs, for reasons including geographic granularity that does not correspond to community boundaries, siloed data sets, inaccurate assumptions about data literacy, and limited user input in design and implementation phases. In Atlanta this need is being addressed through a Data Dashboard developed under the auspices of the Westside Communities Alliance (WCA), a partnership between Georgia Tech and community stakeholders. In this paper we present an interactive analytic and visualization tool for public safety data within the WCA Data Dashboard. We describe a human-centered approach to understand the needs of users and to build accessible mapping tools for visualization and analysis. The tools include a variety of overlays that allow users to spatially correlate features of the built environment, such as vacant properties with criminal activity as well as crime prevention efforts. We are in the final stages of developing the first version of the tool, with plans for a public release in fall of 2016. | The idea of talking to users to elicit their needs is not a new one, and has been formalized into a general set of principles under Human User Centered Design, User Driven Development, and Participatory Design @cite_2 . While the specific methods employed depend on the domain and the kinds of users, the general idea is to talk with community members early in the design process to determine their needs via focus groups, interviews, or site visits. User-centered design is well-suited for PPGIS, and provides a framework for establishing the needs of community members in learning about and utilizing GIS within the context of neighborhood planning @cite_3 . 
The WCA has been working with the Westside community for years, and is well versed with the needs of the community. Our weekly meetings with them helped us better understand the people and data we were working with and also prioritize the features we hoped to include. | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2054806307",
"1274426204"
],
"abstract": [
"Public Participation GIS (PPGIS) is a field of research that, among other things, focuses on the use of GIS by non-experts and occasional users. These users tend to have a diverse range of computer literacy, world views, cultural backgrounds and knowledge. These aspects require that the systems used within PPGIS are accessible and easy to use. Human-Computer Interaction (HCI) and the related usability evaluation techniques focus on how to make computer systems more accessible, while focusing on user needs and requirements. Thus, the synergy between PPGIS and HCI seems natural. In this paper, we discuss the aspects of this synergy, building on our experience from three workshops. We demonstrate how usability evaluation can contribute to PPGIS research, and how PPGIS research can contribute to the HCI aspects of GIS in general. We conclude this paper with a call for a user-centred design approach to PPGIS projects.",
"Preface. 1. Participatory Design: An introduction by Toni Robertson and Jesper Simonsen Section I: Participatory Design - Contributions and Challenges 2. Heritage: Having a Say by Finn Kensing and Joan Greenbaum 3. Design: Design Matters in Participatory Design by Liam Bannon and Pelle Ehn 4. Ethics: Engagement, Representation and Politics-In-Action by Toni Robertson and Ina Wagner 5. Ethnography: Positioning Ethnographic within Participatory Design by Jeanette Blomberg and Helena Karasti 6. Methods: Organizing Principles and General Guidelines for Participatory Design Projects by Tone Bratteteig, Keld Bodker, Yvonne Dittrich, Preben Mogensen, and Jesper Simonsen 7. Tools and Techniques: Ways to Engage Telling, Making and Enacting by Eva Brandt, Thomas Binder and Elizabeth Sanders 8. Communities: Participatory Design For, With, and By Communities by Carl DiSalvo, Andrew Clement and Volkmar Pipek Section II: Outstanding Applications of Participatory Design 9. Global Fund for Women: Integrating Participatory Design into everyday work at a global non-profit by Randy Trigg and Karen Ishimaru 10. Health Information Systems Program: Participatory design within the HISP network by Jorn Braa and Sundeep Sahay 11. ACTION for Health: Influencing Technology Design, Practice and Policy Through Participatory Design by Ellen Balka"
]
} |
1609.09864 | 2529685980 | Structured sparse optimization is an important and challenging problem for analyzing high-dimensional data in a variety of applications such as bioinformatics, medical imaging, social networks, and astronomy. Although a number of structured sparsity models have been explored, such as trees, groups, clusters, and paths, connected subgraphs have been rarely explored in the current literature. One of the main technical challenges is that there is no structured sparsity-inducing norm that can directly model the space of connected subgraphs, and there is no exact implementation of a projection oracle for connected subgraphs due to its NP-hardness. In this paper, we explore efficient approximate projection oracles for connected subgraphs, and propose two new efficient algorithms, namely, Graph-IHT and Graph-GHTP, to optimize a generic nonlinear objective function subject to connectivity constraint on the support of the variables. Our proposed algorithms enjoy strong guarantees analogous to several current methods for sparsity-constrained optimization, such as Projected Gradient Descent (PGD), Approximate Model Iterative Hard Thresholding (AM-IHT), and Gradient Hard Thresholding Pursuit (GHTP) with respect to convergence rate and approximation accuracy. We apply our proposed algorithms to optimize several well-known graph scan statistics in several applications of connected subgraph detection as a case study, and the experimental results demonstrate that our proposed algorithms outperform state-of-the-art methods. | The methods in this category have been briefly reviewed in the introduction section. The most relevant work is by @cite_6 . The authors present -, a variant of @cite_14 , for compressive sensing and linear regression problems based on head and tail approximations of @math . | {
"cite_N": [
"@cite_14",
"@cite_6"
],
"mid": [
"2289917018",
"2577904587"
],
"abstract": [
"Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log^2 N), where N is the length of the signal.",
"We introduce a framework for sparsity structures defined via graphs. Our approach is flexible and generalizes several previously studied sparsity models. Moreover, we provide efficient projection algorithms for our sparsity model that run in nearly-linear time. In the context of sparse recovery, our framework achieves an information-theoretically optimal sample complexity for a wide range of parameters. We complement our theoretical analysis with experiments showing that our algorithms also improve on prior work in practice."
]
} |
1609.09864 | 2529685980 | Structured sparse optimization is an important and challenging problem for analyzing high-dimensional data in a variety of applications such as bioinformatics, medical imaging, social networks, and astronomy. Although a number of structured sparsity models have been explored, such as trees, groups, clusters, and paths, connected subgraphs have been rarely explored in the current literature. One of the main technical challenges is that there is no structured sparsity-inducing norm that can directly model the space of connected subgraphs, and there is no exact implementation of a projection oracle for connected subgraphs due to its NP-hardness. In this paper, we explore efficient approximate projection oracles for connected subgraphs, and propose two new efficient algorithms, namely, Graph-IHT and Graph-GHTP, to optimize a generic nonlinear objective function subject to connectivity constraint on the support of the variables. Our proposed algorithms enjoy strong guarantees analogous to several current methods for sparsity-constrained optimization, such as Projected Gradient Descent (PGD), Approximate Model Iterative Hard Thresholding (AM-IHT), and Gradient Hard Thresholding Pursuit (GHTP) with respect to convergence rate and approximation accuracy. We apply our proposed algorithms to optimize several well-known graph scan statistics in several applications of connected subgraph detection as a case study, and the experimental results demonstrate that our proposed algorithms outperform state-of-the-art methods. | Existing methods fall into three major categories: 1) Exact algorithms. The most recent method is a branch-and-bound algorithm, DepthFirstGraphScan @cite_9 , that runs in exponential time in the worst case; 2) Heuristic algorithms. 
The most recent methods in this category include EventTree @cite_27 , NPHGS @cite_37 , AdditiveScan @cite_29 , GraphLaplacian @cite_44 , and EdgeLasso @cite_42 ; 3) Approximation algorithms that provide performance bounds. The most recent method is presented by . The authors reformulate the connectivity constraint as linear matrix inequalities (LMI) and present a semi-definite programming algorithm based on convex relaxation of the LMI [18, 19] with a performance bound. However, this method is not scalable to large graphs ( @math 1000 nodes). Most of the above methods are considered as baseline methods in our experiments and are briefly summarized in . | {
"cite_N": [
"@cite_37",
"@cite_9",
"@cite_29",
"@cite_42",
"@cite_44",
"@cite_27"
],
"mid": [
"2038943544",
"1970768703",
"2069455711",
"2104346315",
"2169357279",
"1983481354"
],
"abstract": [
"Event detection in social media is an important but challenging problem. Most existing approaches are based on burst detection, topic modeling, or clustering techniques, which cannot naturally model the implicit heterogeneous network structure in social media. As a result, only limited information, such as terms and geographic locations, can be used. This paper presents Non-Parametric Heterogeneous Graph Scan (NPHGS), a new approach that considers the entire heterogeneous network for event detection: we first model the network as a \"sensor\" network, in which each node senses its \"neighborhood environment\" and reports an empirical p-value measuring its current level of anomalousness for each time interval (e.g., hour or day). Then, we efficiently maximize a nonparametric scan statistic over connected subgraphs to identify the most anomalous network clusters. Finally, the event represented by each cluster is summarized with information such as type of event, geographical locations, time, and participants. As a case study, we consider two applications using Twitter data, civil unrest event detection and rare disease outbreak detection, and present empirical evaluations illustrating the effectiveness and efficiency of our proposed approach.",
"We present GraphScan, a novel method for detecting arbitrarily shaped connected clusters in graph or network data. Given a graph structure, data observed at each node, and a score function defining the anomalousness of a set of nodes, GraphScan can efficiently and exactly identify the most anomalous (highest-scoring) connected subgraph. Kulldorff’s spatial scan, which searches over circles consisting of a center location and its k − 1 nearest neighbors, has been extended to include connectivity constraints by FlexScan. However, FlexScan performs an exhaustive search over connected subsets and is computationally infeasible for k > 30. Alternatively, the upper level set (ULS) scan scales well to large graphs but is not guaranteed to find the highest-scoring subset. We demonstrate that GraphScan is able to scale to graphs an order of magnitude larger than FlexScan, while guaranteeing that the highest-scoring subgraph will be identified. We evaluate GraphScan, Kulldorff’s spatial scan (searching over circles)...",
"We explore scalable and accurate dynamic pattern detection methods in graph-based data sets. We apply our proposed Dynamic Subset Scan method to the task of detecting, tracking, and source-tracing contaminant plumes spreading through a water distribution system equipped with noisy, binary sensors. While static patterns affect the same subset of data over a period of time, dynamic patterns may affect different subsets of the data at each time step. These dynamic patterns require a new approach to define and optimize penalized likelihood ratio statistics in the subset scan framework, as well as new computational techniques that scale to large, real-world networks. To address the first concern, we develop new subset scan methods that allow the detected subset of nodes to change over time, while incorporating temporal consistency constraints to reward patterns that do not dramatically change between adjacent time steps. Second, our Additive Graph Scan algorithm allows our novel scan statistic to process small graphs (500 nodes) in 4.1 seconds on average while maintaining an approximation ratio over 99% compared to an exact optimization method, and to scale to large graphs with over 12,000 nodes in 30 minutes on average. Evaluation results across multiple detection, tracking, and source-tracing tasks demonstrate substantial performance gains achieved by the Dynamic Subset Scan approach.",
"The fused lasso was proposed recently to enable recovery of high-dimensional patterns which are piece-wise constant on a graph, by penalizing the ℓ1-norm of differences of measurements at vertices that share an edge. While there have been some attempts at coming up with efficient algorithms for solving the fused lasso optimization, a theoretical analysis of its performance is mostly lacking except for the simple linear graph topology. In this paper, we investigate sparsistency of fused lasso for general graph structures, i.e. its ability to correctly recover the exact support of piece-wise constant graph-structured patterns asymptotically (for large-scale graphs). To emphasize this distinction over previous work, we will refer to it as Edge Lasso. We focus on the (structured) normal means setting, and our results provide necessary and sufficient conditions on the graph properties as well as the signal-to-noise ratio needed to ensure sparsistency. We exemplify our results using simple graph-structured patterns, and demonstrate that in some cases fused lasso is sparsistent at very weak signal-to-noise ratios (scaling as √((log n)/|A|), where n is the number of vertices in the graph and A is the smallest set of vertices with constant activation). In other cases, it performs no better than thresholding the difference",
"We consider the change-point detection problem of deciding, based on noisy measurements, whether an unknown signal over a given graph is constant or is instead piecewise constant over two connected induced subgraphs of relatively low cut size. We analyze the corresponding generalized likelihood ratio (GLR) statistics and relate it to the problem of finding a sparsest cut in a graph. We develop a tractable relaxation of the GLR statistic based on the combinatorial Laplacian of the graph, which we call the spectral scan statistic, and analyze its properties. We show how its performance as a testing procedure depends directly on the spectrum of the graph, and use this result to explicitly derive its asymptotic properties on few significant graph topologies. Finally, we demonstrate both theoretically and by simulations that the spectral scan statistic can outperform naive testing procedures based on edge thresholding and @math testing.",
"With the fast growth of smart devices and social networks, a lot of computing systems collect data that record different types of activities. An important computational challenge is to analyze these data, extract patterns, and understand activity trends. We consider the problem of mining activity networks to identify interesting events, such as a big concert or a demonstration in a city, or a trending keyword in a user community in a social network. We define an event to be a subset of nodes in the network that are close to each other and have high activity levels. We formalize the problem of event detection using two graph-theoretic formulations. The first one captures the compactness of an event using the sum of distances among all pairs of the event nodes. We show that this formulation can be mapped to the maxcut problem, and thus, it can be solved by applying standard semidefinite programming techniques. The second formulation captures compactness using a minimum-distance tree. This formulation leads to the prize-collecting Steiner-tree problem, which we solve by adapting existing approximation algorithms. For the two problems we introduce, we also propose efficient and effective greedy approaches and we prove performance guarantees for one of them. We experiment with the proposed algorithms on real datasets from a public bicycling system and a geolocation-enabled social network dataset collected from twitter. The results show that our methods are able to detect meaningful events."
]
} |
1609.09642 | 2952474439 | This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints). There is large amount of prior work on each of these tasks separately, yet, to the best of our knowledge, this is the first time in literature that the interplay between pose estimation and semantic part segmentation is investigated. To address this limitation of prior work, in this paper, we propose a CNN cascade of tasks that firstly performs landmark localisation and then uses this information as input for guiding semantic part segmentation. We applied our architecture to the problem of facial part segmentation and report large performance improvement over the standard unguided network on the most challenging face datasets. Testing code and models will be published online at this http URL. | State-of-the-art techniques in face alignment are based on the so-called cascaded regression @cite_24 . Given a facial image, such methods estimate the landmarks' location by applying a sequence of regressors usually learnt from SIFT @cite_7 or other hand-crafted features. The regressors are learnt in a cascaded manner such that the input to regressor @math is the estimate of the landmarks' location provided by regressor @math , see also @cite_28 @cite_29 @cite_17 @cite_15 @cite_9 . The first component in the proposed CNN cascade is a CNN landmark detector based on VGG-16 @cite_8 converted to a fully convolutional network @cite_20 . Although the main contribution of our paper is not to propose a method for landmark localisation, our CNN landmark localisation method performs comparably with all aforementioned methods. One advantage of our method over cascaded regression approaches is that it is not sensitive to initialisation and hence it does not rely on accurate face detection. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_24",
"@cite_15",
"@cite_20",
"@cite_17"
],
"mid": [
"2151103935",
"1686810756",
"2506426544",
"2138406903",
"1915668717",
"",
"",
"1903029394",
"2157285372"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"This paper introduces a novel real-time algorithm for facial landmark tracking. Compared to detection, tracking has both additional challenges and opportunities. Arguably the most important aspect in this domain is updating a tracker’s models as tracking progresses, also known as incremental (face) tracking. While this should result in more accurate localisation, how to do this online and in real time without causing a tracker to drift is still an important open research question. We address this question in the cascaded regression framework, the state-of-the-art approach for facial landmark localisation. Because incremental learning for cascaded regression is costly, we propose a much more efficient yet equally accurate alternative using continuous regression. More specifically, we first propose cascaded continuous regression (CCR) and show its accuracy is equivalent to the Supervised Descent Method. We then derive the incremental learning updates for CCR (iCCR) and show that it is an order of magnitude faster than standard incremental learning for cascaded regression, bringing the time required for the update from seconds down to a fraction of a second, thus enabling real-time tracking. Finally, we evaluate iCCR and show the importance of incremental learning in achieving state-of-the-art performance. Code for our iCCR is available from http: www.cs.nott.ac.uk psxes1.",
"We present a very efficient, highly accurate, “Explicit Shape Regression” approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during the test, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape-indexed features and a correlation-based feature selection method. This combination enables us to learn accurate models from large training data in a short time (20 minutes for 2,000 training images), and run regression extremely fast in test (15 ms for a 87 landmarks shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.",
"Cascaded regression approaches have been recently shown to achieve state-of-the-art performance for many computer vision tasks. Beyond its connection to boosting, cascaded regression has been interpreted as a learning-based approach to iterative optimization methods like the Newton's method. However, in prior work, the connection to optimization theory is limited only in learning a mapping from image features to problem parameters. In this paper, we consider the problem of facial deformable model fitting using cascaded regression and make the following contributions: (a) We propose regression to learn a sequence of averaged Jacobian and Hessian matrices from data, and from them descent directions in a fashion inspired by Gauss-Newton optimization. (b) We show that the optimization problem in hand has structure and devise a learning strategy for a cascaded regression approach that takes the problem structure into account. By doing so, the proposed method learns and employs a sequence of averaged Jacobians and descent directions in a subspace orthogonal to the facial appearance variation; hence, we call it Project-Out Cascaded Regression (PO-CR). (c) Based on the principles of PO-CR, we built a face alignment system that produces remarkably accurate results on the challenging iBUG data set outperforming previously proposed systems by a large margin. Code for our system is available from http: www.cs.nott.ac.uk ∼yzt .",
"",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs. cmu.edu intraface."
]
} |
1609.09642 | 2952474439 | This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints). There is large amount of prior work on each of these tasks separately, yet, to the best of our knowledge, this is the first time in literature that the interplay between pose estimation and semantic part segmentation is investigated. To address this limitation of prior work, in this paper, we propose a CNN cascade of tasks that firstly performs landmark localisation and then uses this information as input for guiding semantic part segmentation. We applied our architecture to the problem of facial part segmentation and report large performance improvement over the standard unguided network on the most challenging face datasets. Testing code and models will be published online at this http URL. | There have been also a few works that extend semantic segmentation to part segmentation with perhaps the most well-known being the Shape Boltzman Machine @cite_6 @cite_11 . This work has been recently extended to incorporate CNN refined by CRF features (as in @cite_0 ) in @cite_21 . Note that this work aims to refine the CNN output by applying a Restricted Boltzmann Machine on top of it and does not make use of pose information as provided by landmarks. In contrast, we propose an enhanced CNN architecture which is landmark-guided, can be trained end-to-end and yields large performance improvement without the need of further refinement. | {
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_6",
"@cite_11"
],
"mid": [
"1923697677",
"2226771013",
"2138813262",
"2075505763"
],
"abstract": [
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"In this work we address the task of segmenting an object into its parts, or semantic part segmentation. We start by adapting a state-of-the-art semantic segmentation system to this task, and show that a combination of a fully-convolutional Deep CNN system coupled with Dense CRF labelling provides excellent results for a broad range of object categories. Still, this approach remains agnostic to high-level constraints between object parts. We introduce such prior information by means of the Restricted Boltzmann Machine, adapted to our task and train our model in an discriminative fashion, as a hidden CRF, demonstrating that prior information can yield additional improvements. We also investigate the performance of our approach in the wild'', without information concerning the objects' bounding boxes, using an object detector to guide a multi-scale segmentation scheme. We evaluate the performance of our approach on the Penn-Fudan and LFW datasets for the tasks of pedestrian parsing and face labelling respectively. We show superior performance with respect to competitive methods that have been extensively engineered on these benchmarks, as well as realistic qualitative results on part segmentation, even for occluded or deformable objects. We also provide quantitative and extensive qualitative results on three classes from the PASCAL Parts dataset. Finally, we show that our multi-scale segmentation scheme can boost accuracy, recovering segmentations for finer parts.",
"The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a state-of-the-art model of foreground background object shape. We extend the SBM to account for the foreground object's parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art.",
"A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task."
]
} |
1609.09642 | 2952474439 | This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints). There is large amount of prior work on each of these tasks separately, yet, to the best of our knowledge, this is the first time in literature that the interplay between pose estimation and semantic part segmentation is investigated. To address this limitation of prior work, in this paper, we propose a CNN cascade of tasks that firstly performs landmark localisation and then uses this information as input for guiding semantic part segmentation. We applied our architecture to the problem of facial part segmentation and report large performance improvement over the standard unguided network on the most challenging face datasets. Testing code and models will be published online at this http URL. | One of the first face segmentation methods prior to deep learning is known as LabelFaces @cite_23 which is based on patch classification and further refinement via a hierarchical face model. Another hierarchical approach to face segmentation based on Restricted Boltzmann Machines was proposed in @cite_3 . More recently, a multi-objective CNN has been shown to perform well for the task of face segmentation in @cite_27 . The method is based on a CRF the unary and pairwise potentials of which are learnt via a CNN. Softmax loss is used for the segmentation masks, and a logistic loss is used to learn the edges. Additionally, the network makes use of a non-parametric segmentation prior which is obtained as follows: first facial landmarks on the test image are detected and then all training images with most similar shapes are used to calculate an average segmentation mask. This mask is finally used to augment RGB. This segmentation mask might be blurry, does not encode pose information and results in little performance improvement. | {
"cite_N": [
"@cite_27",
"@cite_3",
"@cite_23"
],
"mid": [
"1905033729",
"1980163762",
"2151971489"
],
"abstract": [
"This paper formulates face labeling as a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments on both the LFW and Helen datasets demonstrate state-of-the-art results of the proposed algorithm, and accurate labeling results on challenging images can be obtained by the proposed algorithm for real-world applications.",
"This paper investigates how to parse (segment) facial components from face images which may be partially occluded. We propose a novel face parser, which recasts segmentation of face components as a cross-modality data transformation problem, i.e., transforming an image patch to a label map. Specifically, a face is represented hierarchically by parts, components, and pixel-wise labels. With this representation, our approach first detects faces at both the part- and component-levels, and then computes the pixel-wise label maps (Fig.1). Our part-based and component-based detectors are generatively trained with the deep belief network (DBN), and are discriminatively tuned by logistic regression. The segmentators transform the detected face components to label maps, which are obtained by learning a highly nonlinear mapping with the deep autoencoder. The proposed hierarchical face parsing is not only robust to partial occlusions but also provide richer information for face analysis and face synthesis compared with face keypoint detection and face alignment. The effectiveness of our algorithm is shown through several tasks on 2, 239 images selected from three datasets (e.g., LFW [12], BioID [13] and CUFSF [29]).",
"We consider the problem of parsing facial features from an image labeling perspective. We learn a per-pixel unary classifier, and a prior over expected label configurations, allowing us to estimate a dense labeling of facial images by part (e.g. hair, mouth, moustache, hat). This approach deals naturally with large variations in shape and appearance characteristic of unconstrained facial images, and also the problem of detecting classes that may be present or absent. We use an Adaboost-based unary classifier, and develop a family of priors based on ‘epitomes’ which are shown to be particularly effective in capturing the non-stationary aspects of face label distributions."
]
} |
1609.09382 | 2526154493 | This work focuses on the rapid development of linguistic annotation tools for resource-poor languages. We experiment several cross-lingual annotation projection methods using Recurrent Neural Networks (RNN) models. The distinctive feature of our approach is that our multilingual word representation requires only a parallel corpus between the source and target language. More precisely, our method has the following characteristics: (a) it does not use word alignment information, (b) it does not assume any knowledge about foreign languages, which makes it applicable to a wide range of resource-poor languages, (c) it provides truly multilingual taggers. We investigate both uni- and bi-directional RNN models and propose a method to include external information (for instance low level information from POS) in the RNN to train higher level taggers (for instance, super sense taggers). We demonstrate the validity and genericity of our model by using parallel corpora (obtained by manual or automatic translation). Our experiments are conducted to induce cross-lingual POS and super sense taggers. | Cross-lingual projection of linguistic annotations was pioneered by who created new monolingual resources by transferring annotations from resource-rich languages onto resource-poor languages through the use of word alignments. The resulting (noisy) annotations are used in conjunction with robust learning algorithms to build cheap unsupervised NLP tools @cite_40 . This approach has been successfully used to transfer several linguistic annotations between languages (efficient learning of POS taggers @cite_1 @cite_4 and accurate projection of word senses @cite_14 ). Cross-lingual projection requires a parallel corpus and word alignment between source and target languages. Many automatic word alignment tools are available, such as GIZA++ which implements IBM models @cite_27 . 
However, the noisy (imperfect) outputs of these methods are a serious limitation for annotation projection based on word alignments @cite_34. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_27",
"@cite_40",
"@cite_34"
],
"mid": [
"1989850950",
"2251113547",
"2142523187",
"1973923101",
"2115057736",
"2086202918"
],
"abstract": [
"In this paper we illustrate and evaluate an approach to the creation of high quality linguistically annotated resources based on the exploitation of aligned parallel corpora. This approach is based on the assumption that if a text in one language has been annotated and its translation has not, annotations can be transferred from the source text to the target using word alignment as a bridge. The transfer approach has been tested in the creation of the MultiSemCor corpus, an English Italian parallel corpus created on the basis of the English SemCor corpus. In MultiSemCor texts are aligned at the word level and semantically annotated with a shared inventory of senses. We present some experiments carried out to evaluate the different steps involved in the methodology. The results of the evaluation suggest that the cross-language annotation transfer methodology is a promising solution allowing for the exploitation of existing (mostly English) annotated resources to bootstrap the creation of annotated corpora in new (resource-poor) languages with greatly reduced human effort.",
"We present an unsupervised approach to part-of-speech tagging based on projections of tags in a word-aligned bilingual parallel corpus. In contrast to the existing state-of-the-art approach of Das and Petrov, we have developed a substantially simpler method by automatically identifying “good” training sentences from the parallel corpus and applying self-training. In experimental results on eight languages, our method achieves state-of-the-art results.",
"We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-, 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4 over a state-of-the-art baseline, and 16.7 over vanilla hidden Markov models induced with the Expectation Maximization algorithm.",
"In this paper, we present and compare various single-word based alignment models for statistical machine translation. We discuss the five IBM alignment models, the Hidden-Markov alignment model, smoothing techniques and various modifications. We present different methods to combine alignments. As evaluation criterion we use the quality of the resulting Viterbi alignment compared to a manually produced reference alignment. We show that models with a first-order dependence and a fertility model lead to significantly better results than the simple models IBM-1 or IBM-2, which are not able to go beyond zero-order dependencies.",
"This article considers the task of automatically inducing role-semantic annotations in the FrameNet paradigm for new languages. We propose a general framework that is based on annotation projection, phrased as a graph optimization problem. It is relatively inexpensive and has the potential to reduce the human effort involved in creating role-semantic resources. Within this framework, we present projection models that exploit lexical and syntactic information. We provide an experimental evaluation on an English-German parallel corpus which demonstrates the feasibility of inducing high-precision German semantic role annotation both for manually and automatically annotated English data.",
"Automatic word alignment plays a critical role in statistical machine translation. Unfortunately, the relationship between alignment quality and statistical machine translation performance has not been well understood. In the recent literature, the alignment task has frequently been decoupled from the translation task and assumptions have been made about measuring alignment quality for machine translation which, it turns out, are not justified. In particular, none of the tens of papers published over the last five years has shown that significant decreases in alignment error rate (AER) result in significant increases in translation performance. This paper explains this state of affairs and presents steps towards measuring alignment quality in a way which is predictive of statistical machine translation performance."
]
} |
1609.09382 | 2526154493 | This work focuses on the rapid development of linguistic annotation tools for resource-poor languages. We experiment several cross-lingual annotation projection methods using Recurrent Neural Networks (RNN) models. The distinctive feature of our approach is that our multilingual word representation requires only a parallel corpus between the source and target language. More precisely, our method has the following characteristics: (a) it does not use word alignment information, (b) it does not assume any knowledge about foreign languages, which makes it applicable to a wide range of resource-poor languages, (c) it provides truly multilingual taggers. We investigate both uni- and bi-directional RNN models and propose a method to include external information (for instance low level information from POS) in the RNN to train higher level taggers (for instance, super sense taggers). We demonstrate the validity and genericity of our model by using parallel corpora (obtained by manual or automatic translation). Our experiments are conducted to induce cross-lingual POS and super sense taggers. | To deal with this limitation, recent studies based on cross-lingual representation learning methods have been proposed to avoid using such pre-processed and noisy alignments for label projection. First, these approaches learn language-independent features, across many different languages @cite_19 @cite_12 @cite_37 @cite_39 @cite_41 @cite_31 . Then, the induced representation space is used to train NLP tools by exploiting labeled data from the source language and apply them in the target language. 
Cross-lingual representation learning approaches have achieved good results in different NLP applications such as cross-language SST and POS tagging @cite_41 , cross-language named entity recognition @cite_3 , cross-lingual document classification and lexical translation @cite_31 , cross-language dependency parsing @cite_19 @cite_37 and cross-language semantic role labeling @cite_17 . | {
"cite_N": [
"@cite_37",
"@cite_41",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_31",
"@cite_12",
"@cite_17"
],
"mid": [
"2156064360",
"",
"1818534184",
"2251765408",
"2165256480",
"1828724394",
"1523296404",
"2171860752"
],
"abstract": [
"We study multi-source transfer parsing for resource-poor target languages; specifically methods for target language adaptation of delexicalized discriminative graph-based dependency parsers. We first show how recent insights on selective parameter sharing, based on typological and language-family features, can be applied to a discriminative parser by carefully decomposing its model features. We then show how the parser can be relexicalized and adapted using unlabeled target language data and a learning method that can incorporate diverse knowledge sources through ambiguous labelings. In the latter scenario, we exploit two sources of knowledge: arc marginals derived from the base parser in a self-training algorithm, and arc predictions from multiple transfer parsers in an ensemble-training algorithm. Our final model outperforms the state of the art in multi-source transfer parsing on 15 out of 16 evaluated languages.",
"",
"It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13 . When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26 .",
"Recent work in learning bilingual representations tend to tailor towards achieving good performance on bilingual tasks, most often the crosslingual document classification (CLDC) evaluation, but to the detriment of preserving clustering structures of word representations monolingually. In this work, we propose a joint model to learn word representations from scratch that utilizes both the context coocurrence information through the monolingual component and the meaning equivalent signals from the bilingual constraint. Specifically, we extend the recently popular skipgram model to learn high quality bilingual representations efficiently. Our learned embeddings achieve a new state-of-the-art accuracy of 80.3 for the German to English CLDC task and a highly competitive performance of 90.7 for the other classification direction. At the same time, our models outperform best embeddings from past bilingual representation work by a large margin in the monolingual word similarity evaluation. 1",
"We consider the problem of using a bilingual dictionary to transfer lexico-syntactic information from a resource-rich source language to a resource-poor target language. In contrast to past work that used bitexts to transfer analyses of specific sentences at the token level, we instead use features to transfer the behavior of words at a type level. In a discriminative dependency parsing framework, our approach produces gains across a range of target languages, using two different low-resource training methodologies (one weakly supervised and one indirectly supervised) and two different dictionary sources (one manually constructed and one automatically constructed).",
"We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.",
"Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part of speech tagger for a subset of these languages. We find their performance to be competitive with near state-of-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.",
"We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations. Specifically, we consider unsupervised induction of semantic roles from sentences annotated with automatically-predicted syntactic dependency representations and use a state-of-the-art generative Bayesian non-parametric model. At inference time, instead of only seeking the model which explains the monolingual data available for each language, we regularize the objective by introducing a soft constraint penalizing for disagreement in argument labeling on aligned sentences. We propose a simple approximate learning algorithm for our set-up which results in efficient inference. When applied to German-English parallel data, our method obtains a substantial improvement over a model trained without using the agreement signal, when both are tested on non-parallel sentences."
]
} |
1609.09580 | 2527696629 | This paper discusses lexicon word learning in high-dimensional meaning spaces from the viewpoint of referential uncertainty. We investigate various state-of-the-art Machine Learning algorithms and discuss the impact of scaling, representation and meaning space structure. We demonstrate that current Machine Learning techniques successfully deal with high-dimensional meaning spaces. In particular, we show that exponentially increasing dimensions linearly impact learner performance and that referential uncertainty from word sensitivity has no impact. | There is an important difference between the setup here and other studies. Many studies are concerned with enumerable objects in context and how this leads to referential uncertainty (see @cite_22 @cite_12 @cite_20 among others). In this paper, we use referential uncertainty closer to Quine's formulation and early studies by Siskind @cite_5 . Quine focusses on aspects of a situation and not on enumerable objects as the source of referential uncertainty. In that sense the problem in Quine is larger. Even if you know the referent, the learner still knows nothing about the aspect of the referent the word refers to (color, shape etc). The question remains which referential uncertainty problem is solved by children (possibly both). | {
"cite_N": [
"@cite_5",
"@cite_20",
"@cite_22",
"@cite_12"
],
"mid": [
"2160783091",
"",
"2127438782",
"1977666848"
],
"abstract": [
"This paper presents a computational study of part of the lexical-acquisition task faced by children, namely the acquisition of word-to-meaning mappings. It first approximates this task as a formal mathematical problem. It then presents an implemented algorithm for solving this problem, illustrating its operation on a small example. This algorithm offers one precise interpretation of the intuitive notions of cross-situational learning and the principle of contrast applied between words in an utterance. It robustly learns a homonymous lexicon despite noisy multi-word input, in the presence of referential uncertainty, with no prior knowledge that is specific to the language being learned. Computational simulations demonstrate the robustness of this algorithm and illustrate how algorithms based on cross-situational learning and the principle of contrast might be able to solve lexical-acquisition problems of the size faced by children, under weak, worst-case assumptions about the type and quantity of data available.",
"",
"For infants, early word learning is a chicken-and-egg problem. One way to learn a word is to observe that it co-occurs with a particular referent across different situations. Another way is to use the social context of an utterance to infer the intended referent of a word. Here we present a Bayesian model of cross-situational word learning, and an extension of this model that also learns which social cues are relevant to determining reference. We test our model on a small corpus of mother-infant interaction and find it performs better than competing models. Finally, we show that our model accounts for experimental phenomena including mutual exclusivity, fast-mapping, and generalization from social cues.",
"The problem of how young learners acquire the meaning of words is fundamental to language development and cognition. A host of computational models exist which demonstrate various mechanisms in which words and their meanings can be transferred between a teacher and learner. However these models often assume that the learner can easily distinguish between the referents of words, and do not show if the learning mechanisms still function when there is perceptual ambiguity about the referent of a word. This paper presents two models that acquire meaning-word mappings in a continuous semantic space. The first model is a cross-situational learning model in which the learner induces word-meaning mappings through statistical learning from repeated exposures. The second model is a social model, in which the learner and teacher engage in a dyadic learning interaction to transfer word-meaning mappings. We show how cross-situational learning, despite there being no information to the learner as to the exact referent of a word during learning, still can learn successfully. However, social learning outperforms cross-situational strategies both in speed of acquisition and performance. The results suggest that cross-situational learning is efficient for situations where referential ambiguity is limited, but in more complex situations social learning is the more optimal strategy."
]
} |
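The cross-situational strategy described in the abstracts above (accumulate word-referent co-occurrences across individually ambiguous exposures, then favor the mapping that stays consistent) can be sketched in a few lines. This is a toy illustration with made-up episodes, not the model of any cited paper:

```python
from collections import defaultdict

def cross_situational_learn(episodes):
    """Accumulate word-referent co-occurrence counts across ambiguous episodes.

    Each episode pairs an utterance (list of words) with the set of candidate
    referents present in the scene; the learner never observes the true mapping.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in episodes:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    # Map each word to the referent it co-occurred with most often.
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

episodes = [
    (["ball"], {"BALL", "DOG"}),   # referent of "ball" is ambiguous here...
    (["ball"], {"BALL", "CUP"}),   # ...but BALL is the only consistent one
    (["dog"],  {"DOG", "BALL"}),
    (["dog"],  {"DOG", "CUP"}),
]
lexicon = cross_situational_learn(episodes)
print(lexicon)  # {'ball': 'BALL', 'dog': 'DOG'}
```

Each single episode is ambiguous, yet the aggregated counts single out the referent that co-occurs with the word in every situation, which is the core statistical signal these models exploit.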
1609.09560 | 2749720323 | Distributed Denial of Service (DDoS) is a common type of Cybercrime. It can strongly damage a company's reputation and increase its costs. Attackers continuously improve their strategies. They doubled the amount of unleashed communication requests in volume, size, and frequency in the last few years. This occurs against different hosts, causing resource exhaustion. Previous studies focused on detecting or mitigating ongoing DDoS attacks. Yet, addressing DDoS attacks when they are already in place may be too late. In this article, we consider network resilience by early prediction of attack trends. We show empirically the advantage of using non-parametric leading indicators for early prediction of volumetric DDoS attacks. We report promising results over a real dataset from CAIDA. Our results raise new questions and opportunities for further research in early prediction of DDoS attack trends. | Several papers have been published on early warning systems in the last few years. In @cite_2 , Ramaki and Atani presented a survey of architectures and techniques for early warning of threats in Information Technology (IT). The authors classify early warning systems (EWSs) as either commercial or under research and development. They also point to a set of current challenges, such as data collection, data correlation, and post-event data correlation @cite_2 . The authors reinforced the need to design proactive, data-analytics-based solutions that predict threats and attacks before they occur in the system. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2543289462"
],
"abstract": [
"With the advent of new technologies and various services provided in the context of computer networks, a large volume of data is being generated. The main challenge in this area is providing network protection services against various threats and vulnerabilities. So far, many techniques have been proposed to deal with these threats. All of these techniques pursue the same goal, preventing attackers from reaching their objectives. A solution based on early warning systems (EWSs) is exactly what security teams need to manage the threats properly. EWS, as a complement to Intrusion Detection System, is a proactive approach against security threats. This is carried out through the early detection of potential behavior of a system, evaluating the scope of malicious behavior, and finally, using suitable response against any kind of detectable security event. This paper presents a comprehensive review on EWSs including definitions, applications, architectures, alert correlation aspects, and other technical requirements. Furthermore, previous studies and existing EWSs have been described and analyzed here. A classification of EWSs has been presented: commercial systems and systems under research and development. Finally, from the studies about EWSs, we conclude that some challenges and research issues still remain open. Copyright © 2016 John Wiley & Sons, Ltd."
]
} |
1609.09560 | 2749720323 | Distributed Denial of Service (DDoS) is a common type of Cybercrime. It can strongly damage a company's reputation and increase its costs. Attackers continuously improve their strategies. They doubled the amount of unleashed communication requests in volume, size, and frequency in the last few years. This occurs against different hosts, causing resource exhaustion. Previous studies focused on detecting or mitigating ongoing DDoS attacks. Yet, addressing DDoS attacks when they are already in place may be too late. In this article, we consider network resilience by early prediction of attack trends. We show empirically the advantage of using non-parametric leading indicators for early prediction of volumetric DDoS attacks. We report promising results over a real dataset from CAIDA. Our results raise new questions and opportunities for further research in early prediction of DDoS attack trends. | Several studies have specifically tackled the problem of developing early warning techniques for DDoS attacks @cite_8 @cite_7 @cite_4 . In @cite_7 , Xiao, Chen and He proposed a cooperative system to produce warning signals. The system is based on a Bloom filter technique. The authors' goal is to reduce storage and computational resource consumption. In @cite_8 , Tsai, Chang and Huang presented a multilayer system based on time delay neural networks. Their system is cooperative, with each device in the network monitoring its neighbors. At a certain point, the device sends the collected data to an expert module. The module then analyzes all collected data and attempts to match the received data with known DDoS patterns. In @cite_4 , Korczyński, Hamieh, Huh, Holm, Rajagopalan and Fefferman presented a cooperative, self-organized anomaly detection system inspired by colonies of honey bees. Their goal was to provide dynamic thresholds for detecting anomalous patterns in network traffic. Also, they intended to improve early intrusion detection in order to assist in the mitigation of attacks. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8"
],
"mid": [
"2438105941",
"2045319758",
"2015129944"
],
"abstract": [
"Social insect colonies have survived over evolutionary time in part due to the success of their collaborative methods: using local information and distributed decision making algorithms to detect and exploit critical resources in their environment. These methods have the unusual and useful ability to detect anomalies rapidly, with very little memory, and using only very local information. Our research investigates the potential for a self-organizing anomaly detection system inspired by those observed naturally in colonies of honey bees. We provide a summary of findings from a recently presented algorithm for a nonparametric, fully distributed coordination framework that translates the biological success of these methods into analogous operations for use in cyber defense and discuss the features that inspired this translation. We explore the impacts on detection performance of the defined range of distributed communication for each node and of involving only a small percentage of total nodes in the network in the distributed detection communication. We evaluate our algorithm using a software-based testing implementation, and demonstrate up to 20 percent improvement in detection capability over parallel isolated anomaly detectors.",
"Distributed Denial-of-Service (DDoS) attacks pose a serious threat to Internet security. Most current research focuses on detection and prevention methods on the victim server or source side. To date, there has been no work on defenses using valuable information from the innocent client whose IP has been used in attacking packets. In this paper, we propose a novel cooperative system for producing warning of a DDoS attack. The system consists of a client detector and a server detector. The client detector is placed on the innocent client side and uses a Bloom filter-based detection scheme to generate accurate detection results yet consumes minimal storage and computational resources. The server detector can actively assist the warning process by sending requests to innocent hosts. Simulation results show that the cooperative technique presented in this paper can yield accurate DDoS alarms at an early stage. We theoretically show the false alarm probability of the detection scheme, which is insensitive to false alarms when using specially designed evaluation functions.",
"Distributed denial of service (DDoS) is one of the most persecution network attack techniques to be confronted in recent years. From the definition of DDoS, thousands of network attacks must initiate simultaneously and continuously to achieve a successful DDoS attacking. Therefore, almost all of the information system cannot survive as they confront the DDoS attacks. Although there are a lot of intrusion detection system (IDS) developed, preventing DDoS attack is still difficult and perplexing. In this paper, an early warning system for detecting DDoS attacking has been mounted to a traditional IDS to form a completely system. This early warning system is developed based on the rationale of time delay neural network. In the networking topology, each node is monitored with the deployment of detectors to establish a multilayer architecture. In addition, the activities of each node will be monitored by their neighboring nodes to check whether it is still survival or not mutually. After then, all of the attacking information will be collected and transferred to the expert module for integrating analysis. As those nodes dispatched on the DMZ or between the first and second layer of firewall face some attacking similar as the pattern of DDoS, the kernel expert module which dispatched behind the second firewall will take some feasible actions and initiate the defense strategies to protect the kernel information system. In the meanwhile, those failed nodes will be restarted and act as the role of vanguard to assure the networking under normal operation."
]
} |
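The client-side detector described in the second abstract above stores observed traffic in a Bloom filter to keep memory constant. A minimal stand-alone Bloom filter illustrating that trade-off (no false negatives, small false-positive rate) is shown below; the parameters `m` and `k` are arbitrary choices for illustration, and the paper's actual evaluation functions are not reproduced here:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: constant-space set membership with possible
    false positives but no false negatives, the property the client-side
    detector relies on."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)   # m one-bit slots

    def _positions(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

seen = BloomFilter()
seen.add("10.0.0.1")
print("10.0.0.1" in seen)   # True (added items are always found)
print("10.0.0.2" in seen)   # almost certainly False (false positives are rare)
```

A detector can thus record millions of observed source addresses in a few kilobytes and test incoming packets against them, which is the storage/computation saving the cited system targets.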
1609.09247 | 2524091560 | Recently, there has been a surge of interest in obtaining partially annotated data for model supervision. However, a systematic study of how to train statistical models with partial annotation (PA) is still lacking. Taking dependency parsing as our case study, this paper describes and compares two straightforward approaches for three mainstream dependency parsers. The first approach was previously proposed to directly train a log-linear graph-based parser (LLGPar) with PA based on a forest-based objective. This work for the first time proposes the second approach, directly training a linear graph-based parser (LGPar) and a linear transition-based parser (LTPar) with PA based on the idea of constrained decoding. We conduct extensive experiments on Penn Treebank under three different settings for simulating PA, i.e., random dependencies, most uncertain dependencies, and dependencies with divergent outputs from the three parsers. The results show that LLGPar is most effective in learning from PA and LTPar lags behind the graph-based counterparts by a large margin. Moreover, LGPar and LTPar achieve their best performance by using LLGPar to complete PA into full annotation (FA). | Directly learning from PA based on a forest-based objective in LLGPar was first proposed by , inspired by the idea of . Similar ideas have recently been extensively explored in sequence labeling tasks @cite_11 @cite_4 @cite_28 .
"cite_N": [
"@cite_28",
"@cite_4",
"@cite_11"
],
"mid": [
"2252069213",
"2149710647",
""
],
"abstract": [
"Active learning (AL) consists of asking human annotators to annotate automatically selected data that are assumed to bring the most benefit in the creation of a classifier. AL allows to learn accurate systems with much less annotated data than what is required by pure supervised learning algorithms, hence limiting the tedious effort of annotating a large collection of data. We experimentally investigate the behavior of several AL strategies for sequence labeling tasks (in a partially-labeled scenario) tailored on Partially-Labeled Conditional Random Fields, on four sequence labeling tasks: phrase chunking, part-of-speech tagging, named-entity recognition, and bioentity recognition.",
"There is rich knowledge encoded in online web data. For example, punctuation and entity tags in Wikipedia data define some word boundaries in a sentence. In this paper we adopt partial-label learning with conditional random fields to make use of this valuable knowledge for semi-supervised Chinese word segmentation. The basic idea of partial-label learning is to optimize a cost function that marginalizes the probability mass in the constrained space that encodes this knowledge. By integrating some domain adaptation techniques, such as EasyAdapt, our result reaches an F-measure of 95.98 on the CTB-6 corpus, a significant improvement from both the supervised baseline and a previous proposed approach, namely constrained decode.",
""
]
} |
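The partial-label objectives discussed above share one computational core: summing model scores over every structure consistent with a partial annotation ("marginalizing the probability mass in the constrained space"). For a linear-chain model this is a constrained forward pass. The sketch below uses hypothetical potentials and toy dimensions; it is not the parameterization of any cited system:

```python
import math

def constrained_partition(emissions, transitions, allowed):
    """Forward (sum-product) pass restricted to label sequences consistent
    with a partial annotation.

    emissions[t][y]   -- log-potential for label y at position t
    transitions[a][b] -- log-potential for moving from label a to label b
    allowed[t]        -- set of labels permitted at position t (a singleton
                         where the annotation fixes the label, the full
                         label set where it does not)
    Returns the sum of exp(total score) over the constrained sequence set.
    """
    L = len(emissions[0])
    alpha = [math.exp(emissions[0][y]) if y in allowed[0] else 0.0
             for y in range(L)]
    for t in range(1, len(emissions)):
        alpha = [
            (sum(alpha[a] * math.exp(transitions[a][y]) for a in range(L))
             * math.exp(emissions[t][y])) if y in allowed[t] else 0.0
            for y in range(L)
        ]
    return sum(alpha)

# Toy 3-position, 2-label chain: positions 0 and 2 are annotated,
# position 1 is left unannotated, so we marginalize over it.
emissions = [[0.5, 0.1], [0.2, 0.9], [0.3, 0.3]]
transitions = [[0.0, -0.5], [-0.5, 0.0]]
partial = [{0}, {0, 1}, {1}]
full = [{0, 1}] * 3
p = constrained_partition(emissions, transitions, partial) / \
    constrained_partition(emissions, transitions, full)
print(f"probability of structures consistent with the PA: {p:.3f}")
```

Dividing the constrained sum by the unconstrained one gives the probability mass of the annotation-consistent space, which is exactly the quantity these forest-based training objectives maximize.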
1609.09247 | 2524091560 | Recently, there has been a surge of interest in obtaining partially annotated data for model supervision. However, a systematic study of how to train statistical models with partial annotation (PA) is still lacking. Taking dependency parsing as our case study, this paper describes and compares two straightforward approaches for three mainstream dependency parsers. The first approach was previously proposed to directly train a log-linear graph-based parser (LLGPar) with PA based on a forest-based objective. This work for the first time proposes the second approach, directly training a linear graph-based parser (LGPar) and a linear transition-based parser (LTPar) with PA based on the idea of constrained decoding. We conduct extensive experiments on Penn Treebank under three different settings for simulating PA, i.e., random dependencies, most uncertain dependencies, and dependencies with divergent outputs from the three parsers. The results show that LLGPar is most effective in learning from PA and LTPar lags behind the graph-based counterparts by a large margin. Moreover, LGPar and LTPar achieve their best performance by using LLGPar to complete PA into full annotation (FA). | pioneers the idea of exploring PA for constituent grammar induction based on a variant of the Inside-Outside re-estimation algorithm @cite_27 . propose to train a Combinatory Categorial Grammar parser using partially labeled data containing only predicate-argument dependencies. propose to impute missing dependencies based on Gibbs sampling in order to enable traditional parsers to learn from partial trees.
"cite_N": [
"@cite_27"
],
"mid": [
"2089951031"
],
"abstract": [
"The inside-outside algorithm for inferring the parameters of a stochastic context-free grammar is extended to take advantage of constituent information in a partially parsed corpus. Experiments on formal and natural language parsed corpora show that the new algorithm can achieve faster convergence and better modelling of hierarchical structure than the original one. In particular, over 90% of the constituents in the most likely analyses of a test set are compatible with test set constituents for a grammar trained on a corpus of 700 hand-parsed part-of-speech strings for ATIS sentences."
]
} |
1609.09481 | 2527354364 | We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails. To enable such analyses, we introduce two new conditions: (i) the envelope function @math , where @math is the loss function and @math is the hypothesis class, exists and is @math -integrable, and (ii) @math satisfies the multi-scale Bernstein's condition on @math . Under these assumptions, we prove that learning rates faster than @math can be obtained and, depending on @math and the multi-scale Bernstein's powers, can be arbitrarily close to @math . We then verify these assumptions and derive fast learning rates for the problem of vector quantization by @math -means clustering with heavy-tailed distributions. The analyses enable us to obtain novel learning rates that extend and complement existing results in the literature from both theoretical and practical viewpoints. | Theorem @math dictates that under our settings, the problem of learning with heavy-tailed losses can obtain convergence rates up to order where @math is the multi-scale Bernstein's order and @math is the degree of integrability of the loss. We recall that a convergence rate of @math is obtained in @cite_10 under the same setting but for bounded losses. (The analysis there was done under the @math -weakly stochastic mixability condition, which is equivalent to the standard @math -Bernstein's condition for bounded losses.) We note that if the loss is bounded, @math and @math reduces to the convergence rate obtained in @cite_10 .
"cite_N": [
"@cite_10"
],
"mid": [
"2132046078"
],
"abstract": [
"Empirical risk minimization (ERM) is a fundamental learning rule for statistical learning problems where the data is generated according to some unknown distribution P and returns a hypothesis f chosen from a fixed class ℱ with small loss l. In the parametric setting, depending upon (l, ℱ, P) ERM can have slow (1/√n) or fast (1/n) rates of convergence of the excess risk as a function of the sample size n. There exist several results that give sufficient conditions for fast rates in terms of joint properties of l, ℱ, and P, such as the margin condition and the Bernstein condition. In the non-statistical prediction with expert advice setting, there is an analogous slow and fast rate phenomenon, and it is entirely characterized in terms of the mixability of the loss l (there being no role there for ℱ or P). The notion of stochastic mixability builds a bridge between these two models of learning, reducing to classical mixability in a special case. The present paper presents a direct proof of fast rates for ERM in terms of stochastic mixability of (l, ℱ, P), and in so doing provides new insight into the fast-rates phenomenon. The proof exploits an old result of Kemperman on the solution to the general moment problem. We also show a partial converse that suggests a characterization of fast rates for ERM in terms of stochastic mixability is possible."
]
} |
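For context, the standard (single-scale) Bernstein's condition referenced throughout this row is commonly stated as below. This is a textbook paraphrase with generic constants, not a formula taken from the paper itself:

```latex
% Bernstein's condition with exponent \beta \in (0, 1]:
% there exists a constant B > 0 such that, for every f in the class \mathcal{F},
\mathbb{E}\!\left[(\ell_f - \ell_{f^*})^2\right]
  \;\le\; B \left(\mathbb{E}\!\left[\ell_f - \ell_{f^*}\right]\right)^{\beta},
```

where \ell_f denotes the loss of hypothesis f and f^* is the risk minimizer over \mathcal{F}; \beta = 1 yields the fastest O(1/n)-type rates for bounded losses, matching the comparison with @cite_10 made in the related-work passage above.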
1609.09481 | 2527354364 | We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails. To enable such analyses, we introduce two new conditions: (i) the envelope function @math , where @math is the loss function and @math is the hypothesis class, exists and is @math -integrable, and (ii) @math satisfies the multi-scale Bernstein's condition on @math . Under these assumptions, we prove that learning rates faster than @math can be obtained and, depending on @math and the multi-scale Bernstein's powers, can be arbitrarily close to @math . We then verify these assumptions and derive fast learning rates for the problem of vector quantization by @math -means clustering with heavy-tailed distributions. The analyses enable us to obtain novel learning rates that extend and complement existing results in the literature from both theoretical and practical viewpoints. | Fast learning rates for unbounded losses were previously derived in @cite_0 for sub-Gaussian losses and in @cite_8 for hypothesis classes that have sub-exponential envelope functions. In @cite_0 , the Bernstein's condition is not directly imposed, but is replaced by condition (ii) of Lemma @math with @math on the whole hypothesis class, while the assumption of a sub-Gaussian hypothesis class validates condition (i). This implies the standard Bernstein's condition with @math and makes the convergence rate @math consistent with our result (note that for sub-Gaussian losses, @math can be chosen arbitrarily large). The analysis of @cite_8 concerns about (rather than the we investigate in this paper) and cannot be directly compared with our results.
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2522078129",
"2067462817"
],
"abstract": [
"We obtain sharp oracle inequalities for the empirical risk minimization procedure in the regression model under the assumption that the target Y and the model F are subgaussian. The bound we obtain is sharp in the minimax sense if F is convex. Moreover, under mild assumptions on F, the error rate of ERM remains optimal even if the procedure is allowed to perform with constant probability. A part of our analysis is a new proof of minimax results for the gaussian regression model.",
"We consider a framework of sample-based clustering. In this setting, the input to a clustering algorithm is a sample generated i.i.d by some unknown arbitrary distribution. Based on such a sample, the algorithm has to output a clustering of the full domain set, that is evaluated with respect to the underlying distribution. We provide general conditions on clustering problems that imply the existence of sampling based clustering algorithms that approximate the optimal clustering. We show that the K-median clustering, as well as K-means and the Vector Quantization problems, satisfy these conditions. Our results apply to the combinatorial optimization setting where, assuming that sampling uniformly over an input set can be done in constant time, we get a sampling-based algorithm for the K-median and K-means clustering problems that finds an almost optimal set of centers in time depending only on the confidence and accuracy parameters of the approximation, but independent of the input size. Furthermore, in the Euclidean input case, the dependence of the running time of our algorithm on the Euclidean dimension is only linear. Our main technical tool is a uniform convergence result for center based clustering that can be viewed as showing that the effective VC-dimension of k-center clustering equals k."
]
} |
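The vector-quantization objective analyzed in this row is the familiar k-means risk, and the empirical risk minimizer discussed in these abstracts is, in practice, approximated by Lloyd's algorithm. A minimal one-dimensional sketch (toy data and a fixed seed for reproducibility, not code from any cited work):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm for 1-D k-means vector quantization:
    alternate nearest-center assignment and center re-estimation."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean; keep the old center
        # if a cluster happens to be empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two well-separated groups; the learned codebook should sit near the group means.
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
print(kmeans(pts, 2))  # ≈ [0.1, 10.1]
```

The sample-based guarantees in the abstracts above concern how close the risk of centers fit on an i.i.d. sample comes to the optimal risk; this sketch only shows the optimization step those analyses take as given.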
1609.09475 | 2524923337 | Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC). A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multi-view RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd and 4th place in the stowing and picking tasks, respectively, at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at this http URL | Pose estimation. There are two primary approaches for estimating the 6D pose of an object. The first aligns 3D CAD models to 3D point clouds with algorithms such as iterative closest point @cite_1 . The second uses more elaborate local descriptors such as SIFT keypoints @cite_7 for color data or 3DMatch @cite_3 for 3D data. The former approach is mainly used with depth-only sensors, in scenarios where lighting changes significantly, or on textureless objects. Highly textured and rigid objects, on the other hand, benefit from local descriptors. Existing frameworks such as LINEMOD @cite_9 or MOPED @cite_20 work well under certain assumptions such as objects sitting on a table top with good illumination, but underperform when confronted with the limited visibility, shadows, and clutter imposed by the APC scenario @cite_19 . | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_20"
],
"mid": [
"",
"1969868017",
"2049981393",
"",
"2529170537",
"2058761328"
],
"abstract": [
"",
"We present a method for detecting 3D objects using multi-modalities. While it is generic, we demonstrate it on the combination of an image and a dense depth map which give complementary object information. It works in real-time, under heavy clutter, does not require a time consuming training stage, and can handle untextured objects. It is based on an efficient representation of templates that capture the different modalities, and we show in many experiments on commodity hardware that our approach significantly outperforms state-of-the-art methods on single modalities.",
"The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces. >",
"",
"This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge. Note to Practitioners —Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We address two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation. We achieve robust performance with Iterative Clustering Estimation (ICE), a novel algorithm that iteratively combines feature clustering with robust pose estimation. Feature clustering quickly partitions the scene and produces object hypotheses. The hypotheses are used to further refine the feature clusters, and the two steps iterate until convergence. ICE is easy to parallelize, and easily integrates single- and multi-camera object recognition and pose estimation. We also introduce a novel object hypothesis scoring function based on M-estimator theory, and a novel pose clustering algorithm that robustly handles recognition outliers. We achieve scalability and low latency with an improved feature matching algorithm for large databases, a GPU CPU hybrid architecture that exploits parallelism at all levels, and an optimized resource scheduler. We provide extensive experimental results demonstrating state-of-the-art performance in terms of recognition, scalability, and latency in real-world robotic applications."
]
} |
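The ICP abstract in the row above alternates closest-point matching with a least-squares rigid update. A minimal 2-D sketch of that loop (illustrative only: synthetic points, a brute-force nearest-neighbour search, and the SVD-based Kabsch step; the original method also handles curves and surfaces):

```python
import numpy as np

def icp(src, dst, iters=10):
    """Minimal point-to-point ICP aligning src (N,2) onto dst (M,2).

    Each iteration: (1) brute-force closest-point correspondences,
    (2) best rigid transform for those matches via the SVD (Kabsch) step.
    Returns accumulated rotation R and translation t with dst ~ src @ R.T + t.
    """
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]           # closest-point matching
        mu_m, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_m).T @ (matched - mu_d)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        Rs = Vt.T @ U.T
        if np.linalg.det(Rs) < 0:                  # guard against reflections
            Vt[-1] *= -1
            Rs = Vt.T @ U.T
        ts = mu_d - Rs @ mu_m
        R, t = Rs @ R, Rs @ t + ts                 # compose with running transform
    return R, t
```

As the abstract notes, convergence is only to the nearest local minimum, so in practice one seeds several initial rotations and translations and keeps the best registration.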
1609.09444 | 2524803249 | Understanding, predicting, and generating object motions and transformations is a core problem in artificial intelligence. Modeling sequences of evolving images may provide better representations and models of motion and may ultimately be used for forecasting, simulation, or video generation. Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in complex patterns and one needs to infer the underlying pattern sequence and generate the next image in the sequence. For this, we develop a novel Contextual Generative Adversarial Network based on Recurrent Neural Networks (Context-RNN-GANs), where both the generator and the discriminator modules are based on contextual history (modeled as RNNs) and the adversarial discriminator guides the generator to produce realistic images for the particular time step in the image sequence. We evaluate the Context-RNN-GAN model (and its variants) on a novel dataset of Diagrammatic Abstract Reasoning, where it performs competitively with 10th-grade human performance but there is still scope for interesting improvements as compared to college-grade human performance. We also evaluate our model on a standard video next-frame prediction task, achieving improved performance over comparable state-of-the-art. | @cite_31 introduced IQ Tests to measure the success of an individual at adapting to a specific situation under a specific condition. Visual problems in intelligence tests have been among the earliest and continuously researched problems in AI. It has been looked at in terms of propositional logic beginning with @cite_7 and more recently by @cite_30 . Recently, there has also been significant interest in building systems that compete with humans on a variety of tasks such as geometry-based problems @cite_5 , physics-based problems @cite_22 , repetition and symmetry detection @cite_22 , visual question answering @cite_40 , and verbal reasoning and analogy @cite_36 @cite_23 . | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_40",
"@cite_23",
"@cite_5",
"@cite_31"
],
"mid": [
"87318396",
"2130883015",
"1490981355",
"1614298861",
"2950761309",
"1931621935",
"2250564385",
"618225313"
],
"abstract": [
"Among the diverse processes at work in human cognition, the ability to establish analogies plays a crucial role and is often evaluated via IQ tests where an incomplete sequence has to be completed with a suitable item. This has motivated the AI community to develop various computational models of analogy-making. A Boolean logic view of analogical proportions (a basic form of analogical statements of the form \"a is to b as c is to d\") has been recently proposed and extended to another logical proportion, namely paralogical proportion (stating that \"what a and b have in common, c and d have it also\"). When used in combination, these two proportions provide an enhanced power to complete IQ tests. This Boolean modeling essentially relies on the assessment of the differences and similarities between the items involved, and in the case of analogy, satisfies the expected properties of an analogical proportion. An extension to multiple-valued features has also been defined, reinforcing their scope of applications. It is then possible to complete, in a deterministic manner, some incomplete proportions where the last item d is missing. In this paper, we show how this can be the basis of a simple inference paradigm that provides a rigorous way to solve representative analogy-based IQ tests by computing the missing items rather than by choosing in a list of options. The result of the analogical paralogical inference depends on the way the items are represented. The paper discusses how this approach can be used in analogy-making for both determining missing items in proportions and laying bare the relation linking the components of such proportions. The novelty of the approach is stressed w.r.t. other proposals existing in the literature.",
"Solving physics problems usually requires geometric reasoning; a computer problem solver must use a representation that is in some respects equivalent to the use of diagrams by human problem solvers. In this paper, we review the uses of diagrams and geometric reasoning in physics problem solving programs. Next, we consider the many roles that diagrams seem to play in human problem solving, the relative strengths and weaknesses of computers and human problem solvers for reasoning with diagram-like representations, and ways in which some of the uses of diagrams by humans might be implemented in computer programs.",
"13,072. Brandenberger, J. E. May 31. 1913, [Convention date]. Compound fabrics and sheet materials.-Films for photographic and like purposes consist of two previously prepared outer films of regenerated cellulose enclosing between them a perfectly uniform layer of plastic material, such as gelatine, resin, paraffin, rubber, or mixtures containing oil. In preparing the film, the cellulose films are unwound from rolls D, D over cylinders A, A placed close together and heated by steam &c., the distance between the cylinders being adjustable. The plastic material is fed into the gap E between the films, and the composite film passes over a series of cylinders B, which may be heated when heat is necessary to set the plastic material. One of the two outer films may be glued to a strip of paper, fabric, metal, or other thin sheet material, and powder, fibres, blades, spangles, and other solid substances may be mixed with the plastic material.",
"",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"An Intelligence Quotient (IQ) test is a set of standardized questions designed to evaluate human intelligence. Verbal comprehension questions appear very frequently in IQ tests, which measure a human's verbal ability including the understanding of words with multiple senses, of synonyms and antonyms, and of analogies among words. In this work, we explore whether such tests can be solved automatically by artificial intelligence technologies, especially the deep learning technologies that have recently been developed and successfully applied in a number of fields. However, we found that the task was quite challenging, and simply applying existing technologies (e.g., word embedding) could not achieve good performance, mainly due to the multiple senses of words and the complex relations among words. To tackle these challenges, we propose a novel framework consisting of three components. First, we build a classifier to recognize the specific type of a verbal question (e.g., analogy, classification, synonym, or antonym). Second, we obtain distributed representations of words and relations by leveraging a novel word embedding method that considers the multi-sense nature of words and the relational knowledge among words (or their senses) contained in dictionaries. Third, for each type of question, we propose a specific solver based on the obtained distributed word representations and relation representations. Experimental results have shown that the proposed framework can not only outperform existing methods for solving verbal comprehension questions but also exceed the average performance of the Amazon Mechanical Turk workers involved in the study. The results indicate that with appropriate uses of the deep learning technologies we might be one step closer to human intelligence.",
"This paper introduces GEOS, the first automated system to solve unaltered SAT geometry questions by combining text understanding and diagram interpretation. We model the problem of understanding geometry questions as submodular optimization, and identify a formal problem description likely to be compatible with both the question text and diagram. GEOS then feeds the description to a geometric solver that attempts to determine the correct answer. In our experiments, GEOS achieves a 49% score on official SAT questions, and a score of 61% on practice questions. Finally, we show that by integrating textual and visual information, GEOS boosts the accuracy of dependency and semantic parsing of the question text.",
""
]
} |
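The Boolean reading of analogical proportions described in the first abstract of the row above ("a is to b as c is to d" holds when a differs from b exactly as c differs from d) is small enough to execute directly. A sketch under that reading (the function names and the componentwise `complete` helper are mine, not from the paper):

```python
def analogy(a, b, c, d):
    """Boolean analogical proportion 'a : b :: c : d':
    the change from a to b matches the change from c to d."""
    return (a and not b) == (c and not d) and ((not a) and b) == ((not c) and d)

def solve(a, b, c):
    """Unique d completing a : b :: c : d, or None when the equation
    is unsolvable (e.g. a=0, b=1, c=1 admits no Boolean d)."""
    sols = [d for d in (False, True) if analogy(a, b, c, d)]
    return sols[0] if len(sols) == 1 else None

def complete(xs, ys, zs):
    """Apply the solver feature-by-feature to compute the missing item,
    mirroring the 'computing rather than choosing' strategy above."""
    return [solve(a, b, c) for a, b, c in zip(xs, ys, zs)]
```

Representing each test item as a vector of Boolean features and calling `complete` on three consecutive items yields the predicted fourth item deterministically, exactly as the abstract describes.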
1609.09444 | 2524803249 | Understanding, predicting, and generating object motions and transformations is a core problem in artificial intelligence. Modeling sequences of evolving images may provide better representations and models of motion and may ultimately be used for forecasting, simulation, or video generation. Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in complex patterns and one needs to infer the underlying pattern sequence and generate the next image in the sequence. For this, we develop a novel Contextual Generative Adversarial Network based on Recurrent Neural Networks (Context-RNN-GANs), where both the generator and the discriminator modules are based on contextual history (modeled as RNNs) and the adversarial discriminator guides the generator to produce realistic images for the particular time step in the image sequence. We evaluate the Context-RNN-GAN model (and its variants) on a novel dataset of Diagrammatic Abstract Reasoning, where it performs competitively with 10th-grade human performance but there is still scope for interesting improvements as compared to college-grade human performance. We also evaluate our model on a standard video next-frame prediction task, achieving improved performance over comparable state-of-the-art. | Our task is also closely related to the task of next frame prediction in videos @cite_6 which involves predicting the next frame based on previous frames. However, the change across consecutive real-world video frames is extremely small as compared to the evolving shapes and changing spatial dynamics in our diagrammatic reasoning task. Hence, this poses several challenges to us different from the task of video next-frame generation, in which modeling optical flows plays a major role in producing better-looking next frames. 
Other related work in the video prediction direction includes language modeling based approaches @cite_1 , convolution-based LSTMs @cite_37 , adversarial CNNs @cite_17 , context encoders @cite_24 , and data-conditioned GANs @cite_2 . | {
"cite_N": [
"@cite_37",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_17"
],
"mid": [
"2175030374",
"1568514080",
"2156590040",
"2342877626",
"2125389028",
"2248556341"
],
"abstract": [
"We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow.",
"We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.",
"In this paper we present a generative model and learning procedure for unsupervised video clustering into scenes. The work addresses two important problems: realistic modeling of the sources of variability in the video and fast transformation invariant frame clustering. We suggest a solution to the problem of computationally intensive learning in this model by combining the recursive model estimation, fast inference, and on-line learning. Thus, we achieve real time frame clustering performance. Novel aspects of this method include an algorithm for the clustering of Gaussian mixtures, and the fast computation of the KL divergence between two mixtures of Gaussians. The efficiency and the performance of clustering and KL approximation methods are demonstrated. We also present novel video browsing tool based on the visualization of the variables in the generative model.",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset"
]
} |
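Among the losses mentioned in this row, the image gradient difference loss from the adversarial-CNN abstract (@cite_17) is easy to state concretely: it compares the absolute finite differences of the predicted and target frames, penalizing the blur that a pure MSE loss tolerates. A rough numpy rendering (my reading of the loss as described; the paper combines it with multi-scale and adversarial terms):

```python
import numpy as np

def gdl(pred, target, alpha=1):
    """Gradient difference loss between two frames (2-D arrays):
    mismatch of absolute finite differences along each image axis."""
    loss = 0.0
    for axis in (0, 1):
        gp = np.abs(np.diff(pred, axis=axis))
        gt = np.abs(np.diff(target, axis=axis))
        loss += (np.abs(gp - gt) ** alpha).sum()
    return loss
```

A blurry (constant) prediction can match a checkerboard target's mean intensity yet miss every edge; GDL flags that failure where a comparison of per-pixel means would not.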
1609.09444 | 2524803249 | Understanding, predicting, and generating object motions and transformations is a core problem in artificial intelligence. Modeling sequences of evolving images may provide better representations and models of motion and may ultimately be used for forecasting, simulation, or video generation. Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in complex patterns and one needs to infer the underlying pattern sequence and generate the next image in the sequence. For this, we develop a novel Contextual Generative Adversarial Network based on Recurrent Neural Networks (Context-RNN-GANs), where both the generator and the discriminator modules are based on contextual history (modeled as RNNs) and the adversarial discriminator guides the generator to produce realistic images for the particular time step in the image sequence. We evaluate the Context-RNN-GAN model (and its variants) on a novel dataset of Diagrammatic Abstract Reasoning, where it performs competitively with 10th-grade human performance but there is still scope for interesting improvements as compared to college-grade human performance. We also evaluate our model on a standard video next-frame prediction task, achieving improved performance over comparable state-of-the-art. | Lastly, generative adversarial networks (GANs) @cite_16 have also been extended with (and spatio-temporal) recurrence, attention, and structure @cite_39 @cite_21 @cite_25 @cite_35 , whereas we specifically focus on recurrence constraints for frames of a video via RNNs. GANs have also been used for high resolution image generation @cite_32 , image manipulation @cite_34 , and text-to-image synthesis @cite_26 . | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_21",
"@cite_32",
"@cite_34",
"@cite_39",
"@cite_16",
"@cite_25"
],
"mid": [
"2520707650",
"2949999304",
"1850742715",
"2173520492",
"2951021768",
"2949933669",
"2099471712",
"2298992465"
],
"abstract": [
"We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to \"fall off\" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles.",
"(2015) showed that optimizing pixels to match features in a convolutional network with respect to reference image features is a way to render images of high visual quality. We show that unrolling this gradient-based optimization yields a recurrent computation that creates images by incrementally adding onto a visual \"canvas\". We propose a recurrent generative model inspired by this view, and show that it can be trained using adversarial training to generate very good image samples. We also propose a way to quantitatively compare adversarial networks by having the generators and discriminators of these networks compete against each other.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Current generative frameworks use end-to-end learning and generate images by sampling from a uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are a product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose the Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations."
]
} |
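The saddle point stated in the GAN abstract above (D equal to ½ everywhere once the generator matches the data distribution) follows from maximizing the value function pointwise, which can be checked numerically on a discretized support. A small sketch (discrete densities stand in for the expectations; this illustrates the objective, not the training procedure):

```python
import numpy as np

def value(d, p_data, p_g):
    """Discretized GAN value V(D, G) = E_data[log D] + E_g[log(1 - D)]."""
    return float(np.sum(p_data * np.log(d)) + np.sum(p_g * np.log(1.0 - d)))

def optimal_d(p_data, p_g):
    """For a fixed generator, V is maximized pointwise at
    D*(x) = p_data(x) / (p_data(x) + p_g(x))."""
    return p_data / (p_data + p_g)
```

When p_g equals p_data, `optimal_d` returns ½ everywhere and the value at the saddle point is -log 4, matching the analysis in the paper.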
1609.09444 | 2524803249 | Understanding, predicting, and generating object motions and transformations is a core problem in artificial intelligence. Modeling sequences of evolving images may provide better representations and models of motion and may ultimately be used for forecasting, simulation, or video generation. Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in complex patterns and one needs to infer the underlying pattern sequence and generate the next image in the sequence. For this, we develop a novel Contextual Generative Adversarial Network based on Recurrent Neural Networks (Context-RNN-GANs), where both the generator and the discriminator modules are based on contextual history (modeled as RNNs) and the adversarial discriminator guides the generator to produce realistic images for the particular time step in the image sequence. We evaluate the Context-RNN-GAN model (and its variants) on a novel dataset of Diagrammatic Abstract Reasoning, where it performs competitively with 10th-grade human performance but there is still scope for interesting improvements as compared to college-grade human performance. We also evaluate our model on a standard video next-frame prediction task, achieving improved performance over comparable state-of-the-art. | @cite_35 generates videos from a single image using GANs but the discriminator judges the entire video rather than individual frames conditioned on previous frames. @cite_37 use Convolutional-LSTMs, similar to our GRU-RNN (with Shallow-CNN features) baseline but our final Context-RNN-GAN model with an adversarial loss gives better results. @cite_17 predict multiscale videos using CNNs but only provide fixed number of previous frames as context to the discriminator which is not helpful for modeling short sequences. 
@cite_18 can generate long-term future frames in action dependent games but the transition of frames is mostly smooth with very similar consecutive frames, unlike our DAR task which involves discontinuous movements in an evolving diagrammatic pattern. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_18",
"@cite_17"
],
"mid": [
"2520707650",
"2175030374",
"2118688707",
"2248556341"
],
"abstract": [
"We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow.",
"Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.",
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset"
]
} |
1609.09552 | 2528130257 | Neural encoder-decoder models have shown great success in many sequence generation tasks. However, previous work has not investigated situations in which we would like to control the length of encoder-decoder outputs. This capability is crucial for applications such as text summarization, in which we have to generate concise summaries with a desired length. In this paper, we propose methods for controlling the output sequence length for neural encoder-decoder models: two decoding-based methods and two learning-based methods. Results show that our learning-based methods have the capability to control length without degrading summary quality in a summarization task. | Text summarization is one of the oldest fields of study in natural language processing, and many summarization methods have focused specifically on sentence compression or headline generation. Traditional approaches to this task focus on word deletion using rule-based @cite_32 @cite_38 or statistical @cite_39 @cite_5 @cite_15 @cite_4 @cite_29 methods. There are also several studies of abstractive sentence summarization using syntactic transduction @cite_24 @cite_9 or taking a phrase-based statistical machine translation approach @cite_18 @cite_41 @cite_17 . | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_41",
"@cite_29",
"@cite_9",
"@cite_32",
"@cite_39",
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2103164118",
"2028339364",
"2251656952",
"",
"2251654079",
"2163117351",
"2081265723",
"23242996",
"2083451366",
"2115937944",
"2081388731",
"2104944827"
],
"abstract": [
"This paper reports our results at DUC2004 and describes our approach, implemented in a system called Topiary. We will show that the combination of linguistically motivated sentence compression with statistically selected topic terms performs better than either alone, according to some automatic summary evaluation measures.",
"Extractive summarization techniques cannot generate document summaries shorter than a single sentence, something that is often required. An ideal summarization system would understand each document and generate an appropriate summary directly from the results of that understanding. A more practical approach to this problem results in the use of an approximation: viewing summarization as a problem analogous to statistical machine translation. The issue then becomes one of generating a target document in a more concise language from a source document in a more verbose language. This paper presents results on experiments using this approach, in which statistical models of the term selection and term ordering are jointly applied to produce summaries in a style learned from a training corpus.",
"A major challenge in supervised sentence compression is making use of rich feature representations because of very scarce parallel data. We address this problem and present a method to automatically build a compression corpus with hundreds of thousands of instances on which deletion-based algorithms can be trained. In our corpus, the syntactic trees of the compressions are subtrees of their uncompressed counterparts, and hence supervised systems which require a structural alignment between the input and output can be successfully trained. We also extend an existing unsupervised compression method with a learning module. The new system uses structured prediction to learn from lexical, syntactic and other features. An evaluation with human raters shows that the presented data harvesting method indeed produces a parallel corpus of high quality. Also, the supervised system trained on this corpus gets high scores both from human raters and in an automatic evaluation setting, significantly outperforming a strong baseline.",
"",
"We present an LSTM approach to deletion-based sentence compression where the task is to translate a sentence into a sequence of zeros and ones, corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters the LSTM-based model outperforms the baseline achieving 4.5 in readability and 3.8 in informativeness.",
"We present a substitution-only approach to sentence compression which \"tightens\" a sentence by reducing its character length. Replacing phrases with shorter paraphrases yields paraphrastic compressions as short as 60% of the original length. In support of this task, we introduce a novel technique for re-ranking paraphrases extracted from bilingual corpora. At high compression rates paraphrastic compressions outperform a state-of-the-art deletion model in an oracle experiment. For further compression, deleting from oracle paraphrastic compressions preserves more meaning than deletion alone. In either setting, paraphrastic compression shows promise for surpassing deletion-only methods.",
"This paper presents Hedge Trimmer, a HEaDline GEneration system that creates a headline for a newspaper story using linguistically-motivated heuristics to guide the choice of a potential headline. We present feasibility tests used to establish the validity of an approach that constructs a headline by selecting words in order from a story. In addition, we describe experimental results that demonstrate the effectiveness of our linguistically-motivated approach over a HMM-based model, using both human evaluation and automatic metrics for comparing the two approaches.",
"The task of selecting information and rendering it appropriately appears in multiple contexts in summarization. In this paper we present a model that simultaneously optimizes selection and rendering preferences. The model operates over a phrase-based representation of the source document which we obtain by merging PCFG parse trees and dependency graphs. Selection preferences for individual phrases are learned discriminatively, while a quasi-synchronous grammar (Smith and Eisner, 2006) captures rendering preferences such as paraphrases and compressions. Based on an integer linear programming formulation, the model learns to generate summaries that satisfy both types of preferences, while ensuring that length, topic coverage and grammar constraints are met. Experiments on headline and image caption generation show that our method obtains state-of-the-art performance using essentially the same model for both tasks without any major modifications.",
"In this paper we generalise the sentence compression task. Rather than simply shorten a sentence by deleting words or constituents, as in previous work, we rewrite it using additional operations such as substitution, reordering, and insertion. We present a new corpus that is suited to our task and a discriminative tree-to-tree transduction model that can naturally account for structural and lexical mismatches. The model incorporates a novel grammar extraction method, uses a language model for coherent output, and can be easily tuned to a wide range of compression specific loss functions.",
"We present a new method that compresses sentences by removing words. In a first stage, it generates candidate compressions by removing branches from the source sentence's dependency tree using a Maximum Entropy classifier. In a second stage, it chooses the best among the candidate compressions using a Support Vector Machine Regression model. Experimental results show that our method achieves state-of-the-art performance without requiring any manually written rules.",
"We present a novel unsupervised method for sentence compression which relies on a dependency tree representation and shortens sentences by removing subtrees. An automatic evaluation shows that our method obtains result comparable or superior to the state of the art. We demonstrate that the choice of the parser affects the performance of the system. We also apply the method to German and report the results of an evaluation with humans.",
"In this article we generalize the sentence compression task. Rather than simply shorten a sentence by deleting words or constituents, as in previous work, we rewrite it using additional operations such as substitution, reordering, and insertion. We present an experimental study showing that humans can naturally create abstractive sentences using a variety of rewrite operations, not just deletion. We next create a new corpus that is suited to the abstractive compression task and formulate a discriminative tree-to-tree transduction model that can account for structural and lexical mismatches. The model incorporates a grammar extraction method, uses a language model for coherent output, and can be easily tuned to a wide range of compression-specific loss functions."
]
} |
1609.09552 | 2528130257 | Neural encoder-decoder models have shown great success in many sequence generation tasks. However, previous work has not investigated situations in which we would like to control the length of encoder-decoder outputs. This capability is crucial for applications such as text summarization, in which we have to generate concise summaries with a desired length. In this paper, we propose methods for controlling the output sequence length for neural encoder-decoder models: two decoding-based methods and two learning-based methods. Results show that our learning-based methods have the capability to control length without degrading summary quality in a summarization task. | Recent work has adopted techniques such as encoder-decoder @cite_28 @cite_11 @cite_13 and attentional @cite_20 @cite_12 neural network models from the field of machine translation, and tailored them to the sentence summarization task. were the first to pose sentence summarization as a new target task for neural sequence-to-sequence learning. Several studies have used this task as one of the benchmarks of their neural sequence transduction methods @cite_8 @cite_21 @cite_22 . Some studies address other important phenomena that frequently occur in human-written summaries, such as copying from the source document @cite_31 @cite_3 . investigate a way to solve many important problems, such as capturing keywords or inputting multiple sentences. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_3",
"@cite_31",
"@cite_20",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2341718173",
"2176263492",
"1753482797",
"2191070669",
"2333611780",
"2964165364",
"2133564696",
"",
"2949335953",
"2949888546"
],
"abstract": [
"Automatic headline generation is an important research area within text summarization and sentence compression. Recently, neural headline generation models have been proposed to take advantage of well-trained neural networks in learning sentence representations and mapping sequence to sequence. Nevertheless, traditional neural network encoder utilizes maximum likelihood estimation for parameter optimization, which essentially constrains the expected training objective within word level instead of sentence level. Moreover, the performance of model prediction significantly relies on training data distribution. To overcome these drawbacks, we employ minimum risk training strategy in this paper, which directly optimizes model parameters with respect to evaluation metrics and statistically leads to significant improvements for headline generation. Experiment results show that our approach outperforms state-of-the-art systems on both English and Chinese headline generation tasks.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.",
"We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of state-of-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.",
"We describe an application of an encoder-decoder recurrent neural network with LSTM units and attention to generating headlines from the text of news articles. We find that the model is quite effective at concisely paraphrasing news articles. Furthermore, we study how the neural network decides which input words to pay attention to, and specifically we identify the function of the different neurons in a simplified attention mechanism. Interestingly, our simplified attention mechanism performs better than the more complex attention mechanism on a held out set of articles.",
"The problem of rare and unknown words is an important issue that can potentially influence the performance of many NLP systems, including both the traditional count-based and the deep learning models. We propose a novel way to deal with the rare and unseen words for the neural network models using attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each time-step, the decision of which softmax layer to use is adaptively made by an MLP which is conditioned on the context. We motivate our work from a psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. We observe improvements on two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset using our proposed model.",
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems. | As far as we know, @cite_43 (also see @cite_23 ) were the first to address an interval robust-hard problem. 
The authors introduced the interval data min-max regret traveling salesman problem, along with a mathematical formulation containing an exponential number of constraints. Moreover, three exact algorithms were presented and computationally compared on the proposed formulation: a , a and a logic-based Benders' decomposition @cite_61 algorithm. The latter algorithm has been widely used to solve robust optimization problems whose classical counterparts are polynomially solvable (see, e.g., @cite_32 @cite_50 @cite_40 ). Computational experiments showed that the logic-based Benders' algorithm outperforms the other exact algorithms for the interval data min-max regret traveling salesman problem. | {
"cite_N": [
"@cite_61",
"@cite_32",
"@cite_43",
"@cite_40",
"@cite_50",
"@cite_23"
],
"mid": [
"1508945008",
"2067999800",
"2035591608",
"2161704166",
"1968866800",
"2180809430"
],
"abstract": [
"Benders decomposition uses a strategy of \"learning from one's mistakes.\" The aim of this paper is to extend this strategy to a much larger class of problems. The key is to generalize the linear programming dual used in the classical method to an \"inference dual.\" Solution of the inference dual takes the form of a logical deduction that yields Benders cuts. The dual is therefore very different from other generalized duals that have been proposed. The approach is illustrated by working out the details for propositional satisfiability and 0-1 programming problems. Computational tests are carried out for the latter, but the most promising contribution of logic-based Benders may be to provide a framework for combining optimization and constraint programming methods.",
"Abstract The robust spanning tree problem is a variation, motivated by telecommunications applications, of the classic minimum spanning tree problem. In the robust spanning tree problem edge costs lie in an interval instead of having a fixed value. Interval numbers model uncertainty about the exact cost values. A robust spanning tree is a spanning tree whose total cost minimizes the maximum deviation from the optimal spanning tree over all realizations of the edge costs. This robustness concept is formalized in mathematical terms and is used to drive optimization. This paper describes a new exact method, based on Benders decomposition, for the robust spanning tree problem with interval data. Computational results highlight the efficiency of the new method, which is shown to be very fast on all the benchmarks considered, and in particular on those that were harder to solve for the methods previously known.",
"The traveling salesman problem is one of the most famous combinatorial optimization problems and has been intensively studied. Many extensions to the basic problem have also been proposed, with the aim of making the resulting mathematical models as realistic as possible. We present a new extension to the basic problem, where travel times are specified as a range of possible values. This model reflects the intrinsic difficulties of estimating travel times in reality. We apply the robust deviation criterion to drive optimization over the interval data problem so obtained. Some interesting theoretical properties of the new optimization problems are identified and discussed, together with a new mathematical formulation and some exact and heuristic algorithms. Computational experiments are finally presented.",
"Many real problems can be modelled as robust shortest path problems on digraphs with interval costs, where intervals represent uncertainty about real costs and a robust path is not too far from the shortest path for each possible configuration of the arc costs.",
"We consider the Assignment Problem with interval data, where it is assumed that only upper and lower bounds are known for each cost coefficient. It is required to find a minmax regret assignment. The problem is known to be strongly NP-hard. We present and compare computationally several exact and heuristic methods, including Benders decomposition, using CPLEX, a variable depth neighborhood local search, and two hybrid population-based heuristics. We report results of extensive computational experiments.",
"We study a version of the traveling salesman problem where travel times are specified as a range of possible values. This model reflects the difficulties to estimate travel times exactly in reality. Robustness concepts are used to drive optimization. We propose some efficient heuristic and preprocessing techniques. Computational experiments are presented."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems. | Later on, Pereira and Averbakh @cite_8 introduced the interval data min-max regret set covering problem. 
The authors proposed a mathematical formulation for the problem that is similar to the one proposed by @cite_43 for the interval data min-max regret traveling salesman problem. As in @cite_43 , the formulation has an exponential number of constraints. They also adapted the logic-based Benders' algorithm of @cite_32 @cite_43 @cite_40 to this problem and presented an extension of the method that aims at generating multiple Benders' cuts per iteration of the algorithm. Moreover, the work presents an exact approach that uses Benders' cuts in the context of a framework. Computational experiments showed that such approach, as well as the extended logic-based Benders' algorithm, outperforms the standard logic-based Benders' algorithm at solving the instances considered. This robust version of the set covering problem was also addressed in @cite_56 , where the authors propose scenario-based heuristics with path-relinking. | {
"cite_N": [
"@cite_8",
"@cite_32",
"@cite_56",
"@cite_43",
"@cite_40"
],
"mid": [
"1972280695",
"2067999800",
"2525653905",
"2035591608",
"2161704166"
],
"abstract": [
"We study the Set Covering Problem with uncertain costs. For each cost coefficient, only an interval estimate is known, and it is assumed that each coefficient can take on any value from the corresponding uncertainty interval, regardless of the values taken by other coefficients. It is required to find a robust deviation (also called minmax regret) solution. For this strongly NP-hard problem, we present and compare computationally three exact algorithms, where two of them are based on Benders decomposition and one uses Benders cuts in the context of a Branch-and-Cut approach, and several heuristic methods, including a scenario-based heuristic, a Genetic Algorithm, and a Hybrid Algorithm that uses a version of Benders decomposition within a Genetic Algorithm framework.",
"Abstract The robust spanning tree problem is a variation, motivated by telecommunications applications, of the classic minimum spanning tree problem. In the robust spanning tree problem edge costs lie in an interval instead of having a fixed value. Interval numbers model uncertainty about the exact cost values. A robust spanning tree is a spanning tree whose total cost minimizes the maximum deviation from the optimal spanning tree over all realizations of the edge costs. This robustness concept is formalized in mathematical terms and is used to drive optimization. This paper describes a new exact method, based on Benders decomposition, for the robust spanning tree problem with interval data. Computational results highlight the efficiency of the new method, which is shown to be very fast on all the benchmarks considered, and in particular on those that were harder to solve for the methods previously known.",
"In this study, we investigate the minmax regret Robust Weighted Set Covering problem with Interval Data (RSCP), which is the robust counterpart of the Weighted Set Covering Problem (WSCP) where uncertain data are modeled using interval data. RSCP is NP-Hard and can provide foundations for solving several minmax regret covering problems. Moreover, RSCP encloses several challenges in terms of computer science and mathematical formulations. The challenges involve the development of innovative algorithms since solving the RSCP for a unique scenario implies to solve an NP-Hard problem. The exact algorithms in the literature rely on decomposition algorithms and cutting plane procedures since the mathematical formulation of RSCP contains an exponential number of constraints. In this study, scenario-based heuristics are used to generate additional cuts in order to reduce the average optimality gaps on the instances not solved to optimality.",
"The traveling salesman problem is one of the most famous combinatorial optimization problems and has been intensively studied. Many extensions to the basic problem have also been proposed, with the aim of making the resulting mathematical models as realistic as possible. We present a new extension to the basic problem, where travel times are specified as a range of possible values. This model reflects the intrinsic difficulties of estimating travel times in reality. We apply the robust deviation criterion to drive optimization over the interval data problem so obtained. Some interesting theoretical properties of the new optimization problems are identified and discussed, together with a new mathematical formulation and some exact and heuristic algorithms. Computational experiments are finally presented.",
"Many real problems can be modelled as robust shortest path problems on digraphs with interval costs, where intervals represent uncertainty about real costs and a robust path is not too far from the shortest path for each possible configuration of the arc costs."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems. | A few works also deal with robust optimization versions of the 0-1 knapsack problem.
For instance, the studies @cite_26 and @cite_28 address a version of the problem where the uncertainty over each item profit is represented by a discrete set of possible values. In these works, the absolute robustness criterion is considered. In @cite_28 , the author proved that this version of the problem is strongly NP-hard when the number of possible scenarios is unbounded and pseudo-polynomially solvable for a bounded number of scenarios. @cite_26 also studied a min-max regret version of the problem that considers a discrete set of scenarios of item profits. They provided a pseudo-polynomial algorithm for solving the problem when the number of possible scenarios is bounded. When the number of scenarios is unbounded, the problem becomes strongly NP-hard and there is no approximation scheme for it @cite_38 . | {
"cite_N": [
"@cite_28",
"@cite_38",
"@cite_26"
],
"mid": [
"2076163541",
"2071335627",
"1963888335"
],
"abstract": [
"Given a set of items, a set of scenarios, and a knapsack of fixed capacity, a nonnegative weight is associated with each item; and a value is associated with each item under each scenario. The max-min Knapsack MNK problem is defined as filling the knapsack with a selected set of items so that the minimum total value gained under all scenarios is maximized. The MNK problem is a generalization of the conventional knapsack problem to situations with multiple scenarios. This extension significantly enlarges its scope of applications, especially in the application of recent robust optimization developments. In this paper, the MNK problem is shown to be strongly NP-hard for an unbounded number of scenarios and pseudopolynomially solvable for a bounded number of scenarios. Effective lower and upper bounds are generated by surrogate relaxation. The ratio of these two bounds is shown to be bounded by a constant for situations where the data range is limited to be within a fixed percentage from its mean. This result leads to an approximation algorithm for MNK in the special case. A branch-and-bound algorithm has been implemented to efficiently solve the MNK problem to optimality. Extensive computational results are presented.",
"This paper investigates, for the first time in the literature, the approximation of min–max (regret) versions of classical problems like shortest path, minimum spanning tree, and knapsack. For a constant number of scenarios, we establish fully polynomial-time approximation schemes for the min–max versions of these problems, using relationships between multi-objective and min–max optimization. Using dynamic programming and classical trimming techniques, we construct a fully polynomial-time approximation scheme for min–max regret shortest path. We also establish a fully polynomial-time approximation scheme for min–max regret spanning tree and prove that min–max regret knapsack is not at all approximable. For a non-constant number of scenarios, in which case min–max and min–max regret versions of polynomial-time solvable problems usually become strongly NP-hard, non-approximability results are provided for min–max (regret) versions of shortest path and spanning tree.",
"Preface. 1. Approaches to Handle Uncertainty In Decision Making. 2. A Robust Discrete Optimization Framework. 3. Computational Complexity Results of Robust Discrete Optimization Problems. 4. Easily Solvable Cases of Robust Discrete Optimization Problems. 5. Algorithmic Developments for Difficult Robust Discrete Optimization Problems. 6. Robust 1-Median Location Problems: Dynamic Aspects and Uncertainty. 7. Robust Scheduling Problems. 8. Robust Uncapacitated Network Design and International Sourcing Problems. 9. Robust Discrete Optimization: Past Successes and Future Challenges."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems.
| More recently, Feizollahi and Averbakh @cite_45 introduced the min-max regret quadratic assignment problem with interval flows, which is a generalization of the classical quadratic assignment problem in which material flows between facilities are uncertain and vary in given intervals. Although quadratic, the problem presents a structure that is very similar to the robust versions of combinatorial problems we address in this work. The authors proposed two mathematical formulations and adapted the logic-based Benders' algorithm of @cite_32 @cite_43 @cite_40 to solve them through the linearization of the corresponding master problems. They also developed a hybrid approach which combines Benders' decomposition with heuristics. | {
"cite_N": [
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_32"
],
"mid": [
"2035591608",
"2161704166",
"2095153636",
"2067999800"
],
"abstract": [
"The traveling salesman problem is one of the most famous combinatorial optimization problems and has been intensively studied. Many extensions to the basic problem have also been proposed, with the aim of making the resulting mathematical models as realistic as possible. We present a new extension to the basic problem, where travel times are specified as a range of possible values. This model reflects the intrinsic difficulties of estimating travel times in reality. We apply the robust deviation criterion to drive optimization over the interval data problem so obtained. Some interesting theoretical properties of the new optimization problems are identified and discussed, together with a new mathematical formulation and some exact and heuristic algorithms. Computational experiments are finally presented.",
"Many real problems can be modelled as robust shortest path problems on digraphs with interval costs, where intervals represent uncertainty about real costs and a robust path is not too far from the shortest path for each possible configuration of the arc costs.",
"We consider a generalization of the classical quadratic assignment problem, where material flows between facilities are uncertain, and only upper and lower bounds are known for each flow. The objective is to find a minmax regret solution. We present an exact Benders decomposition algorithm based on two developed mathematical programming formulations and on the developed linearizations of master problems, and a heuristic based on using tabu search in the context of a Benders decomposition framework. Then, we develop a hybrid Benders decomposition approach that allows us to combine the speed of heuristics with the rigor and precision of the exact Benders method. We discuss the results of extensive computational experiments.",
"The robust spanning tree problem is a variation, motivated by telecommunications applications, of the classic minimum spanning tree problem. In the robust spanning tree problem edge costs lie in an interval instead of having a fixed value. Interval numbers model uncertainty about the exact cost values. A robust spanning tree is a spanning tree whose total cost minimizes the maximum deviation from the optimal spanning tree over all realizations of the edge costs. This robustness concept is formalized in mathematical terms and is used to drive optimization. This paper describes a new exact method, based on Benders decomposition, for the robust spanning tree problem with interval data. Computational results highlight the efficiency of the new method, which is shown to be very fast on all the benchmarks considered, and in particular on those that were harder to solve for the methods previously known."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems.
| Regarding heuristics for interval robust-hard problems, a simple and efficient scenario-based procedure to tackle interval min-max regret problems in general was proposed in @cite_46 and successfully applied in several works (see, e.g., @cite_23 @cite_46 @cite_12 ). The procedure, called Algorithm Mean Upper (AMU), consists of solving the corresponding classical optimization problem in two specific scenarios: the so-called upper scenario, where the cost associated with each binary variable is set to its upper bound, and the mean scenario, where the costs of the binary variables are set to the mean values of the bounds of the respective cost intervals. With this heuristic, one can obtain a feasible solution for any interval min-max regret optimization problem (including interval robust-hard problems) with the same worst-case asymptotic complexity of solving an instance of the classical optimization problem counterpart. Moreover, it is proven in @cite_12 that this algorithm is 2-approximative for any interval min-max regret optimization problem. Notice that AMU does not run in polynomial time for interval robust-hard problems, unless P = NP. | {
"cite_N": [
"@cite_46",
"@cite_12",
"@cite_23"
],
"mid": [
"2075772595",
"2051407192",
"2180809430"
],
"abstract": [
"The use of \"optimality\" as an operational research criterion is insufficiently discriminating. Ample evidence exists that for many problems simple optimization (particularly profit maximization) does not represent the aims of management. In this paper we discuss the nature of the problem situations for which alternative decision criteria are more appropriate. In particular the structure of strategic planning problems is analysed. The provisional commitment involved in a plan (in contrast to the irrevocable commitment of a decision) leads to the development of a particular criterion, robustness—a measure of the flexibility which an initial decision of a plan maintains for achieving near-optimal states in conditions of uncertainty. The robustness concept is developed through the case study of a sequential factory location problem.",
"The general problem of minimizing the maximal regret in combinatorial optimization problems with interval data is considered. In many cases, the minmax regret versions of the classical, polynomially solvable, combinatorial optimization problems become NP-hard and no approximation algorithms for them have been known. Our main result is a polynomial time approximation algorithm with a performance ratio of 2 for this class of problems.",
"We study a version of the traveling salesman problem where travel times are specified as a range of possible values. This model reflects the difficulties to estimate travel times exactly in reality. Robustness concepts are used to drive optimization. We propose some efficient heuristic and preprocessing techniques. Computational experiments are presented."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems. | The study of approximative procedures for interval min-max regret problems in general is a relatively unexplored field, and not much is known.
Conde @cite_14 proved that AMU gives a 2-approximation for an even broader class of min-max regret problems. Precisely, while the works of @cite_46 @cite_12 only address combinatorial (finite and discrete) min-max regret optimization models, Conde @cite_14 extends the 2-approximation result for models with compact constraint sets in general, including continuous ones. Recently, some research has been conducted to refine the constant factor of the aforementioned approximation. For example, in @cite_44 , the author attempts to tighten this factor of 2 through the resolution of a robust optimization problem over a reduced uncertainty cost set. Moreover, in @cite_9 , the authors introduced a new bound that gives an instance dependent performance guarantee of the solution for interval data min-max regret versions of combinatorial problems. They show that the new performance ratio is at most 2, and the bound is successfully applied to solve the interval data min-max regret shortest path problem @cite_26 within a framework. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_9",
"@cite_44",
"@cite_46",
"@cite_12"
],
"mid": [
"2001532848",
"1963888335",
"2089903952",
"1966509452",
"2075772595",
"2051407192"
],
"abstract": [
"In this note, a 2-approximation method for minmax regret optimization problems is developed which extends the work of Kasperski and Zielinski [A. Kasperski, P. Zielinski, An approximation algorithm for interval data minmax regret combinatorial optimization problems, Information Processing Letters 97 (2006) 177-180] from finite to compact constraint sets.",
"Preface. 1. Approaches to Handle Uncertainty In Decision Making. 2. A Robust Discrete Optimization Framework. 3. Computational Complexity Results of Robust Discrete Optimization Problems. 4. Easily Solvable Cases of Robust Discrete Optimization Problems. 5. Algorithmic Developments for Difficult Robust Discrete Optimization Problems. 6. Robust 1-Median Location Problems: Dynamic Aspects and Uncertainty. 7. Robust Scheduling Problems. 8. Robust Uncapacitated Network Design and International Sourcing Problems. 9. Robust Discrete Optimization: Past Successes and Future Challenges.",
"Minmax regret optimization aims at finding robust solutions that perform best in the worst-case, compared to the respective optimum objective value in each scenario. Even for simple uncertainty sets like boxes, most polynomially solvable optimization problems have strongly NP-complete minmax regret counterparts. Thus, heuristics with performance guarantees can potentially be of great value, but only few such guarantees exist.",
"In order to find a robust solution under an unknown linear cost function it will be considered the minmax regret criterion. It is assumed the vector of costs can take on values from a given uncertainty set. The resulting optimization model has been extensively analyzed in the literature when the uncertain costs are modeled by closed intervals. Unfortunately, except for rare applications, this problem has NP-hard complexity which has led to the appearance of approximated methods seeking for good solutions in a short computational time.",
"The use of \"optimality\" as an operational research criterion is insufficiently discriminating. Ample evidence exists that for many problems simple optimization (particularly profit maximization) does not represent the aims of management. In this paper we discuss the nature of the problem situations for which alternative decision criteria are more appropriate. In particular the structure of strategic planning problems is analysed. The provisional commitment involved in a plan (in contrast to the irrevocable commitment of a decision) leads to the development of a particular criterion, robustness—a measure of the flexibility which an initial decision of a plan maintains for achieving near-optimal states in conditions of uncertainty. The robustness concept is developed through the case study of a sequential factory location problem.",
"The general problem of minimizing the maximal regret in combinatorial optimization problems with interval data is considered. In many cases, the minmax regret versions of the classical, polynomially solvable, combinatorial optimization problems become NP-hard and no approximation algorithms for them have been known. Our main result is a polynomial time approximation algorithm with a performance ratio of 2 for this class of problems."
]
} |
1609.09179 | 2526392488 | A heuristic framework for a class of robust optimization problems is proposed. The heuristic framework explores dual information. The heuristic is successfully applied to solve two robust optimization problems. The heuristic is able to outperform a widely used 2-approximation procedure. A robust optimization version of the restricted shortest path problem is introduced. This work deals with a class of problems under interval data uncertainty, namely interval robust-hard problems, composed of interval data min-max regret generalizations of classical NP-hard combinatorial problems modeled as 0-1 integer linear programming problems. These problems are more challenging than other interval data min-max regret problems, as solely computing the cost of any feasible solution requires solving an instance of an NP-hard problem. The state-of-the-art exact algorithms in the literature are based on the generation of a possibly exponential number of cuts. As each cut separation involves the resolution of an NP-hard classical optimization problem, the size of the instances that can be solved efficiently is relatively small. To smooth this issue, we present a modeling technique for interval robust-hard problems in the context of a heuristic framework. The heuristic obtains feasible solutions by exploring dual information of a linearly relaxed model associated with the classical optimization problem counterpart. Computational experiments for interval data min-max regret versions of the restricted shortest path problem and the set covering problem show that our heuristic is able to find optimal or near-optimal solutions and also improves the primal bounds obtained by a state-of-the-art exact algorithm and a 2-approximation procedure for interval data min-max regret problems. | Kasperski and Zieliński @cite_31 also developed a Fully Polynomial Time Approximation Scheme (FPTAS) for interval min-max regret problems.
However, this FPTAS relies on two very restrictive conditions: (i) the problem tackled must present a pseudopolynomial algorithm, and (ii) the corresponding classical counterpart has to be polynomially solvable. Condition (ii) comes from the fact that the aforementioned FPTAS uses AMU within its framework. Notice that, from (ii), this FPTAS does not naturally hold for interval robust-hard problems. | {
"cite_N": [
"@cite_31"
],
"mid": [
"1988146710"
],
"abstract": [
"The general problem of minimizing the maximal regret in combinatorial optimization problems with interval costs is considered. Some results are proven that allow to obtain a fully polynomial time approximation scheme (FPTAS) for the problem under the assumption that a pseudopolynomial algorithm is given."
]
} |
1609.09058 | 2949822910 | Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction of human faces is given at a reconstruction error < .004, cars at .0022, human bodies at .022, and highly-deformable flags at an error of .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple of hours and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with a small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points). | These works assume a low-dimensional embedding of the underlying shape space, and represent each 3D shape as a linear combination of shape bases. For example, Principal Component Analysis (PCA) and other linear methods are employed to obtain the low-dimensional shape bases @cite_32 @cite_0 . A major limitation of linear models is that when the true (but unknown) mapping function is nonlinear, the accuracy drops significantly @cite_38 . Therefore, these methods cannot efficiently handle highly deformable or articulated shapes. | {
"cite_N": [
"@cite_0",
"@cite_38",
"@cite_32"
],
"mid": [
"",
"2154752133",
"2091849600"
],
"abstract": [
"",
"Non-rigid structure from motion (NRSFM) is a difficult, underconstrained problem in computer vision. The standard approach in NRSFM constrains 3D shape deformation using a linear combination of K basis shapes; the solution is then obtained as the low-rank factorization of an input observation matrix. An important but overlooked problem with this approach is that non-linear deformations are often observed; these deformations lead to a weakened low-rank constraint due to the need to use additional basis shapes to linearly model points that move along curves. Here, we demonstrate how the kernel trick can be applied in standard NRSFM. As a result, we model complex, deformable 3D shapes as the outputs of a non-linear mapping whose inputs are points within a low-dimensional shape space. This approach is flexible and can use different kernels to build different non-linear models. Using the kernel trick, our model complements the low-rank constraint by capturing non-linear relationships in the shape coefficients of the linear model. The net effect can be seen as using non-linear dimensionality reduction to further compress the (shape) space of possible solutions.",
"We address the classical computer vision problems of rigid and nonrigid structure from motion (SFM) with occlusion. We assume that the columns of the input observation matrix W describe smooth 2D point trajectories over time. We then derive a family of efficient methods that estimate the column space of W using compact parameterizations in the Discrete Cosine Transform (DCT) domain. Our methods tolerate high percentages of missing data and incorporate new models for the smooth time trajectories of 2D-points, affine and weak-perspective cameras, and 3D deformable shape. We solve a rigid SFM problem by estimating the smooth time trajectory of a single camera moving around the structure of interest. By considering a weak-perspective camera model from the outset, we directly compute euclidean 3D shape reconstructions without requiring postprocessing steps such as euclidean upgrade and bundle adjustment. Our results on real SFM data sets with high percentages of missing data compared positively to those in the literature. In nonrigid SFM, we propose a novel 3D shape trajectory approach that solves for the deformable structure as the smooth time trajectory of a single point in a linear shape space. A key result shows that, compared to state-of-the-art algorithms, our nonrigid SFM method can better model complex articulated deformation with higher frequency DCT components while still maintaining the low-rank factorization constraint. Finally, we also offer an approach for nonrigid SFM when W is presented with missing data."
]
} |
1609.08777 | 2524363614 | We present a neural network architecture to predict a point in color space from the sequence of characters in the color's name. Using large-scale color–name pairs obtained from an online color design forum, we evaluate our model on a "color Turing test" and find that, given a name, the colors predicted by our model are preferred by annotators to color names created by humans. Our datasets and demo system are available online at this http URL. | Color is one of the lowest-level visual signals playing an important role in cognition @cite_0 and behavior @cite_6 @cite_28 . It plays a role in human object recognition: to name an object, we first need to encode visual information such as shape and surface information including color and texture. Given a visual encoding, we search our memory for a structural, semantic and phonological description @cite_29 . Adding color information to shape significantly improves naming accuracy and speeds correct response times @cite_12 . | {
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_6",
"@cite_0",
"@cite_12"
],
"mid": [
"1983285138",
"2064341675",
"",
"2018544594",
"2010141198"
],
"abstract": [
"Recent research has shown that a two second glimpse of color can have an important influence on affect, cognition, and behavior. The present research examined whether perceiving color is necessary to produce an effect on psychological functioning or whether the mere act of processing a color word might be sufficient. Specifically, four experiments tested the hypothesis that processing the word red undermines intellectual performance, much like actually perceiving the color red. Supportive data were obtained with three different types of subtle manipulation, with three different types of control words, and on two different types of IQ test performance. Worry, but not mood or general arousal, was shown to mediate the semantic red effect, which appears to take place outside of individuals’ awareness. The theoretical and practical implications of the results are discussed.",
"To name an object, we need both to recognize it and to access the associated phonological form, and phonological retrieval itself may be constrained by aspects of the visual recognition process. This paper reviews evidence for such constraints, drawing on data from experimental psychology, neuropsychology, functional imaging, and computational modelling. Data on picture identification in normal observers demonstrate that the speed of name retrieval processes differs for natural objects and artifacts, due at least in part to differences in visual similarity between exemplars within these categories. Also, effects of variables on early and late stages of object identification combine in an interactive rather than an additive manner, consistent with object processing stages operating in a continuous rather than a discrete manner. Neuropsychological evidence supports this proposal, demonstrating that subtle perceptual deficits can produce naming problems, even when there is good access to associated semantic knowledge. Functional activation studies further show increased activity in visual processing areas when conditions stress object naming relative to the recognition of familiar object structures. These studies indicate that object naming is based on a series of continuous processing stages and that naming involves increased visual processing relative to recognition tasks. The data can be modelled within an interactive activation and competition framework.",
"",
"Does color improve object recognition? If so, is the improvement greater for images with low spatial resolution in which there is less shape information? Do people with low visual acuity benefit more from color? Three experiments measured reaction time (RT) and accuracy for naming food objects displayed in 4 types of images: gray scale or color, and high or low spatial resolution (produced by blur). Normally sighted Ss had faster RTs with color, but the improvement was not significantly greater for images with low spatial resolution. Low vision subjects were also faster with color, but the difference did not depend significantly on acuity. In 2 additional experiments, it was found that the faster RTs for color stimuli were related to objects' prototypicality but not to their color diagnosticity. It was concluded that color does improve object recognition, and the mechanism is probably sensory rather than cognitive in origin.",
"Theories of object recognition differ to the extent that they consider object representations as being mediated only by the shape of the object, or shape and surface details, if surface details are part of the representation. In particular, it has been suggested that color information may be helpful at recognizing objects only in very special cases, but not during basic-level object recognition in good viewing conditions. In this study, we collected normative data (naming agreement, familiarity, complexity, and imagery judgments) for Snodgrass and Vanderwart's object database of 260 black-and-white line drawings, and then compared the data to exactly the same shapes but with added gray-level texture and surface details (set 2), and color (set 3). Naming latencies were also recorded. Whereas the addition of texture and shading without color only slightly improved naming agreement scores for the objects, the addition of color information unambiguously improved naming accuracy and speeded correct response times. As shown in previous studies, the advantage provided by color was larger for objects with a diagnostic color, and structurally similar shapes, such as fruits and vegetables, but was also observed for man-made objects with and without a single diagnostic color. These observations show that basic-level 'everyday' object recognition in normal conditions is facilitated by the presence of color information, and support a 'shape + surface' model of object recognition, for which color is an integral part of the object representation. In addition, the new stimuli (sets 2 and 3) and the corresponding normative data provide valuable materials for a wide range of experimental and clinical studies of object recognition."
]
} |
1609.08777 | 2524363614 | We present a neural network architecture to predict a point in color space from the sequence of characters in the color's name. Using large scale color--name pairs obtained from an online color design forum, we evaluate our model on a "color Turing test" and find that, given a name, the colors predicted by our model are preferred by annotators to color names created by humans. Our datasets and demo system are available online at this http URL. | Colors and their names have some association in our cognition. The Stroop effect is a well-known example showing interference of colors and color terms: when we see a color term printed in a different color (e.g., the word "red" printed in blue ink), it takes us longer to name the word, and we are more prone to naming errors than when the ink matches the word (e.g., "blue" printed in blue ink) @cite_16 . | {
"cite_N": [
"@cite_16"
],
"mid": [
"2164335769"
],
"abstract": [
"Stroop effects might be due to differences in stimulus-response compatibility (SRC) and or to differences in stimulus-stimulus compatibility (SSC). Recent evidence for the role of SSC is inconclusive, because there were no controls for effects of SRC that are based on short-term associations between stimuli and responses (i.e., associations set up as the result of task instructions). In two experiments, SRC effects were controlled for. Regardless of whether the irrelevant and the relevant stimulus features were separated (Experiment 1) or integrated in one stimulus (Experiment 2), the results revealed an effect of SSC and an effect of SRC that was based on short-term associations. The results thus confirm that both processes at the level of encoding and processes at the level of response selection contribute to the Stroop effect."
]
} |
1609.08777 | 2524363614 | We present a neural network architecture to predict a point in color space from the sequence of characters in the color's name. Using large scale color--name pairs obtained from an online color design forum, we evaluate our model on a "color Turing test" and find that, given a name, the colors predicted by our model are preferred by annotators to color names created by humans. Our datasets and demo system are available online at this http URL. | Recent evidence suggests that colors and words are associated in the brain. The brain uses different regions to perceive various modalities, but processing a color word activates the same brain region as the color it denotes @cite_21 @cite_8 . | {
"cite_N": [
"@cite_21",
"@cite_8"
],
"mid": [
"2043464897",
"2072657665"
],
"abstract": [
"Abstract In this study, we investigated the spatio-temporal patterns of category-specific cortical activation elicited by the visual presentation of words whose meaning relates to a color or to a visual form or shape. We recorded the event-related potentials (ERPs) of ten healthy, right-handed volunteers while they passively read words presented tachistoscopically. As early as 150 ms after stimulus onset, the ERPs revealed significant neurophysiological differences between words and strings of hash marks. Around 200 ms after stimulus onset, we found significant differences in the ERPs elicited by color- and form-related words. We used minimum norm current estimates to investigate the spatial location of these differences. This revealed that, at the 150 ms peak, the activation advantage of words over sequences of hash marks was located in a left posterior area, proximal to what has previously been called the visual word form area. At the 200 ms peak, the advantage of words related to colors over words related to forms seemed to reside in temporal cortical areas, whereas the form-related words elicited greater activation in frontal areas than color-related words. These results provide evidence for early access to detailed category-specific representations of word meaning, with subtle differences in meaning being reflected in the activation of different cortical areas, as early as 200 ms after stimulus presentation. In line with previous studies, these differences can be related to the areas involved in the conceptual processing of sensory (visual) and action-related information.",
"Functional neuroimaging research has demonstrated that retrieving information about object-associated colors activates the left fusiform gyrus in posterior temporal cortex. Although regions near the fusiform have previously been implicated in color perception, it remains unclear whether color knowledge retrieval actually activates the color perception system. Evidence to this effect would be particularly strong if color perception cortex was activated by color knowledge retrieval triggered strictly with linguistic stimuli. To address this question, subjects performed two tasks while undergoing fMRI. First, subjects performed a property verification task using only words to assess conceptual knowledge. On each trial, subjects verified whether a named color or motor property was true of a named object (e.g., TAXI-yellow, HAIR-combed). Next, subjects performed a color perception task. A region of the left fusiform gyrus that was highly responsive during color perception also showed greater activity for retrieving color than motor property knowledge. These data provide the first evidence for a direct overlap in the neural bases of color perception and stored information about object-associated color, and they significantly add to accumulating evidence that conceptual knowledge is grounded in the brain's modality-specific systems."
]
} |
1609.08777 | 2524363614 | We present a neural network architecture to predict a point in color space from the sequence of characters in the color's name. Using large scale color--name pairs obtained from an online color design forum, we evaluate our model on a "color Turing test" and find that, given a name, the colors predicted by our model are preferred by annotators to color names created by humans. Our datasets and demo system are available online at this http URL. | Closer to NLP, the relationship between visual stimuli and their linguistic descriptions by humans has been explored extensively through automatic text generation from images @cite_23 @cite_14 @cite_30 . Color association with word semantics has also been investigated in several previous papers @cite_22 @cite_15 @cite_10 @cite_19 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_22",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_10"
],
"mid": [
"2950178297",
"2951805548",
"2949673373",
"224572144",
"2171361956",
"2137167595",
"2250318122"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Since many real-world concepts are associated with colour, for example danger with red, linguistic information is often complimented with the use of appropriate colours in information visualization and product marketing. Yet, there is no comprehensive resource that captures concept-colour associations. We present a method to create a large word-colour association lexicon by crowdsourcing. We focus especially on abstract concepts and emotions to show that even though they cannot be physically visualized, they too tend to have strong colour associations. Finally, we show how word-colour associations manifest themselves in language, and quantify usefulness of co-occurrence and polarity cues in automatically detecting colour associations.",
"Natural language meanings allow speakers to encode important real-world distinctions, but corpora of grounded language use also reveal that speakers categorize the world in different ways and describe situations with different terminology. To learn meanings from data, we therefore need to link underlying representations of meaning to models of speaker judgment and speaker choice. This paper describes a new approach to this problem: we model variability through uncertainty in categorization boundaries and distributions over preferred vocabulary. We apply the approach to a large data set of color descriptions, where statistical evaluation documents its accuracy. The results are available as a Lexicon of Uncertain Color Standards (LUX), which supports future efforts in grounded language understanding and generation by probabilistically mapping 829 English color descriptions to potentially context-sensitive regions in HSV color space.",
"We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and/or syntactic trees. While we focus on image-text modelling, our algorithms can be easily applied to other modalities such as audio.",
"Our ability to reliably name colors provides a link between visual perception and symbolic cognition. In this paper, we investigate how a statistical model of color naming can enable user interfaces to meaningfully mimic this link and support novel interactions. We present a method for constructing a probabilistic model of color naming from a large, unconstrained set of human color name judgments. We describe how the model can be used to map between colors and names and define metrics for color saliency (how reliably a color is named) and color name distance (the similarity between colors based on naming patterns). We then present a series of applications that demonstrate how color naming models can enhance graphical interfaces: a color dictionary & thesaurus, name-based pixel selection methods for image editing, and evaluation aids for color palette design.",
"We present a model for generating pathvalued interpretations of natural language text. Our model encodes a map from natural language descriptions to paths, mediated by segmentation variables which break the language into a discrete set of events, and alignment variables which reorder those events. Within an event, lexical weights capture the contribution of each word to the aligned path segment. We demonstrate the applicability of our model on three diverse tasks: a new color description task, a new financial news task and an established direction-following task. On all three, the model outperforms strong baselines, and on a hard variant of the direction-following task it achieves results close to the state-of-the-art system described in Vogel and Jurafsky (2010)."
]
} |
1609.09019 | 2525851358 | Natural language processing techniques are increasingly applied to identify social trends and predict behavior based on large text collections. Existing methods typically rely on surface lexical and syntactic information. Yet, research in psychology shows that patterns of human conceptualisation, such as metaphorical framing, are reliable predictors of human expectations and decisions. In this paper, we present a method to learn patterns of metaphorical framing from large text collections, using statistical techniques. We apply the method to data in three different languages and evaluate the identified patterns, demonstrating their psychological validity. | NLP techniques have been successfully used for a number of tasks in political science, including automatically estimating the influence of particular politicians in the US senate @cite_56 , identifying lexical features that differentiate political rhetoric of opposing parties @cite_10 , predicting voting patterns of politicians based on their use of language @cite_13 , and predicting political affiliation of Twitter users @cite_44 . Other approaches @cite_39 @cite_17 @cite_0 @cite_16 @cite_57 detected the contrasting perspectives on a set of topics attested in distinct corpora using LDA topic modelling. Some works focused on subjectivity detection, identifying opinion, evaluation, and speculation in text @cite_22 . There is a large body of work on identifying such opinions and attributing it to specific people @cite_27 @cite_4 . While successful in their tasks, these methods rely on surface linguistic cues, rather than generalising patterns of human association and conceptualisation, which limits the information they discover to that explicitly stated. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_10",
"@cite_39",
"@cite_44",
"@cite_56",
"@cite_0",
"@cite_57",
"@cite_27",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2167053715",
"",
"",
"",
"",
"1694883724",
"2023907142",
"1601788506",
"2087211905",
"",
"2145767445",
""
],
"abstract": [
"The rapid and continuous growth of social networking sites has led to the emergence of many communities of communicating groups. Many of these groups discuss ideological and political topics. It is not uncommon that the participants in such discussions split into two or more subgroups. The members of each subgroup share the same opinion toward the discussion topic and are more likely to agree with members of the same subgroup and disagree with members from opposing subgroups. In this paper, we propose an unsupervised approach for automatically detecting discussant subgroups in online communities. We analyze the text exchanged between the participants of a discussion to identify the attitude they carry toward each other and towards the various aspects of the discussion topic. We use attitude predictions to construct an attitude vector for each discussant. We use clustering techniques to cluster these vectors and, hence, determine the subgroup membership of each participant. We compare our methods to text clustering and other baselines, and show that our method achieves promising results.",
"",
"",
"",
"",
"We introduce a technique for identifying the most salient participants in a discussion. Our method, MavenRank is based on lexical centrality: a random walk is performed on a graph in which each node is a participant in the discussion and an edge links two participants who use similar rhetoric. As a test, we used MavenRank to identify the most influential members of the US Senate using data from the US Congressional Record and used committee ranking to evaluate the output. Our results show that MavenRank scores are largely driven by committee status in most topics, but can capture speaker centrality in topics where speeches are used to indicate ideological position instead of influence legislation.",
"This paper presents a novel opinion mining research problem, which is called Contrastive Opinion Modeling (COM). Given any query topic and a set of text collections from multiple perspectives, the task of COM is to present the opinions of the individual perspectives on the topic, and furthermore to quantify their difference. This general problem subsumes many interesting applications, including opinion summarization and forecasting, government intelligence and cross-cultural studies. We propose a novel unsupervised topic model for contrastive opinion modeling. It simulates the generative process of how opinion words occur in the documents of different collections. The ad hoc opinion search process can be efficiently accomplished based on the learned parameters in the model. The difference of perspectives can be quantified in a principled way by the Jensen-Shannon divergence among the individual topic-opinion distributions. An extensive set of experiments have been conducted to evaluate the proposed model on two datasets in the political domain: 1) statement records of U.S. senators; 2) world news reports from three representative media in U.S., China and India, respectively. The experimental results with both qualitative and quantitative analysis have shown the effectiveness of the proposed model.",
"We explore Debatepedia, a communityauthored encyclopedia of sociopolitical debates, as evidence for inferring a lowdimensional, human-interpretable representation in the domain of issues and positions. We introduce a generative model positing latent topics and cross-cutting positions that gives special treatment to person mentions and opinion words. We evaluate the resulting representation’s usefulness in attaching opinionated documents to arguments and its consistency with human judgments about positions.",
"The wikileaks documents or the economic crises in Ireland and Portugal are some of the controversial topics being played on the news everyday. Each of these topics has many different aspects, and there is no absolute, simple truth in answering questions such as: should the EU guarantee the financial stability of each member country, or should the countries themselves be solely responsible? To understand the landscape of opinions, it would be helpful to know which politician or other stakeholder takes which position - support or opposition - on these aspects of controversial topics. In this paper, we describe our system, named OpinioNetIt (pronounced similar to \"opinionated\"), which aims to automatically derive a map of the opinions-people network from news and other Web documents. We build this network as follows. First, we make use of a small number of generic seeds to identify controversial phrases from text. These phrases are then clustered and organized into a hierarchy of topics. Second, opinion holders are identified for each topic and their opinions (either supporting or opposing the topic) are extracted. Third, the known topics and people are used to construct a lexicon phrases indicating support or opposition. Finally, the lexicon is uses to identify more opinion holders, opinions and topics. Our system currently consists of approximately 30000 person-opinion-topic triples. Our evaluation shows that OpinioNetIt has high accuracy.",
"",
"We develop several predictive models linking legislative sentiment to legislative text. Our models, which draw on ideas from ideal point estimation and topic models, predict voting patterns based on the contents of bills and infer the political leanings of legislators. With supervised topics, we provide an exploratory window into how the language of the law is correlated with political support. We also derive approximate posterior inference algorithms based on variational methods. Across 12 years of legislative data, we predict specific voting patterns with high accuracy.",
""
]
} |
1609.08811 | 2949817159 | Micro Air Vehicles (MAVs) will unlock their true potential once they can operate in groups. To this end, it is essential for them to estimate on-board the relative location of their neighbors. The challenge lies in limiting the mass and processing burden needed to enable this. We developed a relative localization method that only requires the MAVs to communicate via their wireless transceiver. Communication allows the exchange of on-board states (velocity, height, and orientation), while the signal-strength provides range data. These quantities are fused to provide a full relative location estimate. We used our method to tackle the problem of collision avoidance in tight areas. The system was tested with a team of AR.Drones flying in a 4mx4m area and with miniature drones of 50g in a 2mx2m area. The MAVs were able to track their relative positions and fly several minutes without collisions. Our implementation used Bluetooth to communicate between the drones. This featured significant noise and disturbances in signal-strength, which worsened as more drones were added. Simulation analysis suggests that results can improve with a more suitable transceiver module. | MAVs should be designed to be as efficient as possible to decrease mass and maximize flight-time. This means that they are often limited in sensing, computational power, and payload capabilities. Collision avoidance is important for mission success but it must not exhaust the already limited resources, which should remain free to pursue the real mission. Arguably, the simplest method to avoid collisions is to have the MAVs fly at different heights. However, experiments by @cite_8 have shown that MAV multi-rotors flying on top of each other are subject to considerable aerodynamic disturbances. Furthermore, height sensor (e.g. sonar) readings could be disturbed. Based on this limitation, we conclude that lateral evasive maneuvers are needed, and these require relative location estimates between MAVs. | {
"cite_N": [
"@cite_8"
],
"mid": [
"194336506"
],
"abstract": [
"The dynamic response and performance of a micro UAV is greatly influenced by its aerodynamics which in turn is affected by the interactions with features in the environment in close proximity. In the paper we address the modeling of quadrotor robots in different flight conditions that include relative wind velocity and proximity to the ground, the ceiling and other robots. We discuss the incorporation of these models into controllers and the use of a swarm of robots to map features in the environment from variations in the aerodynamics."
]
} |
1609.08811 | 2949817159 | Micro Air Vehicles (MAVs) will unlock their true potential once they can operate in groups. To this end, it is essential for them to estimate on-board the relative location of their neighbors. The challenge lies in limiting the mass and processing burden needed to enable this. We developed a relative localization method that only requires the MAVs to communicate via their wireless transceiver. Communication allows the exchange of on-board states (velocity, height, and orientation), while the signal-strength provides range data. These quantities are fused to provide a full relative location estimate. We used our method to tackle the problem of collision avoidance in tight areas. The system was tested with a team of AR.Drones flying in a 4mx4m area and with miniature drones of 50g in a 2mx2m area. The MAVs were able to track their relative positions and fly several minutes without collisions. Our implementation used Bluetooth to communicate between the drones. This featured significant noise and disturbances in signal-strength, which worsened as more drones were added. Simulation analysis suggests that results can improve with a more suitable transceiver module. | @cite_2 proposed IR sensors. If arranged in an array, these enables an accurate measure or relative bearing between two MAVs. Unfortunately, because IR is uni-directional, several sensors are needed to each face in a direction. This is not easily exportable to smaller MAVs. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2152806738"
],
"abstract": [
"Swarms of indoor flying robots are promising for many applications, including searching tasks in collapsing buildings, or mobile surveillance and monitoring tasks in complex man-made structures. For tasks that employ several flying robots, spatial-coordination between robots is essential for achieving collective operation. However, there is a lack of on-board sensors capable of sensing the highly-dynamic 3-D trajectories required for spatial-coordination of small indoor flying robots. Existing sensing methods typically utilise complex SLAM based approaches, or absolute positioning obtained from off-board tracking sensors, which is not practical for real-world operation. This paper presents an adaptable, embedded infrared based 3-D relative positioning sensor that also operates as a proximity sensor, which is designed to enable inter-robot spatial-coordination and goal-directed flight. This practical approach is robust to varying indoor environmental illumination conditions and is computationally simple."
]
} |
1609.08811 | 2949817159 | Micro Air Vehicles (MAVs) will unlock their true potential once they can operate in groups. To this end, it is essential for them to estimate on-board the relative location of their neighbors. The challenge lies in limiting the mass and processing burden needed to enable this. We developed a relative localization method that only requires the MAVs to communicate via their wireless transceiver. Communication allows the exchange of on-board states (velocity, height, and orientation), while the signal-strength provides range data. These quantities are fused to provide a full relative location estimate. We used our method to tackle the problem of collision avoidance in tight areas. The system was tested with a team of AR.Drones flying in a 4mx4m area and with miniature drones of 50g in a 2mx2m area. The MAVs were able to track their relative positions and fly several minutes without collisions. Our implementation used Bluetooth to communicate between the drones. This featured significant noise and disturbances in signal-strength, which worsened as more drones were added. Simulation analysis suggests that results can improve with a more suitable transceiver module. | Transceivers can be exploited for both ranging and data-exchange. Based on this, we developed a fusion filter that can determine relative location estimates via communicating on-board states between MAVs. To the best of our knowledge, the only instance of on-board relative localization using a wireless transceiver was recently brought forward by @cite_5 with UWB technology. However, they make use of one of the MAVs as a static beacon and their method relies on highly accurate distance measurements. Instead, we propose a method that complements possibly noisy distance measurements by communicating on-board states between MAVs. We then show how it can be used for indoor collision avoidance. 
We extensively validate this on real platforms as light as 50g that communicate with each other using Bluetooth, which is highly prone to noise and disturbances. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2122858524"
],
"abstract": [
"Autopilots for small UAVs are generally equipped with low fidelity sensors that make state estimation challenging. In addition, the sensor suite does not include units that measure angle-of-attack and side-slip angles. The achievable flight performance is directly related to the quality of the state estimates. Unfortunately, the computational resources on-board a small UAV are generally limited and preclude large state Kalman filters that estimate all of the states and sensor biases. In this chapter we describe simple models for the sensors typically found on-board small UAVs. We also describe a simple cascaded approach to state estimation that has been extensively flight tested using the Kestrel autopilot produced by Procerus Technologies. Our intention is to provide a tutorial of continuous-discrete Kalman filtering with application to state estimation for small UAVs. High fidelity estimates of the position, velocity, attitude, and angular rates are critical for successful guidance and control of intelligent UAVs. The achievable fidelity of the state estimates depends upon the quality of the sensors on-board the UAV. Unfortunately, high quality sensors are usually heavy and expensive. This is particularly true for sensors that directly measure the attitude of the UAV. In this chapter we focus on the problem of state estimation using light weight, inexpensive, low quality sensors. In doing so, our target platforms are small and micro air vehicles with limited payload capacity. In recent years, several autopilots for small UAVs have appeared on the commercial market. These include the Procerus Kestrel (4), the Cloudcap Piccolo (2), and the Micropilot MP2028 (3). Each of these autopilots uses the following sensors: • rate gyros, • accelerometers, • pressure sensors, and • GPS."
]
} |
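The fusion approach described in the relative-localization passage above (integrating communicated on-board velocities, then correcting with a noisy signal-strength range) can be sketched as a minimal predict/correct loop. All gains, noise levels, and the radial-only correction rule below are illustrative assumptions, not the cited method:

```python
import numpy as np

def fuse_relative_position(p_est, v_rel, dt, range_meas, gain=0.3):
    """One predict/correct step of a toy relative-localization filter.

    p_est      : current 2-D relative position estimate
    v_rel      : relative velocity from communicated on-board states
    range_meas : noisy inter-robot distance (e.g. from signal strength)
    gain       : correction gain (illustrative, not tuned)
    """
    # Predict: integrate the communicated relative velocity.
    p_pred = p_est + v_rel * dt
    # Correct: pull the predicted vector's length toward the measured range.
    # Bearing is unobservable from a single range measurement, so only the
    # radial component is corrected here.
    r_pred = np.linalg.norm(p_pred)
    if r_pred > 1e-6:
        p_pred += gain * (range_meas - r_pred) * (p_pred / r_pred)
    return p_pred

# Toy run: the true relative position drifts with a known velocity, and the
# range measurement is corrupted with Gaussian noise.
rng = np.random.default_rng(0)
p_true = np.array([1.0, 0.5])
p_est = np.array([0.6, 0.9])          # deliberately wrong initial guess
v = np.array([0.05, -0.02])
for _ in range(200):
    p_true = p_true + v * 0.1
    z = np.linalg.norm(p_true) + rng.normal(0, 0.05)
    p_est = fuse_relative_position(p_est, v, 0.1, z)
# The radial error shrinks even though the bearing stays ambiguous.
print(abs(np.linalg.norm(p_est) - np.linalg.norm(p_true)))
```

In the cited work the states exchanged also include height and orientation; this sketch keeps only a planar position and a range to show the predict/correct structure.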
1609.09152 | 2524638710 | Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long- and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets. In this paper, we propose to fuse such methods with Markov Chains to make personalized sequential recommendations. We evaluate our method, Fossil, on a variety of large, real-world datasets. We show quantitatively that Fossil outperforms alternative algorithms, especially on sparse datasets, and qualitatively that it captures personalized dynamics and is able to make meaningful recommendations. | In order to tackle implicit feedback data where only positive signals (e.g. purchases, clicks, thumbs-up) are observed, both neighborhood- and model-based methods have been extended. Recently, Ning proposed SLIM to learn an item-item similarity matrix, which has been shown to outperform a series of state-of-the-art recommendation approaches @cite_10 . Kabbur further explored the low-rank property of the similarity matrix to handle sparse datasets @cite_2 . Since similarity (or neighborhood) relationships are learned from the data, these methods overcome the rigidity of using a predefined similarity metric.
On the other hand, MF has also been extended in several ways including point-wise methods that inherently assume non-observed feedback to be negative @cite_3 @cite_27 , and pair-wise methods like BPR-MF @cite_21 that are based on a more realistic assumption that positive feedback should only be 'more preferable' than non-observed feedback. | {
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_27",
"@cite_2",
"@cite_10"
],
"mid": [
"2950975304",
"2101409192",
"",
"2108920354",
"1987431925"
],
"abstract": [
"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive knearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.",
"A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.",
"",
"The effectiveness of existing top-N recommendation methods decreases as the sparsity of the datasets increases. To alleviate this problem, we present an item-based method for generating top-N recommendations that learns the item-item similarity matrix as the product of two low dimensional latent factor matrices. These matrices are learned using a structural equation modeling approach, wherein the value being estimated is not used for its own estimation. A comprehensive set of experiments on multiple datasets at three different sparsity levels indicate that the proposed methods can handle sparse datasets effectively and outperforms other state-of-the-art top-N recommendation methods. The experimental results also show that the relative performance gains compared to competing methods increase as the data gets sparser.",
"This paper focuses on developing effective and efficient algorithms for top-N recommender systems. A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase rating profiles. A sparse aggregation coefficient matrix W is learned from SLIM by solving an 1-norm and 2-norm regularized optimization problem. W is demonstrated to produce high quality recommendations and its sparsity allows SLIM to generate recommendations very fast. A comprehensive set of experiments is conducted by comparing the SLIM method and other state-of-the-art top-N recommendation methods. The experiments show that SLIM achieves significant improvements both in run time performance and recommendation quality over the best existing methods."
]
} |
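The pair-wise BPR assumption described in the related-work passage above (observed feedback ranked above non-observed feedback) can be sketched as a minimal SGD loop. The hyperparameters, factor sizes, and toy feedback data below are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def bpr_step(U, V, u, i, j, lr=0.05, reg=0.01):
    """One SGD step of the BPR-MF pairwise objective: raise the score of
    observed item i over non-observed item j for user u (minimal sketch)."""
    u_f = U[u].copy()                     # use pre-update factors for all grads
    x_uij = u_f @ (V[i] - V[j])           # score difference
    g = 1.0 / (1.0 + np.exp(x_uij))       # gradient of log sigmoid(x_uij)
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * u_f - reg * V[i])
    V[j] += lr * (-g * u_f - reg * V[j])

rng = np.random.default_rng(1)
n_users, n_items, k = 5, 10, 4
U = rng.normal(0, 0.1, (n_users, k))
V = rng.normal(0, 0.1, (n_items, k))
observed = {0: {1, 2}, 1: {3}}            # toy implicit feedback
for _ in range(2000):
    u = int(rng.integers(0, 2))
    i = int(rng.choice(list(observed[u])))
    j = int(rng.integers(0, n_items))
    while j in observed[u]:               # sample a non-observed item
        j = int(rng.integers(0, n_items))
    bpr_step(U, V, u, i, j)
# After training, user 0's observed items should outscore non-observed ones.
print(U[0] @ V[1])
```

Note that only the relative ordering of scores is optimized, matching the pairwise ranking criterion rather than a point-wise regression target.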
1609.09152 | 2524638710 | Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long- and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets. In this paper, we propose to fuse such methods with Markov Chains to make personalized sequential recommendations. We evaluate our method, Fossil, on a variety of large, real-world datasets. We show quantitatively that Fossil outperforms alternative algorithms, especially on sparse datasets, and qualitatively that it captures personalized dynamics and is able to make meaningful recommendations. | Temporal dynamics. Several works take temporal dynamics into account, mostly based on MF techniques @cite_14 . This includes seminal work proposed by Koren @cite_24 @cite_19 , where they showed state-of-the-art results on Netflix data by modeling the evolution of users and items over time. However, such works are ultimately building models to understand past actions (e.g. 'What did Tom like in 2008?', 'What does Grace like to do on Weekends?'), by making use of the explicit time stamps. The sequential prediction task differs from theirs in that it does not use time stamps directly, but rather models sequential relationships between actions. | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_14"
],
"mid": [
"",
"2054141820",
"1994389483"
],
"abstract": [
"",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"Recommender systems provide users with personalized suggestions for products or services. These systems often rely on Collaborating Filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The two more successful approaches to CF are latent factor models, which directly profile both users and products, and neighborhood models, which analyze similarities between products or users. In this work we introduce some innovations to both approaches. The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model. Further accuracy improvements are achieved by extending the models to exploit both explicit and implicit feedback by the users. The methods are tested on the Netflix data. Results are better than those previously published on that dataset. In addition, we suggest a new evaluation metric, which highlights the differences among methods, based on their performance at a top-K recommendation task."
]
} |
1609.09152 | 2524638710 | Predicting personalized sequential behavior is a key task for recommender systems. In order to predict user actions such as the next product to purchase, movie to watch, or place to visit, it is essential to take into account both long-term user preferences and sequential patterns (i.e., short-term dynamics). Matrix Factorization and Markov Chain methods have emerged as two separate but powerful paradigms for modeling the two respectively. Combining these ideas has led to unified methods that accommodate long- and short-term dynamics simultaneously by modeling pairwise user-item and item-item interactions. In spite of the success of such methods for tackling dense data, they are challenged by sparsity issues, which are prevalent in real-world datasets. In recent years, similarity-based methods have been proposed for (sequentially-unaware) item recommendation with promising results on sparse datasets. In this paper, we propose to fuse such methods with Markov Chains to make personalized sequential recommendations. We evaluate our method, Fossil, on a variety of large, real-world datasets. We show quantitatively that Fossil outperforms alternative algorithms, especially on sparse datasets, and qualitatively that it captures personalized dynamics and is able to make meaningful recommendations. | Sequential recommendation. Markov Chains have demonstrated their strength at modeling stochastic transitions, from uncovering sequential patterns (e.g. @cite_23 @cite_12 ) to directly modeling decision processes @cite_18 . For the sequential prediction recommendation task, Rendle proposed FPMC which combines the power of MF at modeling personal preferences and the strength of Markov Chains at modeling sequential patterns @cite_17 . 
Our work follows this thread but contributes in that (1) we make use of a similarity-based method for modeling user preferences so that sparsity issues are mitigated; and (2) we further consider Markov Chains with higher orders to model sequential smoothness across multiple time steps. | {
"cite_N": [
"@cite_18",
"@cite_12",
"@cite_23",
"@cite_17"
],
"mid": [
"2953132212",
"2117111450",
"2157973827",
"2171279286"
],
"abstract": [
"Typical Recommender systems adopt a static view of the recommendation process and treat it as a prediction problem. We argue that it is more appropriate to view the problem of generating recommendations as a sequential decision problem and, consequently, that Markov decision processes (MDP) provide a more appropriate model for Recommender systems. MDPs introduce two benefits: they take into account the long-term effects of each recommendation, and they take into account the expected value of each recommendation. To succeed in practice, an MDP-based Recommender system must employ a strong initial model; and the bulk of this paper is concerned with the generation of such a model. In particular, we suggest the use of an n-gram predictive model for generating the initial MDP. Our n-gram model induces a Markov-chain model of user behavior whose predictive accuracy is greater than that of existing predictive models. We describe our predictive model in detail and evaluate its performance on real data. In addition, we show how the model can be used in an MDP-based Recommender system.",
"We describe an efficient framework for Web personalization based on sequential and non-sequential pattern discovery from usage data. Our experimental results performed on real usage data indicate that more restrictive patterns, such as contiguous sequential patterns (e.g., frequent navigational paths) are more suitable for predictive tasks, such as Web prefetching, (which involve predicting which item is accessed next by a user), while less constrained patterns, such as frequent item sets or general sequential patterns are more effective alternatives in the context of Web personalization and recommender systems.",
"We treat collaborative filtering as a univariate time series problem: given a user's previous votes, predict the next vote. We describe two families of methods for transforming data to encode time order in ways amenable to off-the-shelf classification and density estimation tools. Using a decision-tree learning tool and two real-world data sets, we compare the results of these approaches to the results of collaborative filtering without ordering information. The improvements in both predictive accuracy and in recommendation quality that we realize advocate the use of predictive algorithms exploiting the temporal order of data.",
"Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization."
]
} |
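The FPMC-style combination described in the passage above (an MF term for long-term user preference plus a factorized Markov-chain term for item-to-item transitions) can be sketched with hand-set factors. The values below are placeholders standing in for learned parameters, chosen only to make the two terms' interaction visible:

```python
import numpy as np

# Long-term user taste (MF term): one user's latent factor and item factors.
P_u = np.array([1.0, 0.0])
Q = np.array([[0.5, 0.0],                 # item 0: mildly liked long-term
              [2.0, 0.0],                 # item 1: strongly liked long-term
              [0.0, 0.0]])                # item 2: neutral long-term

# Sequential term (MC term): "next-item" and "previous-item" factors.
A = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [0.0, 3.0]])                # item 2 strongly follows...
B = np.array([[0.0, 1.0],                 # ...item 0 (A[2] @ B[0] = 3.0)
              [0.0, 0.0],
              [0.0, 0.0]])

def fpmc_score(prev_item, item):
    """Score of `item` as the next action, given the previous item."""
    return P_u @ Q[item] + A[item] @ B[prev_item]

# Long-term preference alone would rank item 1 first, but after consuming
# item 0 the sequential term promotes item 2 to the top.
print(int(np.argmax([fpmc_score(0, i) for i in range(3)])))   # → 2
print(int(np.argmax([fpmc_score(1, i) for i in range(3)])))   # → 1
```

Fossil's higher-order extension would replace `B[prev_item]` with a weighted sum over the last few consumed items; the first-order case is kept here for brevity.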
1609.09028 | 2526848774 | Rumour stance classification, the task that determines if each tweet in a collection discussing a rumour is supporting, denying, questioning or simply commenting on the rumour, has been attracting substantial interest. Here we introduce a novel approach that makes use of the sequence of transitions observed in tree-structured conversation threads in Twitter. The conversation threads are formed by harvesting users' replies to one another, which results in a nested tree-like structure. Previous work addressing stance classification task in Twitter has treated each tweet as a separate unit. Here we analyse tweets by virtue of their position in a sequence and test two sequential classifiers, Linear-Chain CRF and Tree CRF, each of which makes different assumptions about the conversation structure. We experiment with eight Twitter datasets, collected during breaking news, and show that exploiting the sequential structure of Twitter conversations achieves significant improvements over the non-sequential methods. Our work is the first to model Twitter conversations as a tree structure in this manner, introducing a novel way of tackling NLP tasks on Twitter conversations. | Our work differs in three aspects. First, we aim to classify the stance of tweets towards rumours that emerge while breaking news unfolds; these rumours are unlikely to have been observed before, and hence rumours from previously observed events, which are likely to diverge, need to be leveraged for training. As far as we know, only have tackled stance classification in the context of breaking news applied to new rumours. used Gaussian Processes to perform 3-way stance classification into supporting, denying or questioning, while comments were not considered as part of the task.
did include comments to perform 4-way stance classification; they used Hawkes Processes to exploit the temporal sequence of stances towards rumours to classify new tweets discussing rumours. Work by @cite_16 has also performed stance classification for rumours around breaking news, but overlapping rumours were used for training and testing. | {
"cite_N": [
"@cite_16"
],
"mid": [
"1906632846"
],
"abstract": [
"Social media are frequently rife with rumours, and the study of rumour conversational aspects can provide valuable knowledge about how rumours evolve over time and are discussed by others who support or deny them. In this work, we present a new annotation scheme for capturing rumour-bearing conversational threads, as well as the crowdsourcing methodology used to create high quality, human annotated datasets of rumourous conversations from social media. The rumour annotation scheme is validated through comparison between crowdsourced and reference annotations. We also found that only a third of the tweets in rumourous conversations contribute towards determining the veracity of rumours, which reinforces the need for developing methods to extract the relevant pieces of information automatically."
]
} |
1609.09028 | 2526848774 | Rumour stance classification, the task that determines if each tweet in a collection discussing a rumour is supporting, denying, questioning or simply commenting on the rumour, has been attracting substantial interest. Here we introduce a novel approach that makes use of the sequence of transitions observed in tree-structured conversation threads in Twitter. The conversation threads are formed by harvesting users' replies to one another, which results in a nested tree-like structure. Previous work addressing stance classification task in Twitter has treated each tweet as a separate unit. Here we analyse tweets by virtue of their position in a sequence and test two sequential classifiers, Linear-Chain CRF and Tree CRF, each of which makes different assumptions about the conversation structure. We experiment with eight Twitter datasets, collected during breaking news, and show that exploiting the sequential structure of Twitter conversations achieves significant improvements over the non-sequential methods. Our work is the first to model Twitter conversations as a tree structure in this manner, introducing a novel way of tackling NLP tasks on Twitter conversations. | As far as we know, no work has leveraged the conversational structure of Twitter postings for stance classification, and hence its utility remains unexplored. A work that is related is that of , who exploited the temporal sequence of tweets, although the conversational structure was ignored and each tweet was treated as a separate unit. In other domains where debates or conversations are involved, the sequence of responses has been exploited to make the most of the evolving discourse and perform an improved classification of each individual post after learning the structure and dynamics of the conversation as a whole. 
For instance, Qu and Liu found Hidden Markov Models to be an effective approach to classify threads in on-line fora as successfully solving the question raised in the initial post or not. This was later further studied in a SemEval shared task, where each post in a forum thread had to also be classified as good, potential or bad @cite_14 . used a linear-chain CRF to identify high-quality comments in threads responding to blog posts. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2252217313"
],
"abstract": [
"Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A."
]
} |
1609.09028 | 2526848774 | Rumour stance classification, the task that determines if each tweet in a collection discussing a rumour is supporting, denying, questioning or simply commenting on the rumour, has been attracting substantial interest. Here we introduce a novel approach that makes use of the sequence of transitions observed in tree-structured conversation threads in Twitter. The conversation threads are formed by harvesting users' replies to one another, which results in a nested tree-like structure. Previous work addressing stance classification task in Twitter has treated each tweet as a separate unit. Here we analyse tweets by virtue of their position in a sequence and test two sequential classifiers, Linear-Chain CRF and Tree CRF, each of which makes different assumptions about the conversation structure. We experiment with eight Twitter datasets, collected during breaking news, and show that exploiting the sequential structure of Twitter conversations achieves significant improvements over the non-sequential methods. Our work is the first to model Twitter conversations as a tree structure in this manner, introducing a novel way of tackling NLP tasks on Twitter conversations. | In a task that is related to stance classification, researchers have also studied the identification of agreement and disagreement in on-line conversations. To classify agreement between question-answer (Q-A) message pairs in fora, used Naive Bayes as the classifier, and Rosenthal and McKeown used a logistic regression classifier. However, in both cases only pairs of messages were considered, and the entire sequence of responses in the tree was not used. CRF has also been used to detect agreement and disagreement between speakers in broadcast debates @cite_8 , which our task differs from in that it solely focuses on text. 
It is also worthwhile to emphasise that stance classification is different from agreement/disagreement detection, given that in stance classification one has to determine the orientation of a user towards a rumour. Instead, in agreement/disagreement detection, one has to determine if a pair of posts share the same view. In stance classification, one might agree with another user who is denying a rumour, and hence be denying the rumour as well, irrespective of the pairwise agreement. To the best of our knowledge, Twitter conversational thread structure has not been explored for the stance classification problem. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2161354826"
],
"abstract": [
"We present Conditional Random Fields based approaches for detecting agreement disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena. Various lexical, structural, durational, and prosodic features are explored. We compare the performance when using features extracted from automatically generated annotations against that when using human annotations. We investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data is highly imbalanced, we explore two sampling approaches, random downsampling and ensemble downsampling. Overall, our approach achieves 79.2 (precision), 50.5 (recall), 61.7 (F1) for agreement detection and 69.2 (precision), 46.9 (recall), and 55.9 (F1) for disagreement detection, on the English broadcast conversation data."
]
} |
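One way to see why the sequential structure helps, as argued in the stance-classification passages above, is a linear-chain decoding sketch: a reply that is ambiguous on its own can be disambiguated by its neighbours in the thread. The label scores and transition bonus below are hand-set for illustration and are not learned CRF parameters:

```python
import numpy as np

LABELS = ["support", "deny", "question", "comment"]

def viterbi(emission, transition):
    """Viterbi decoding over a linear chain.

    emission   : (T, L) per-tweet label scores
    transition : (L, L) label-to-label transition scores
    """
    T, L = emission.shape
    dp = np.zeros((T, L))
    back = np.zeros((T, L), dtype=int)
    dp[0] = emission[0]
    for t in range(1, T):
        cand = dp[t - 1][:, None] + transition      # (L, L) candidate scores
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0) + emission[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [LABELS[i] for i in reversed(path)]

# A thread of 3 tweets: the middle tweet is ambiguous on its own, but the
# transition scores (a same-label continuity bonus) disambiguate it.
emission = np.array([[0.1, 2.0, 0.1, 0.1],    # clearly "deny"
                     [0.9, 1.0, 0.9, 0.9],    # ambiguous
                     [0.1, 2.0, 0.1, 0.1]])   # clearly "deny"
transition = np.eye(4) * 1.5
print(viterbi(emission, transition))          # → ['deny', 'deny', 'deny']
```

A tree-structured CRF generalizes this by running the same max-sum recursion over the reply tree rather than a single chain.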
1609.08524 | 2527318842 | In this paper, we present UbuntuWorld 1.0 LTS - a platform for developing automated technical support agents in the Ubuntu operating system. Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu environment for a learning-based agent and demonstrate the usefulness of adopting reinforcement learning (RL) techniques for basic problem solving and troubleshooting in this environment. We provide a plug-and-play interface to the simulator as a python package where different types of agents can be plugged in and evaluated, and provide pathways for integrating data from online support forums like AskUbuntu into an automated agent's learning process. Finally, we show that the use of this data significantly improves the agent's learning efficiency. We believe that this platform can be adopted as a real-world test bed for research on automated technical support. | Bringing AI techniques - particularly reasoning and decision-making - to the problem of automated software agents has a rich history within the automated planning community. Of particular importance is the work on building softbots for UNIX @cite_9 @cite_0 , which is the most comprehensive previous study on this problem. Indeed, as we introduced earlier, many of the issues that are cataloged in that work remain of importance today. The current work builds upon the work of , particularly their focus on a goal-oriented approach to the problem @cite_7 ; however, it goes beyond that work (and related approaches @cite_5 ) in actually realizing an agent framework for the Ubuntu technical support domain. Succinctly, we seek to automate to the largest possible degree the promise of the softbot approach by: (1) exploiting the Bash shell as a robust simulator for learning agents to explore the world; and (2) using the large amounts of data generated by human experts on the internet. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_9",
"@cite_7"
],
"mid": [
"2161256997",
"2606561165",
"2155582674",
"54331064"
],
"abstract": [
"The Internet Softbot (software robot) is a fully-implemented AI agent developed at the University of Washington (Etzioni, Lesh, & Segal 1993). The softbot uses a UNIX shell and the World-Wide Web to interact with a wide range of internet resources. The softbot’s effectors include ftp, telnet, mail, and numerous file manipulation commands. Its sensors include internet facilities such as archie, gopher, netfind, and many more. The softbot is designed to incorporate new facilities into its repertoire as they become available. The softbot’s \"added value\" is three-fold. First, it provides an integrated and expressive interface to the internet. Second, the softbot dynamically chooses which facilities to invoke, and in what sequence. For example, the softbot might use netfind to determine David McAllester’s e-mail address. Since it knows that netfind requires a person’s institution as input, the softbot would first search bibliographic databases for a technical report by McAllester which would reveal his institution, and then feed that information to netfind. Third, the softbot fluidly backtracks from one facility to another based on information collected at run time. As a result, the softbot’s behavior changes in response to transient system conditions (e.g., the UUCP gateway is down). In this article, we focus on the ideas underlying the softbot-based interface.",
"A user’s interaction with a computer operating system is most commonly reflected in the use of “desktop” application programs. In this paper we investigate the prospect of building plans that link together the services provided by such applications using an inter-process communication language called DCOP (Desktop COmmunication Protocol). Such services can be used to manipulate or query desktop applications, often in a manner similar to the standard user interfaces of those applications, while offering the possibility of a practical interface that a planning agent can utilize in a real software setting. Using the knowledge-level conditional planner PKS, we show how we can construct plans for controlling a set of existing desktop applications in the open source K Desktop Environment (KDE), and illustrate our approach with a series of fully executable examples that include application control and information gathering, under conditions of incomplete information and sensing.",
"AI is moving away from \"toy tasks\" such as block stacking towards real-world problems. This trend is positive, but the amount of preliminary groundwork required to tackle a real-world task can be staggering, particularly when developing an integrated agent architecture. To address this problem, we advocate real-world software environments, such as operating systems or databases, as domains for agent research. The cost, effort, and expertise required to develop and experiment with software agents is relatively low. Furthermore, software environments circumvent many thorny, but peripheral, research issues that are inescapable in other environments. Thus, software environments enable us to test agents in a real world yet focus on core AI research issues. To support this claim, we describe our project to develop UNIX 1 softbots (software robots)--complete intelligent agents that interact with UNIX. Our fully-implemented softbot is able to accept a diverse set of high-level goals, generate and execute plans to achieve these goals in real time, and recover from errors when necessary.",
""
]
} |
1609.08524 | 2527318842 | In this paper, we present UbuntuWorld 1.0 LTS - a platform for developing automated technical support agents in the Ubuntu operating system. Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu environment for a learning-based agent and demonstrate the usefulness of adopting reinforcement learning (RL) techniques for basic problem solving and troubleshooting in this environment. We provide a plug-and-play interface to the simulator as a python package where different types of agents can be plugged in and evaluated, and provide pathways for integrating data from online support forums like AskUbuntu into an automated agent's learning process. Finally, we show that the use of this data significantly improves the agent's learning efficiency. We believe that this platform can be adopted as a real-world test bed for research on automated technical support. | On the learning side, prior work on using reinforcement learning (RL) to map natural language instructions to sequences of executable actions @cite_10 explores a similar problem setting in a Windows OS domain. However, that work focuses on the application of RL techniques to the language processing problem, and on mapping text instructions to executable actions. In contrast, our work focuses on learning task-oriented models for solving the e2eGCS problem. Thus the most relevant prior explorations into this area are complementary to our work in different ways; while the softbot work lays the groundwork for our architecture, the work of @cite_10 provides a report on using learning on a related but different problem. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2122223050"
],
"abstract": [
"In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains --- Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples."
]
} |
1609.08439 | 2526758805 | Robotic code needs to be verified to ensure its safety and functional correctness, especially when the robot is interacting with people. Testing the real code in simulation is a viable option. It reduces the costs of experiments and provides detail that is lost when using formal methods. However, generating tests that cover interesting scenarios, while executing most of the code, is a challenge amplified by the complexity of the interactions between the environment and the software. Model-based test generation methods can automate otherwise manual processes and facilitate reaching rare scenarios during testing. In this paper, we compare the use of Belief-Desire-Intention (BDI) agents as models for test generation, with more conventional, model-based test generation, that exploits automata and model checking techniques, and random test generation methods, in terms of practicality, performance, scalability, and exploration ( coverage'). Simulators and automated testbenches were implemented in Robot Operating System (ROS) and Gazebo, for testing the code of two robots, BERT2 in a cooperative manufacture (table assembly) task, and Tiago as a home care assistant. The results highlight the clear advantages of using BDI agents for test generation, compared to random and conventional automata-based approaches. BDI agents naturally emulate the agency present in Human-Robot Interaction (HRI). They are thus more expressive and scale well in HRI applications. | In our previous work, we presented a simulation-based method to test real, high-level robot control code in an effective and scalable manner @cite_7 @cite_19 . Automation of the testing process and a systematic exploration of the code under test within HRI scenarios was achieved through Coverage-Driven Verification (CDV), a method that guides the generation of tests, according to feedback from coverage metrics @cite_15 . 
In @cite_7 @cite_19 , we illustrated how a CDV testbench, comprising a test generator, driver, self-checker and coverage collector, can be integrated into a simulator running in the Robot Operating System (ROS) http: www.ros.org framework and Gazebo http: gazebosim.org . In this paper we focus on effective and efficient test generation. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_7"
],
"mid": [
"2301698066",
"",
"2197974018"
],
"abstract": [
"Industries such as flexible manufacturing and home care will be transformed by the presence of robotic assistants. Assurance of safety and functional soundness for these robotic systems will require rigorous verification and validation. We propose testing in simulation using Coverage-Driven Verification (CDV) to guide the testing process in an automatic and systematic way. We use a two-tiered test generation approach, where abstract test sequences are computed first and then concretized (e.g., data and variables are instantiated), to reduce the complexity of the test generation problem. To demonstrate the effectiveness of our approach, we developed a testbench for robotic code, running in ROS-Gazebo, that implements an object handover as part of a human-robot interaction (HRI) task. Tests are generated to stimulate the robot’s code in a realistic manner, through stimulating the human, environment, sensors, and actuators in simulation. We compare the merits of unconstrained, constrained and model-based test generation in achieving thorough exploration of the code under test, and interesting combinations of human-robot interactions. Our results show that CDV combined with systematic test generation achieves a very high degree of automation in simulation-based verification of control code for robots in HRI.",
"",
"Collaborative robots could transform several industries, such as manufacturing and healthcare, but they present a significant challenge to verification. The complex nature of their working environment neces- sitates testing in realistic detail under a broad range of circumstances. We propose the use of Coverage-Driven Verification (CDV) to meet this challenge. By automating the simulation-based testing process as far as possible, CDV provides an efficient route to coverage closure. We dis- cuss the need, practical considerations, and potential benefits of trans- ferring this approach from microelectronic design verification to the field of human-robot interaction. We demonstrate the validity and feasibility of the proposed approach by constructing a custom CDV testbench and applying it to the verification of an object handover task."
]
} |
1609.08439 | 2526758805 | Robotic code needs to be verified to ensure its safety and functional correctness, especially when the robot is interacting with people. Testing the real code in simulation is a viable option. It reduces the costs of experiments and provides detail that is lost when using formal methods. However, generating tests that cover interesting scenarios, while executing most of the code, is a challenge amplified by the complexity of the interactions between the environment and the software. Model-based test generation methods can automate otherwise manual processes and facilitate reaching rare scenarios during testing. In this paper, we compare the use of Belief-Desire-Intention (BDI) agents as models for test generation, with more conventional, model-based test generation, that exploits automata and model checking techniques, and random test generation methods, in terms of practicality, performance, scalability, and exploration ( coverage'). Simulators and automated testbenches were implemented in Robot Operating System (ROS) and Gazebo, for testing the code of two robots, BERT2 in a cooperative manufacture (table assembly) task, and Tiago as a home care assistant. The results highlight the clear advantages of using BDI agents for test generation, compared to random and conventional automata-based approaches. BDI agents naturally emulate the agency present in Human-Robot Interaction (HRI). They are thus more expressive and scale well in HRI applications. | In many robotics applications, test generation has been needed only for stimulating dedicated and simple components (equivalent to unit testing), such as choosing from a set of inputs for a controller @cite_22 , or generating a timing sequence for activating individual controllers @cite_1 . 
For these applications, random data generation or sampling @cite_10 might suffice to explore the state space or data ranges @cite_22 , along with alternatives such as constraint solving or optimization techniques @cite_1 . When testing a full robot system, however, the orchestration of different timed sequences in parallel (e.g. for emulated hardware components), coupled with several tasks of data instantiation (e.g. for sensor readings), is more complex. Sophisticated model-based approaches, such as those presented in this paper, offer a practical and viable solution for complex test generation problems @cite_23 @cite_13 @cite_9 . A model-based approach can be used in a hierarchical manner in order to coordinate lower-level random data generation and optimization with more complex, higher-level test generation tasks. A two layered approach is proposed in @cite_7 @cite_19 . | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_10",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_23",
"@cite_13"
],
"mid": [
"2082784420",
"2197974018",
"2075699551",
"2130422196",
"2145257696",
"2301698066",
"2052495090",
"2119296494"
],
"abstract": [
"The problem of testing complex reactive control systems and validating the effectiveness of multi-agent controllers is addressed. Testing and validation involve searching for conditions that lead to system failure by exploring all adversarial inputs and disturbances for errant trajectories. This problem of testing is related to motion planning. In both cases, there is a goal or specification set consisting of a set of points in state space that is of interest, either for finding a plan, demonstrating failure or for validation. Unlike motion planning problems, the problem of testing generally involves systems that are not controllable with respect to disturbances or adversarial inputs and therefore, the reachable set of states is a small subset of the entire state space. In this work, sampling-based algorithms based on the Rapidly-exploring Random Trees (RRT) algorithm are applied to the testing and validation problem. First, some of the factors that govern the exploration rate of the RRT algorithm are analysed, this analysis serving to motivate some enhancements. Then, three modifications to the original RRT algorithm are proposed, suited for use on uncontrollable systems. First, a new distance function is introduced which incorporates information about the system's dynamics to select nodes for extension. Second, a weighting is introduced to penalize nodes which are repeatedly selected but fail to extend.Third, a scheme for adaptively modifying the sampling probability distribution is proposed, based on tree growth. Application of the algorithm is demonstrated using several examples, and computational statistics are provided to illustrate the effect of each modification. The final algorithm is demonstrated on a 25 state example and results in nearly an order of magnitude reduction in computation time when compared with the traditional RRT. The proposed algorithms are also applicable to motion planning for systems that are not small time locally controllable.",
"Collaborative robots could transform several industries, such as manufacturing and healthcare, but they present a significant challenge to verification. The complex nature of their working environment neces- sitates testing in realistic detail under a broad range of circumstances. We propose the use of Coverage-Driven Verification (CDV) to meet this challenge. By automating the simulation-based testing process as far as possible, CDV provides an efficient route to coverage closure. We dis- cuss the need, practical considerations, and potential benefits of trans- ferring this approach from microelectronic design verification to the field of human-robot interaction. We demonstrate the validity and feasibility of the proposed approach by constructing a custom CDV testbench and applying it to the verification of an object handover task.",
"Combinatorial Testing (CT) can detect failures triggered by interactions of parameters in the Software Under Test (SUT) with a covering array test suite generated by some sampling mechanisms. It has been an active field of research in the last twenty years. This article aims to review previous work on CT, highlights the evolution of CT, and identifies important issues, methods, and applications of CT, with the goal of supporting and directing future practice and research in this area. First, we present the basic concepts and notations of CT. Second, we classify the research on CT into the following categories: modeling for CT, test suite generation, constraints, failure diagnosis, prioritization, metric, evaluation, testing procedure and the application of CT. For each of the categories, we survey the motivation, key issues, solutions, and the current state of research. Then, we review the contribution from different research groups, and present the growing trend of CT research. Finally, we recommend directions for future CT research, including: (1) modeling for CT, (2) improving the existing test suite generation algorithm, (3) improving analysis of testing result, (4) exploring the application of CT to different levels of testing and additional types of systems, (5) conducting more empirical studies to fully understand limitations and strengths of CT, and (6) combining CT with other testing techniques.",
"This paper describes a systematic review performed on model-based testing (MBT) approaches. A selection criterion was used to narrow the initially identified four hundred and six papers to focus on seventy-eight papers. Detailed analysis of these papers shows where MBT approaches have been applied, the characteristics, and the limitations. The comparison criteria includes representation models, support tools, test coverage criteria, the level of automation, intermediate models, and the complexity. This paper defines and explains the review methodology and presents some results.",
"Abstract Context Testing complex industrial robots (CIRs) requires testing several interacting control systems. This is challenging, especially for robots performing process-intensive tasks such as painting or gluing, since their dedicated process control systems can be loosely coupled with the robot’s motion control. Objective Current practices for validating CIRs involve manual test case design and execution. To reduce testing costs and improve quality assurance, a trend is to automate the generation of test cases. Our work aims to define a cost-effective automated testing technique to validate CIR control systems in an industrial context. Method This paper reports on a methodology, developed at ABB Robotics in collaboration with SIMULA, for the fully automated testing of CIRs control systems. Our approach draws on continuous integration principles and well-established constraint-based testing techniques. It is based on a novel constraint-based model for automatically generating test sequences where test sequences are both generated and executed as part of a continuous integration process. Results By performing a detailed analysis of experimental results over a simplified version of our constraint model, we determine the most appropriate parameterization of the operational version of the constraint model. This version is now being deployed at ABB Robotics’s CIR testing facilities and used on a permanent basis. This paper presents the empirical results obtained when automatically generating test sequences for CIRs at ABB Robotics. In a real industrial setting, the results show that our methodology is not only able to detect reintroduced known faults, but also to spot completely new faults. Conclusion Our empirical evaluation shows that constraint-based testing is appropriate for automatically generating test sequences for CIRs and can be faithfully deployed in an industrial context.",
"Industries such as flexible manufacturing and home care will be transformed by the presence of robotic assistants. Assurance of safety and functional soundness for these robotic systems will require rigorous verification and validation. We propose testing in simulation using Coverage-Driven Verification (CDV) to guide the testing process in an automatic and systematic way. We use a two-tiered test generation approach, where abstract test sequences are computed first and then concretized (e.g., data and variables are instantiated), to reduce the complexity of the test generation problem. To demonstrate the effectiveness of our approach, we developed a testbench for robotic code, running in ROS-Gazebo, that implements an object handover as part of a human-robot interaction (HRI) task. Tests are generated to stimulate the robot’s code in a realistic manner, through stimulating the human, environment, sensors, and actuators in simulation. We compare the merits of unconstrained, constrained and model-based test generation in achieving thorough exploration of the code under test, and interesting combinations of human-robot interactions. Our results show that CDV combined with systematic test generation achieves a very high degree of automation in simulation-based verification of control code for robots in HRI.",
"About a decade after the initial proposal to use model checkers for the generation of test cases we take a look at the results in this field of research. Model checkers are formal verification tools, capable of providing counterexamples to violated properties. Normally, these counterexamples are meant to guide an analyst when searching for the root cause of a property violation. They are, however, also very useful as test cases. Many different approaches have been presented, many problems have been solved, yet many issues remain. This survey paper reviews the state of the art in testing with model checkers. Copyright © 2008 John Wiley & Sons, Ltd.",
"Model-based testing (MBT) relies on models of a system under test and or its environment to derive test cases for the system. This paper discusses the process of MBT and defines a taxonomy that covers the key aspects of MBT approaches. It is intended to help with understanding the characteristics, similarities and differences of those approaches, and with classifying the approach used in a particular MBT tool. To illustrate the taxonomy, a description of how three different examples of MBT tools fit into the taxonomy is provided. Copyright © 2011 John Wiley & Sons, Ltd."
]
} |
1609.08439 | 2526758805 | Robotic code needs to be verified to ensure its safety and functional correctness, especially when the robot is interacting with people. Testing the real code in simulation is a viable option. It reduces the costs of experiments and provides detail that is lost when using formal methods. However, generating tests that cover interesting scenarios, while executing most of the code, is a challenge amplified by the complexity of the interactions between the environment and the software. Model-based test generation methods can automate otherwise manual processes and facilitate reaching rare scenarios during testing. In this paper, we compare the use of Belief-Desire-Intention (BDI) agents as models for test generation, with more conventional, model-based test generation, that exploits automata and model checking techniques, and random test generation methods, in terms of practicality, performance, scalability, and exploration ( coverage'). Simulators and automated testbenches were implemented in Robot Operating System (ROS) and Gazebo, for testing the code of two robots, BERT2 in a cooperative manufacture (table assembly) task, and Tiago as a home care assistant. The results highlight the clear advantages of using BDI agents for test generation, compared to random and conventional automata-based approaches. BDI agents naturally emulate the agency present in Human-Robot Interaction (HRI). They are thus more expressive and scale well in HRI applications. | Many languages and formalisms have been proposed for generic software model-based test generation @cite_28 , e.g. UML and process algebras for concurrency @cite_21 , or Lustre and MATLAB Simulink for data flow @cite_13 . Their suitability for the HRI domain, in terms of capturing realistic and uncertain environments with people, is yet to be determined @cite_14 . Also, deriving models automatically from generic code (e.g. Python and C++ in ROS), or from user requirements, remains a challenge. 
BDI agents @cite_0 have been used successfully to model decision making in autonomous robots @cite_27 . Because BDI agents naturally reflect agency, they are also ideal to model the agency present in the robot's environment (e.g. people). @cite_20 , we have shown how to use BDI agents for model-based test generation. | {
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_21",
"@cite_0",
"@cite_27",
"@cite_13",
"@cite_20"
],
"mid": [
"156561816",
"",
"1594775721",
"1497120980",
"",
"2119296494",
"2949647627"
],
"abstract": [
"The seminal works of Wilf and Nijenhuis in the late 70s have led to efficient algorithms for counting and generating uniformly at random a variety of combinatorial structures. In 1994, Flajolet, Zimmermann and Van Cutsem have widely generalised and systematised the approach. This paper presents several applications of these powerful results to software random testing, and random model exploration.",
"",
"The use of autonomous systems, including cooperating agents, is indispensable in certain fields of application. Nevertheless, the verification of autonomous systems still represents a challenge due to lack of suitable modelling languages and verification techniques. To address these difficulties, different modelling languages allowing concurrency are compared. Coloured Petri Nets (CPNs) are further analysed and illustrated by means of an example modelling autonomous systems. Finally, some existing structural coverage concepts for Petri Nets are presented and extended by further criteria tailored to the characteristics of CPNs.",
"Preface. 1 Introduction. 1.1 Autonomous Agents. 1.2 Characteristics of Agents. 1.3 Multi-Agent Systems. 1.4 Hello World! 2 The BDI Agent Model. 2.1 Agent-Oriented Programming. 2.2 Practical Reasoning. 2.3 A Computational Model of BDI Practical Reasoning. 2.4 The Procedural Reasoning System. 2.5 Agent Communication. 3 The Jason Agent Programming Language. 3.1 Beliefs. 3.2 Goals. 3.3 Plans. 3.4 Example: A Complete Agent Program. 3.5 Exercises. 4 Jason Interpreter. 4.1 The Reasoning Cycle. 4.2 Plan Failure. 4.3 Interpreter Configuration and Execution Modes. 4.4 Pre-Defined Plan Annotations. 4.5 Exercises. 5 Environments. 5.1 Support for Defining Simulated Environments. 5.2 Example: Running a System of Multiple Situated Agents. 5.3 Exercises. 6 Communication and Interaction. 6.1 Available Performatives. 6.2 Informal Semantics of Receiving Messages. 6.3 Example: Contract Net Protocol. 6.4 Exercises. 7 User-Defined Components. 7.1 Defining New Internal Actions. 7.2 Customising the Agent Class. 7.3 Customising the Overall Architecture. 7.4 Customising the Belief Base. 7.5 Pre-Processing Directives. 7.6 Exercises. 8 Advanced Goal-Based Programming. 8.1 BDI Programming. 8.2 Declarative (Achievement) Goal Patterns. 8.3 Commitment Strategy Patterns. 8.4 Other Useful Patterns. 8.5 Pre-Processing Directives for Plan Patterns. 9 Case Studies. 9.1 Case Study I: Gold Miners. 9.2 Case Study II: Electronic Bookstore. 10 Formal Semantics. 10.1 Semantic Rules. 10.2 Semantics of Message Exchange in a Multi-Agent System. 10.3 Semantic Rules for Receiving Messages. 10.4 Semantics of the BDI Modalities for AgentSpeak. 11 Conclusions. 11.1 Jason and Agent-Oriented Programming. 11.2 Ongoing Work and Related Research. 11.3 General Advice on Programming Style and Practice. A Reference Guide. A.1 EBNF for the Agent Language. A.2 EBNF for the Multi-Agent Systems Language. A.3 Standard Internal Actions. A.4 Pre-Defined Annotations. A.5 Pre-Processing Directives. A.6 Interpreter Configuration. 
Bibliography.",
"",
"Model-based testing (MBT) relies on models of a system under test and or its environment to derive test cases for the system. This paper discusses the process of MBT and defines a taxonomy that covers the key aspects of MBT approaches. It is intended to help with understanding the characteristics, similarities and differences of those approaches, and with classifying the approach used in a particular MBT tool. To illustrate the taxonomy, a description of how three different examples of MBT tools fit into the taxonomy is provided. Copyright © 2011 John Wiley & Sons, Ltd.",
"The challenges of robotic software testing extend beyond conventional software testing. Valid, realistic and interesting tests need to be generated for multiple programs and hardware running concurrently, deployed into dynamic environments with people. We investigate the use of BeliefDesire-Intention (BDI) agents as models for test generation, in the domain of human-robot interaction (HRI) in simulations. These models provide rational agency, causality, and a reasoning mechanism for planning, which emulate both intelligent and adaptive robots, as well as smart testing environments directed by humans. We introduce reinforcement learning (RL) to automate the exploration of the BDI models using a reward function based on coverage feedback. Our approach is evaluated using a collaborative manufacture example, where the robotic software under test is stimulated indirectly via a simulated human co-worker. We conclude that BDI agents provide intuitive models for test generation in the HRI domain. Our results demonstrate that RL can fully automate BDI model exploration, leading to very effective coverage-directed test generation."
]
} |
1609.08286 | 2949180264 | In the era of big data, it is becoming common to have data with multiple modalities or coming from multiple sources, known as "multi-view data". Multi-view data are usually unlabeled and come from high-dimensional spaces (such as language vocabularies), unsupervised multi-view feature selection is crucial to many applications. However, it is nontrivial due to the following challenges. First, there are too many instances or the feature dimensionality is too large. Thus, the data may not fit in memory. How to select useful features with limited memory space? Second, how to select features from streaming data and handles the concept drift? Third, how to leverage the consistent and complementary information from different views to improve the feature selection in the situation when the data are too big or come in as streams? To the best of our knowledge, none of the previous works can solve all the challenges simultaneously. In this paper, we propose an Online unsupervised Multi-View Feature Selection, OMVFS, which deals with large-scale streaming multi-view data in an online fashion. OMVFS embeds unsupervised feature selection into a clustering algorithm via NMF with sparse learning. It further incorporates the graph regularization to preserve the local structure information and help select discriminative features. Instead of storing all the historical data, OMVFS processes the multi-view data chunk by chunk and aggregates all the necessary information into several small matrices. By using the buffering technique, the proposed OMVFS can reduce the computational and storage cost while taking advantage of the structure information. Furthermore, OMVFS can capture the concept drifts in the data streams. Extensive experiments on four real-world datasets show the effectiveness and efficiency of the proposed OMVFS method. More importantly, OMVFS is about 100 times faster than the off-line methods. 
| Nonnegative matrix factorization @cite_8 , especially online NMF, is the third area that is related to our work. NMF has been successfully used in unsupervised learning @cite_31 @cite_23 . However, traditional NMF cannot deal with large-scale data. Different variations were proposed in the last few years. For example, @cite_0 proposed an online NMF algorithm for document clustering. @cite_22 proposed an efficient online NMF algorithm (OR-NMF) that takes one sample or a chunk of samples per step and updates the bases via robust stochastic approximation. All the online NMF methods either focus on clustering or dimension reduction. None of them are designed for feature selection. Furthermore, none of them can handle multi-view data. However, our proposed OMVFS directly embeds feature selection into the online joint NMF framework with graph regularization, which handles large streaming multi-view data. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_0",
"@cite_23",
"@cite_31"
],
"mid": [
"2121281940",
"2135029798",
"",
"2043545458",
"2108119513"
],
"abstract": [
"Nonnegative matrix factorization (NMF) has become a popular dimension-reduction method and has been widely applied to image processing and pattern recognition problems. However, conventional NMF learning methods require the entire dataset to reside in the memory and thus cannot be applied to large-scale or streaming datasets. In this paper, we propose an efficient online RSA-NMF algorithm (OR-NMF) that learns NMF in an incremental fashion and thus solves this problem. In particular, OR-NMF receives one sample or a chunk of samples per step and updates the bases via robust stochastic approximation. Benefitting from the smartly chosen learning rate and averaging technique, OR-NMF converges at the rate of in each update of the bases. Furthermore, we prove that OR-NMF almost surely converges to a local optimal solution by using the quasi-martingale. By using a buffering strategy, we keep both the time and space complexities of one step of the OR-NMF constant and make OR-NMF suitable for large-scale or streaming datasets. Preliminary experimental results on real-world datasets show that OR-NMF outperforms the existing online NMF (ONMF) algorithms in terms of efficiency. Experimental results of face recognition and image annotation on public datasets confirm the effectiveness of OR-NMF compared with the existing ONMF algorithms.",
"Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.",
"",
"Currently, most research on nonnegative matrix factorization (NMF) focuses on 2-factor @math factorization. We provide a systematic analysis of 3-factor @math NMF. While the unconstrained 3-factor NMF is equivalent to the unconstrained 2-factor NMF, the constrained 3-factor NMF brings new features to the constrained 2-factor NMF. We study the orthogonality constraint because it leads to rigorous clustering interpretation. We provide new rules for updating @math and prove the convergence of these algorithms. Experiments on 5 datasets and a real world case study are performed to show the capability of bi-orthogonal 3-factor NMF on simultaneously clustering rows and columns of the input data matrix. We provide a new approach of evaluating the quality of clustering on words using class aggregate distribution and multi-peak distribution. We also provide an overview of various NMF extensions and examine their relationships.",
"Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation, which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems."
]
} |
1609.07983 | 2764291333 | Mining of spatial data is an enabling technology for mobile services, Internet-connected cars and the Internet of Things. But the very distinctiveness of spatial data that drives utility can cost user privacy. Past work has focused upon points and trajectories for differentially private release. In this work, we continue the tradition of privacy-preserving spatial analytics, focusing not on point or path data, but on planar spatial regions. Such data represent the area of a user’s most frequent visitation—such as “around home and nearby shops”. Specifically we consider the differentially private release of data structures that support range queries for counting users’ spatial regions. Counting planar regions leads to unique challenges not faced in existing work. A user’s spatial region that straddles multiple data structure cells can lead to duplicate counting at query time. We provably avoid this pitfall by leveraging the Euler characteristic for the first time with differential privacy. To address the increased sensitivity of range queries to spatial region data, we calibrate privacy-preserving noise using bounded user region size and a constrained inference that uses robust least absolute deviations. Our novel constrained inference reduces noise and promotes covertness by (privately) imposing consistency. We provide a full end-to-end theoretical analysis of both differential privacy and high-probability utility for our approach using concentration bounds. A comprehensive experimental study on several real-world datasets establishes practical validity. | A series of effective privacy attacks on location data @cite_5 @cite_15 @cite_47 has launched a significant amount of activity around privacy-preserving techniques for spatial analytics @cite_18 @cite_47 @cite_21 . | {
"cite_N": [
"@cite_47",
"@cite_18",
"@cite_21",
"@cite_5",
"@cite_15"
],
"mid": [
"1996587544",
"126996204",
"2170166043",
"2115760869",
"2141854027"
],
"abstract": [
"Sharing of location data enables numerous exciting applications, such as location-based queries, location-based social recommendations, monitoring of traffic and air pollution levels, etc. Disclosing exact user locations raises serious privacy concerns, as locations may give away sensitive information about individuals' health status, alternative lifestyles, political and religious affiliations, etc. Preserving location privacy is an essential requirement towards the successful deployment of location-based applications. These lecture notes provide an overview of the state-of-the-art in location privacy protection. A diverse body of solutions is reviewed, including methods that use location generalization, cryptographic techniques or differential privacy. The most prominent results are discussed, and promising directions for future work are identified. Table of Contents: Introduction Privacy-Preserving Spatial Transformations Cryptographic Approaches Hybrid Approaches Private Matching of Spatial Datasets Trajectory Anonymization Differentially Private Publication of Spatial Datasets Conclusions",
"The ubiquity of mobile devices with global positioning functionality (e.g., GPS and Assisted GPS) and Internet connectivity (e.g., 3G and Wi-Fi) has resulted in widespread development of location-based services (LBS). Typical examples of LBS include local business search, e-marketing, social networking, and automotive traffic monitoring. Although LBS provide valuable services for mobile users, revealing their private locations to potentially untrusted LBS service providers pose privacy concerns. In general, there are two types of LBS, namely, snapshot and continuous LBS. For snapshot LBS, a mobile user only needs to report its current location to a service provider once to get its desired information. On the other hand, a mobile user has to report its location to a service provider in a periodic or on-demand manner to obtain its desired continuous LBS. Protecting user location privacy for continuous LBS is more challenging than snapshot LBS because adversaries may use the spatial and temporal correlations in the user's a sequence of location samples to infer the user's location information with a higher degree of certainty. Such user spatial trajectories are also very important for many applications, e.g., business analysis, city planning, and intelligent transportation. However, publishing original spatial trajectories to the public or a third party for data analysis could pose serious privacy concerns. Privacy protection in continuous LBS and trajectory data publication has increasingly drawn attention from the research community and industry. In this chapter, we describe the state-of-the-art privacy-preserving techniques for continuous LBS and trajectory publication.",
"This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of peoples' attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats.",
"Although some users might willingly subscribe to location-tracking services, few would be comfortable having their location known in all situations. The authors investigate disclosure-control algorithms that hide users' positions in sensitive areas and withhold path information that indicates which areas they have visited.",
"Although the privacy threats and countermeasures associated with location data are well known, there has not been a thorough experiment to assess the effectiveness of either. We examine location data gathered from volunteer subjects to quantify how well four different algorithms can identify the subjects' home locations and then their identities using a freely available, programmable Web search engine. Our procedure can identify at least a small fraction of the subjects and a larger fraction of their home addresses. We then apply three different obscuration countermeasures designed to foil the privacy attacks: spatial cloaking, noise, and rounding. We show how much obscuration is necessary to maintain the privacy of all the subjects."
]
} |
1609.07983 | 2764291333 | Mining of spatial data is an enabling technology for mobile services, Internet-connected cars and the Internet of Things. But the very distinctiveness of spatial data that drives utility can cost user privacy. Past work has focused upon points and trajectories for differentially private release. In this work, we continue the tradition of privacy-preserving spatial analytics, focusing not on point or path data, but on planar spatial regions. Such data represent the area of a user’s most frequent visitation—such as “around home and nearby shops”. Specifically we consider the differentially private release of data structures that support range queries for counting users’ spatial regions. Counting planar regions leads to unique challenges not faced in existing work. A user’s spatial region that straddles multiple data structure cells can lead to duplicate counting at query time. We provably avoid this pitfall by leveraging the Euler characteristic for the first time with differential privacy. To address the increased sensitivity of range queries to spatial region data, we calibrate privacy-preserving noise using bounded user region size and a constrained inference that uses robust least absolute deviations. Our novel constrained inference reduces noise and promotes covertness by (privately) imposing consistency. We provide a full end-to-end theoretical analysis of both differential privacy and high-probability utility for our approach using concentration bounds. A comprehensive experimental study on several real-world datasets establishes practical validity. | Aggregation under range queries has emerged as a fundamental primitive in spatial and spatio-temporal analytics @cite_14 @cite_7 @cite_20 @cite_3 @cite_53 . Originally motivated by statistical and computational efficiency, aggregation is now also used for qualitative privacy. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_53",
"@cite_3",
"@cite_20"
],
"mid": [
"1546365363",
"2112128729",
"2080393976",
"2117010345",
"2023578311"
],
"abstract": [
"Spatial databases store information about the position of individual objects in space. In many applications however, such as traffic supervision or mobile communications, only summarized data, like the number of cars in an area or phones serviced by a cell, is required. Although this information can be obtained from transactional spatial databases, its computation is expensive, rendering online processing inapplicable. Driven by the non-spatial paradigm, spatial data warehouses can be constructed to accelerate spatial OLAP operations. In this paper we consider the star-schema and we focus on the spatial dimensions. Unlike the non-spatial case, the groupings and the hierarchies can be numerous and unknown at design time, therefore the well-known materialization techniques are not directly applicable. In order to address this problem, we construct an ad-hoc grouping hierarchy based on the spatial index at the finest spatial granularity. We incorporate this hierarchy in the lattice model and present efficient methods to process arbitrary aggregations. We finally extend our technique to moving objects by employing incremental update methods.",
"Several spatio-temporal applications require the retrieval of summarized information about moving objects that lie in a query region during a query interval (e.g., the number of mobile users covered by a cell, traffic volume in a district, etc.). Existing solutions have the distinct counting problem: if an object remains in the query region for several timestamps during the query interval, it will be counted multiple times in the result. We solve this problem by integrating spatio-temporal indexes with sketches, traditionally used for approximate query processing. The proposed techniques can also be applied to reduce the space requirements of conventional spatio-temporal data and to mine spatio-temporal association rules.",
"In this paper we present a formal framework for modelling a trajectory data warehouse (TDW), namely a data warehouse aimed at storing aggregate information on trajectories of moving objects, which also offers visual OLAP operations for data analysis. The data warehouse model includes both temporal and spatial dimensions, and it is flexible and general enough to deal with objects that are either completely free or constrained in their movements (e.g., they move along a road network). In particular, the spatial dimension and the associated concept hierarchy reflect the structure of the environment in which the objects travel. Moreover, we cope with some issues related to the efficient computation of aggregate measures, as needed for implementing roll-up operations. The TDW and its visual interface allow one to investigate the behaviour of objects inside a given area as well as the movements of objects between areas in the same neighbourhood. A user can easily navigate the aggregate measures obtained from OLAP queries at different granularities, and get overall views in time and in space of the measures, as well as a focused view on specific measures, spatial areas, or temporal intervals. We discuss two application scenarios of our TDW, namely road traffic and vessel movement analysis, for which we built prototype systems. They mainly differ in the kind of information available for the moving objects under observation and their movement constraints.",
"In this paper we discuss how data warehousing technology can be used to store aggregate information about trajectories and perform OLAP operations over them. To this end, we define a data cube with spatial and temporal dimensions, discretized according to a regular grid. We investigate in depth some issues related to the computation of a holistic aggregate function, i.e, the presence, which returns the number of distinct trajectories occurring in a given spatio-temporal area. In particular, we introduce a novel way to compute an approximate, but nevertheless very accurate, presence aggregate function, which uses only a bounded amount of measures stored in the base cells of our cuboid. We also concentrate on the loading phase of our data warehouse, which has to deal with an unbounded stream of trajectory observations. We suggest how the complexity of this phase can be reduced, and we analyse the errors that this procedure induces at the level of the sub-aggregates stored in the base cells. These errors and the accuracy of our approximate aggregate functions are carefully evaluated by means of tests performed on synthetic trajectory datasets.",
"Spatiotemporal databases are becoming increasingly more common. Typically, applications modeling spatiotemporal objects need to process vast amounts of data. In such cases, generating aggregate information from the data set is more useful than individually analyzing every entry. In this paper, we study the most relevant techniques for the evaluation of aggregate queries on spatial, temporal, and spatiotemporal data. We also present a model that reduces the evaluation of aggregate queries to the problem of selecting qualifying tuples and the grouping of these tuples into collections on which an aggregate function is to be applied. This model gives us a framework that allows us to analyze and compare the different existing techniques for the evaluation of aggregate queries. At the same time, it allows us to identify opportunities for research on types of aggregate queries that have not been studied."
]
} |
1609.07983 | 2764291333 | Mining of spatial data is an enabling technology for mobile services, Internet-connected cars and the Internet of Things. But the very distinctiveness of spatial data that drives utility can cost user privacy. Past work has focused upon points and trajectories for differentially private release. In this work, we continue the tradition of privacy-preserving spatial analytics, focusing not on point or path data, but on planar spatial regions. Such data represent the area of a user’s most frequent visitation—such as “around home and nearby shops”. Specifically we consider the differentially private release of data structures that support range queries for counting users’ spatial regions. Counting planar regions leads to unique challenges not faced in existing work. A user’s spatial region that straddles multiple data structure cells can lead to duplicate counting at query time. We provably avoid this pitfall by leveraging the Euler characteristic for the first time with differential privacy. To address the increased sensitivity of range queries to spatial region data, we calibrate privacy-preserving noise using bounded user region size and a constrained inference that uses robust least absolute deviations. Our novel constrained inference reduces noise and promotes covertness by (privately) imposing consistency. We provide a full end-to-end theoretical analysis of both differential privacy and high-probability utility for our approach using concentration bounds. A comprehensive experimental study on several real-world datasets establishes practical validity. | A key challenge in aggregation is the distinct counting @cite_14 @cite_7 @cite_20 @cite_3 @cite_53 or multiple-counting problem @cite_36 @cite_46 . In contrast to point objects, a spatial body can span more than one cell in a partitioned space, inhibiting the ability of regular histograms to form accurate counts. 
Euler histograms @cite_45 are designed to address this problem for convex bodies @cite_36 @cite_46 , by appealing to Euler's formula from graph theory @cite_39 . A variation of the Euler histogram has been studied for trajectory data to address aggregate queries on moving objects @cite_19 . In that work, Euler histograms were used in a distributed setting (motivating a distributed Euler histogram) to tackle the duplicate (distinct) entry problem rather than duplicate (distinct) counting. The Euler-histogram tree @cite_23 is a tree-based data structure for counting vehicle trajectories; it builds on the approach first developed in @cite_52 to address the distinct counting problem while reducing storage requirements. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_36",
"@cite_53",
"@cite_52",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_45",
"@cite_23",
"@cite_46",
"@cite_20"
],
"mid": [
"1546365363",
"2112128729",
"1851595508",
"2080393976",
"1002055276",
"2117010345",
"571207148",
"2019972606",
"1520033739",
"1978945372",
"",
"2023578311"
],
"abstract": [
"Spatial databases store information about the position of individual objects in space. In many applications however, such as traffic supervision or mobile communications, only summarized data, like the number of cars in an area or phones serviced by a cell, is required. Although this information can be obtained from transactional spatial databases, its computation is expensive, rendering online processing inapplicable. Driven by the non-spatial paradigm, spatial data warehouses can be constructed to accelerate spatial OLAP operations. In this paper we consider the star-schema and we focus on the spatial dimensions. Unlike the non-spatial case, the groupings and the hierarchies can be numerous and unknown at design time, therefore the well-known materialization techniques are not directly applicable. In order to address this problem, we construct an ad-hoc grouping hierarchy based on the spatial index at the finest spatial granularity. We incorporate this hierarchy in the lattice model and present efficient methods to process arbitrary aggregations. We finally extend our technique to moving objects by employing incremental update methods.",
"Several spatio-temporal applications require the retrieval of summarized information about moving objects that lie in a query region during a query interval (e.g., the number of mobile users covered by a cell, traffic volume in a district, etc.). Existing solutions have the distinct counting problem: if an object remains in the query region for several timestamps during the query interval, it will be counted multiple times in the result. We solve this problem by integrating spatio-temporal indexes with sketches, traditionally used for approximate query processing. The proposed techniques can also be applied to reduce the space requirements of conventional spatio-temporal data and to mine spatio-temporal association rules.",
"Spatial join is an expensive operation that is commonly used in spatial database systems. In order to generate efficient query plans for the queries involving spatial join operations, it is crucial to obtain accurate selectivity estimates for these operations. In this paper we introduce a framework for estimating the selectivity of spatial joins constrained by geometric selections. The center piece of the framework is Euler Histogram, which decomposes the estimation process into estimations on vertices, edges and faces. Based on the characteristics of different datasets, different probabilistic models can be plugged into the framework to provide better estimation results. To demonstrate the effectiveness of this framework, we implement it by incorporating two existing probabilistic models, and compare the performance with the Geometric Histogram [1] and the algorithm recently proposed by Mamoulis and Papadias.",
"In this paper we present a formal framework for modelling a trajectory data warehouse (TDW), namely a data warehouse aimed at storing aggregate information on trajectories of moving objects, which also offers visual OLAP operations for data analysis. The data warehouse model includes both temporal and spatial dimensions, and it is flexible and general enough to deal with objects that are either completely free or constrained in their movements (e.g., they move along a road network). In particular, the spatial dimension and the associated concept hierarchy reflect the structure of the environment in which the objects travel. Moreover, we cope with some issues related to the efficient computation of aggregate measures, as needed for implementing roll-up operations. The TDW and its visual interface allow one to investigate the behaviour of objects inside a given area as well as the movements of objects between areas in the same neighbourhood. A user can easily navigate the aggregate measures obtained from OLAP queries at different granularities, and get overall views in time and in space of the measures, as well as a focused view on specific measures, spatial areas, or temporal intervals. We discuss two application scenarios of our TDW, namely road traffic and vessel movement analysis, for which we built prototype systems. They mainly differ in the kind of information available for the moving objects under observation and their movement constraints.",
"Due to the high uptake of location-based services (LBSs), large spatio-temporal datasets of moving objects' trajectories are being created every day. An important task in spatial data analytics is to service range queries by returning trajectory counts within a queried region. The question of how to keep an individual user's data private whilst enabling spatial data analytics by third parties has become an urgent research direction. Indeed, it is increasingly becoming a concern for users. To preserve privacy we discard individual trajectories and aggregate counts over a spatial and temporal partition. However the privacy gained comes at a cost to utility: trajectories passing through multiple cells and re-entering a query region, lead to inaccurate query responses. This is known as the distinct counting problem. We propose the Connection Aware Spatial Euler (CASE) histogram to address this long-standing problem. The CASE histogram maintains the connectivity of a moving object path, but does not require the ID of an object to distinguish multiple entries into an arbitrary query region. Our approach is to process trajectories offline into aggregate counts which are sent to third parties, rather than the original trajectories. We also explore modifications of our aggregate counting approach that preserve differential privacy. Theoretically and experimentally we demonstrate that our method provides a high level of accuracy compared to the best known methods for the distinct counting problem, whilst preserving privacy. We conduct our experiments on both synthetic and real datasets over two competitive Euler histogram-based methods presented in the literature. Our methods enjoy improvements to accuracy from 10% up to 70% depending on trip data and query region size, with the greatest increase seen on the Microsoft T-Drive real dataset, representing a more than tripling of accuracy.",
"In this paper we discuss how data warehousing technology can be used to store aggregate information about trajectories and perform OLAP operations over them. To this end, we define a data cube with spatial and temporal dimensions, discretized according to a regular grid. We investigate in depth some issues related to the computation of a holistic aggregate function, i.e, the presence, which returns the number of distinct trajectories occurring in a given spatio-temporal area. In particular, we introduce a novel way to compute an approximate, but nevertheless very accurate, presence aggregate function, which uses only a bounded amount of measures stored in the base cells of our cuboid. We also concentrate on the loading phase of our data warehouse, which has to deal with an unbounded stream of trajectory observations. We suggest how the complexity of this phase can be reduced, and we analyse the errors that this procedure induces at the level of the sub-aggregates stored in the base cells. These errors and the accuracy of our approximate aggregate functions are carefully evaluated by means of tests performed on synthetic trajectory datasets.",
"",
"For monitoring moving objects via wireless sensor networks, we introduce two aggregate query types: distinct entries to an area and the number of objects in that area. We present a new technique, Distributed Euler Histograms (DEHs), to store and query aggregated moving object data. Aggregate queries occur in a variety of applications ranging from wildlife monitoring to traffic management. We show that DEHs are significantly more efficient, in terms of communication and data storage costs, than techniques based on moving object identifiers and more accurate than techniques based on simple histograms.",
"We present a geometric counting problem that arises in browsing and solve it in constant time per query using nonexhaustive tables. On the other hand, we prove that several closely related problems require exhaustive tables, no matter how much time we allow per query.",
"This work addresses the problem of aggregation of trajectories data. Specifically, we propose a tree-based data structure for counting vehicle trajectories by mapping them into a set of spatial histograms with different granularities. We also present an approach for processing spatio-temporal range queries by aggregating the histograms in the query rectangles. The proposed methodology can be used for preserving the privacy of vehicle drivers by maintaining aggregated trajectory data. In addition, as we show, it can be used to handle the well-known distinct counting problem. Experimental results show that the new data structure achieves a high level of accuracy in query results and consistently outperforms the leading histogram-based approach.",
"",
"Spatiotemporal databases are becoming increasingly more common. Typically, applications modeling spatiotemporal objects need to process vast amounts of data. In such cases, generating aggregate information from the data set is more useful than individually analyzing every entry. In this paper, we study the most relevant techniques for the evaluation of aggregate queries on spatial, temporal, and spatiotemporal data. We also present a model that reduces the evaluation of aggregate queries to the problem of selecting qualifying tuples and the grouping of these tuples into collections on which an aggregate function is to be applied. This model gives us a framework that allows us to analyze and compare the different existing techniques for the evaluation of aggregate queries. At the same time, it allows us to identify opportunities for research on types of aggregate queries that have not been studied."
]
} |
1609.07983 | 2764291333 | Mining of spatial data is an enabling technology for mobile services, Internet-connected cars and the Internet of Things. But the very distinctiveness of spatial data that drives utility can cost user privacy. Past work has focused upon points and trajectories for differentially private release. In this work, we continue the tradition of privacy-preserving spatial analytics, focusing not on point or path data, but on planar spatial regions. Such data represent the area of a user’s most frequent visitation—such as “around home and nearby shops”. Specifically we consider the differentially private release of data structures that support range queries for counting users’ spatial regions. Counting planar regions leads to unique challenges not faced in existing work. A user’s spatial region that straddles multiple data structure cells can lead to duplicate counting at query time. We provably avoid this pitfall by leveraging the Euler characteristic for the first time with differential privacy. To address the increased sensitivity of range queries to spatial region data, we calibrate privacy-preserving noise using bounded user region size and a constrained inference that uses robust least absolute deviations. Our novel constrained inference reduces noise and promotes covertness by (privately) imposing consistency. We provide a full end-to-end theoretical analysis of both differential privacy and high-probability utility for our approach using concentration bounds. A comprehensive experimental study on several real-world datasets establishes practical validity. | A related line of work @cite_52 proposed the CASE histogram as a privacy-preserving approach to trajectory data analytics, in which only count data over a partitioned space is utilised and the Euler characteristic is applied to address duplicate counting. 
The authors in @cite_52 discuss the interactive setting for differentially-private Euler histogram release, which suffers from the prohibitive limitation that the number of answerable queries is linear in the number of bodies. Our work has no such limitation (see @cite_24 ). | {
"cite_N": [
"@cite_24",
"@cite_52"
],
"mid": [
"2109426455",
"1002055276"
],
"abstract": [
"Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"Due to the high uptake of location-based services (LBSs), large spatio-temporal datasets of moving objects' trajectories are being created every day. An important task in spatial data analytics is to service range queries by returning trajectory counts within a queried region. The question of how to keep an individual user's data private whilst enabling spatial data analytics by third parties has become an urgent research direction. Indeed, it is increasingly becoming a concern for users. To preserve privacy we discard individual trajectories and aggregate counts over a spatial and temporal partition. However the privacy gained comes at a cost to utility: trajectories passing through multiple cells and re-entering a query region, lead to inaccurate query responses. This is known as the distinct counting problem. We propose the Connection Aware Spatial Euler (CASE) histogram to address this long-standing problem. The CASE histogram maintains the connectivity of a moving object path, but does not require the ID of an object to distinguish multiple entries into an arbitrary query region. Our approach is to process trajectories offline into aggregate counts which are sent to third parties, rather than the original trajectories. We also explore modifications of our aggregate counting approach that preserve differential privacy. Theoretically and experimentally we demonstrate that our method provides a high level of accuracy compared to the best known methods for the distinct counting problem, whilst preserving privacy. We conduct our experiments on both synthetic and real datasets over two competitive Euler histogram-based methods presented in the literature. Our methods enjoy improvements to accuracy from 10 up to 70 depending on trip data and query region size, with the greatest increase seen on the Microsoft T-Drive real dataset, representing a more than tripling of accuracy."
]
} |
1609.07983 | 2764291333 | Mining of spatial data is an enabling technology for mobile services, Internet-connected cars and the Internet of Things. But the very distinctiveness of spatial data that drives utility can cost user privacy. Past work has focused upon points and trajectories for differentially private release. In this work, we continue the tradition of privacy-preserving spatial analytics, focusing not on point or path data, but on planar spatial regions. Such data represent the area of a user’s most frequent visitation—such as “around home and nearby shops”. Specifically we consider the differentially private release of data structures that support range queries for counting users’ spatial regions. Counting planar regions leads to unique challenges not faced in existing work. A user’s spatial region that straddles multiple data structure cells can lead to duplicate counting at query time. We provably avoid this pitfall by leveraging the Euler characteristic for the first time with differential privacy. To address the increased sensitivity of range queries to spatial region data, we calibrate privacy-preserving noise using bounded user region size and a constrained inference that uses robust least absolute deviations. Our novel constrained inference reduces noise and promotes covertness by (privately) imposing consistency. We provide a full end-to-end theoretical analysis of both differential privacy and high-probability utility for our approach using concentration bounds. A comprehensive experimental study on several real-world datasets establishes practical validity. | Differential privacy @cite_27 @cite_24 has now become a preferred approach to data sanitisation as it provides a strong semantic guarantee with minimal assumptions placed on the adversary's knowledge or capabilities. Differential privacy has been studied for location privacy @cite_47 . 
One existing approach is to obfuscate the user's location by perturbing their real geographic coordinates. The concept of geo-indistinguishability has been defined @cite_12 @cite_43 as a notion of differential privacy in location-based services. Due to its popularity, differential privacy has been applied to many algorithms and across many domains, such as specialized versions of spatial data indexing structures designed with differential privacy for the purpose of private record matching @cite_40 ; in spatial crowdsourcing to help volunteer workers' locations remain private @cite_4 ; in machine learning, releasing differentially-private learned models of SVM classifiers @cite_31 ; in geo-social networks for location recommendation @cite_42 ; and for modelling human mobility from real-world cellular network data @cite_9 . | {
"cite_N": [
"@cite_31",
"@cite_4",
"@cite_9",
"@cite_42",
"@cite_24",
"@cite_43",
"@cite_27",
"@cite_40",
"@cite_47",
"@cite_12"
],
"mid": [
"2096633407",
"2196824873",
"1978648695",
"2151955123",
"2109426455",
"1541376630",
"2517104773",
"2054514509",
"1996587544",
"2082894754"
],
"abstract": [
"The ubiquitous need for analyzing privacy-sensitive information—including health records, personal communications, product ratings and social network data—is driving significant interest in privacy-preserving data analysis across several research communities. This paper explores the release of Support Vector Machine (SVM) classifiers while preserving the privacy of training data. The SVM is a popular machine learning method that maps data to a high-dimensional feature space before learning a linear decision boundary. We present efficient mechanisms for finite-dimensional feature mappings and for (potentially infinite-dimensional) mappings with translation-invariant kernels. In the latter case, our mechanism borrows a technique from large-scale learning to learn in a finite-dimensional feature space whose inner-product uniformly approximates the desired feature space inner-product (the desired kernel) with high probability. Differential privacy is established using algorithmic stability, a property used in learning theory to bound generalization error. Utility—when the private classifier is pointwise close to the non-private classifier with high probability—is proven using smoothness of regularized empirical risk minimization with respect to small perturbations to the feature mapping. Finally we conclude with lower bounds on the differential privacy of any mechanism approximating the SVM.",
"Spatial Crowdsourcing (SC) is a transformative platform that engages individuals, groups and communities in the act of collecting, analyzing, and disseminating environmental, social and other spatio-temporal information. The objective of SC is to outsource a set of spatio-temporal tasks to a set of workers, i.e., individuals with mobile devices that perform the tasks by physically traveling to specified locations of interest. However, current solutions require the workers, who in many cases are simply volunteering for a cause, to disclose their locations to untrustworthy entities. In this paper, we introduce a framework for protecting location privacy of workers participating in SC tasks. We argue that existing location privacy techniques are not sufficient for SC, and we propose a mechanism based on differential privacy and geocasting that achieves effective SC services while offering privacy guarantees to workers. We investigate analytical models and task assignment strategies that balance multiple crucial aspects of SC functionality, such as task completion rate, worker travel distance and system overhead. Extensive experimental results on real-world datasets show that the proposed technique protects workers' location privacy without incurring significant performance metrics penalties.",
"Models of human mobility have broad applicability in urban planning, ecology, epidemiology, and other fields. Starting with Call Detail Records (CDRs) from a cellular telephone network that have gone through a straightforward anonymization procedure, the prior WHERE modeling approach produces synthetic CDRs for a synthetic population. The accuracy of WHERE has been validated against billions of location samples for hundreds of thousands of cell phones in the New York and Los Angeles metropolitan areas. In this paper, we introduce DP-WHERE, which modifies WHERE by adding controlled noise to achieve differential privacy, a strict definition of privacy that makes no assumptions about the power or background knowledge of a potential adversary. We also present experiments showing that the accuracy of DP-WHERE remains close to that of WHERE and of real CDRs. With this work, we aim to enable the creation and possible release of synthetic models that capture the mobility patterns of real metropolitan populations while preserving privacy.",
"Location-tagged social media have an increasingly important role in shaping behavior of individuals. With the help of location recommendations, users are able to learn about events, products or places of interest that are relevant to their preferences. User locations and movement patterns are available from geosocial networks such as Foursquare, mass transit logs or traffic monitoring systems. However, disclosing movement data raises serious privacy concerns, as the history of visited locations can reveal sensitive details about an individual's health status, alternative lifestyle, etc. In this paper, we investigate mechanisms to sanitize location data used in recommendations with the help of differential privacy. We also identify the main factors that must be taken into account to improve accuracy. Extensive experimental results on real-world datasets show that a careful choice of differential privacy technique leads to satisfactory location recommendation results.",
"Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"With the wide adoption of handheld devices (e.g., smartphones, tablets), a large number of location-based services (also called LBSs) have flourished providing mobile users with real-time and contextual information on the move. Accounting for the amount of location information they are given by users, these services are able to track users wherever they go and to learn sensitive information about them (e.g., their points of interest including home, work, religious or political places regularly visited). A number of solutions have been proposed in the past few years to protect users location information while still allowing them to enjoy geo-located services. Among the most robust solutions are those that apply the popular notion of differential privacy to location privacy (e.g., Geo-Indistinguishability), promising strong theoretical privacy guarantees with a bounded accuracy loss. While these theoretical guarantees are attracting, it might be difficult for end users or practitioners to assess their effectiveness in the wild. In this paper, we carry on a practical study using real mobility traces coming from two different datasets, to assess the ability of Geo-Indistinguishability to protect users' points of interest (POIs). We show that a curious LBS collecting obfuscated location information sent by mobile users is still able to infer most of the users POIs with a reasonable both geographic and semantic precision. This precision depends on the degree of obfuscation applied by Geo-Indistinguishability. Nevertheless, the latter also has an impact on the overhead incurred on mobile devices resulting in a privacy versus overhead trade-off. Finally, we show in our study that POIs constitute a quasi-identifier for mobile users and that obfuscating them using Geo-Indistinguishability is not sufficient as an attacker is able to re-identify at least 63 of them despite a high degree of obfuscation.",
"We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.",
"Private matching between datasets owned by distinct parties is a challenging problem with several applications. Private matching allows two parties to identify the records that are close to each other according to some distance functions, such that no additional information other than the join result is disclosed to any party. Private matching can be solved securely and accurately using secure multi-party computation (SMC) techniques, but such an approach is prohibitively expensive in practice. Previous work proposed the release of sanitized versions of the sensitive datasets which allows blocking, i.e., filtering out sub-sets of records that cannot be part of the join result. This way, SMC is applied only to a small fraction of record pairs, reducing the matching cost to acceptable levels. The blocking step is essential for the privacy, accuracy and efficiency of matching. However, the state-of-the-art focuses on sanitization based on k-anonymity, which does not provide sufficient privacy. We propose an alternative design centered on differential privacy, a novel paradigm that provides strong privacy guarantees. The realization of the new model presents difficult challenges, such as the evaluation of distance-based matching conditions with the help of only a statistical queries interface. Specialized versions of data indexing structures (e.g., kd-trees) also need to be devised, in order to comply with differential privacy. Experiments conducted on the real-world Census-income dataset show that, although our methods provide strong privacy, their effectiveness in reducing matching cost is not far from that of k-anonymity based counterparts.",
"Sharing of location data enables numerous exciting applications, such as location-based queries, location-based social recommendations, monitoring of traffic and air pollution levels, etc. Disclosing exact user locations raises serious privacy concerns, as locations may give away sensitive information about individuals' health status, alternative lifestyles, political and religious affiliations, etc. Preserving location privacy is an essential requirement towards the successful deployment of location-based applications. These lecture notes provide an overview of the state-of-the-art in location privacy protection. A diverse body of solutions is reviewed, including methods that use location generalization, cryptographic techniques or differential privacy. The most prominent results are discussed, and promising directions for future work are identified. Table of Contents: Introduction Privacy-Preserving Spatial Transformations Cryptographic Approaches Hybrid Approaches Private Matching of Spatial Datasets Trajectory Anonymization Differentially Private Publication of Spatial Datasets Conclusions",
"The growing popularity of location-based systems, allowing unknown untrusted servers to easily collect huge amounts of information regarding users' location, has recently started raising serious privacy concerns. In this paper we introduce geoind, a formal notion of privacy for location-based systems that protects the user's exact location, while allowing approximate information -- typically needed to obtain a certain desired service -- to be released. This privacy definition formalizes the intuitive notion of protecting the user's location within a radius @math with a level of privacy that depends on r, and corresponds to a generalized version of the well-known concept of differential privacy. Furthermore, we present a mechanism for achieving geoind by adding controlled random noise to the user's location. We describe how to use our mechanism to enhance LBS applications with geo-indistinguishability guarantees without compromising the quality of the application results. Finally, we compare state-of-the-art mechanisms from the literature with ours. It turns out that, among all mechanisms independent of the prior, our mechanism offers the best privacy guarantees."
]
} |
1609.07983 | 2764291333 | Mining of spatial data is an enabling technology for mobile services, Internet-connected cars and the Internet of Things. But the very distinctiveness of spatial data that drives utility can cost user privacy. Past work has focused upon points and trajectories for differentially private release. In this work, we continue the tradition of privacy-preserving spatial analytics, focusing not on point or path data, but on planar spatial regions. Such data represent the area of a user’s most frequent visitation—such as “around home and nearby shops”. Specifically we consider the differentially private release of data structures that support range queries for counting users’ spatial regions. Counting planar regions leads to unique challenges not faced in existing work. A user’s spatial region that straddles multiple data structure cells can lead to duplicate counting at query time. We provably avoid this pitfall by leveraging the Euler characteristic for the first time with differential privacy. To address the increased sensitivity of range queries to spatial region data, we calibrate privacy-preserving noise using bounded user region size and a constrained inference that uses robust least absolute deviations. Our novel constrained inference reduces noise and promotes covertness by (privately) imposing consistency. We provide a full end-to-end theoretical analysis of both differential privacy and high-probability utility for our approach using concentration bounds. A comprehensive experimental study on several real-world datasets establishes practical validity. | Within the scope of aggregation, studies in the area of point privacy have also proposed sanitization algorithms for generating differentially-private histograms and releasing aggregate statistics. Many studies have explored differential privacy of point sets @cite_25 @cite_40 @cite_50 @cite_8 @cite_48 @cite_34 @cite_2 @cite_44 @cite_35 . 
These works study regular-grid partitioning data structures and hierarchical structures. The present work is the first to address the problem of differentially-private counting of planar bodies. | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_48",
"@cite_44",
"@cite_40",
"@cite_50",
"@cite_2",
"@cite_34",
"@cite_25"
],
"mid": [
"2764863051",
"2015569667",
"83413860",
"2024021429",
"2054514509",
"2122290076",
"2245706161",
"2212164227",
"2071511328"
],
"abstract": [
"",
"With the wide deployment of smart card automated fare collection (SCAFC) systems, public transit agencies have been benefiting from huge volume of transit data, a kind of sequential data, collected every day. Yet, improper publishing and use of transit data could jeopardize passengers' privacy. In this paper, we present our solution to transit data publication under the rigorous differential privacy model for the Societe de transport de Montreal (STM). We propose an efficient data-dependent yet differentially private transit data sanitization approach based on a hybrid-granularity prefix tree structure. Moreover, as a post-processing step, we make use of the inherent consistency constraints of a prefix tree to conduct constrained inferences, which lead to better utility. Our solution not only applies to general sequential data, but also can be seamlessly extended to trajectory data. To our best knowledge, this is the first paper to introduce a practical solution for publishing large volume of sequential data under differential privacy. We examine data utility in terms of two popular data analysis tasks conducted at the STM, namely count queries and frequent sequential pattern mining. Extensive experiments on real-life STM datasets confirm that our approach maintains high utility and is scalable to large datasets.",
"Differential privacy has emerged as one of the most promising privacy models for releasing the results of statistical queries on sensitive data, with strong privacy guarantees. Existing works on differential privacy mostly focus on simple aggregations such as counts. This paper investigates the spatial OLAP queries, which combines GIS and OLAP queries at the same time. We employ a differentially private R-tree(DiffR-Tree) to help spatial OLAP queries. In our method, several steps need to be carefully designed to equip the spatial data warehouse structure with differential privacy requirements. Our experiments results demonstrate the efficiency of our spatial OLAP query index structure and the accuracy of answering queries.",
"In this paper, we tackle the problem of constructing a differentially private synopsis for two-dimensional datasets such as geospatial datasets. The current state-of-the-art methods work by performing recursive binary partitioning of the data domains, and constructing a hierarchy of partitions. We show that the key challenge in partition-based synopsis methods lies in choosing the right partition granularity to balance the noise error and the non-uniformity error. We study the uniform-grid approach, which applies an equi-width grid of a certain size over the data domain and then issues independent count queries on the grid cells. This method has received no attention in the literature, probably due to the fact that no good method for choosing a grid size was known. Based on an analysis of the two kinds of errors, we propose a method for choosing the grid size. Experimental results validate our method, and show that this approach performs as well as, and often times better than, the state-of-the-art methods. We further introduce a novel adaptive-grid method. The adaptive grid method lays a coarse-grained grid over the dataset, and then further partitions each cell according to its noisy count. Both levels of partitions are then used in answering queries over the dataset. This method exploits the need to have finer granularity partitioning over dense regions and, at the same time, coarse partitioning over sparse regions. Through extensive experiments on real-world datasets, we show that this approach consistently and significantly outperforms the uniform-grid method and other state-of-the-art methods.",
"Private matching between datasets owned by distinct parties is a challenging problem with several applications. Private matching allows two parties to identify the records that are close to each other according to some distance functions, such that no additional information other than the join result is disclosed to any party. Private matching can be solved securely and accurately using secure multi-party computation (SMC) techniques, but such an approach is prohibitively expensive in practice. Previous work proposed the release of sanitized versions of the sensitive datasets which allows blocking, i.e., filtering out sub-sets of records that cannot be part of the join result. This way, SMC is applied only to a small fraction of record pairs, reducing the matching cost to acceptable levels. The blocking step is essential for the privacy, accuracy and efficiency of matching. However, the state-of-the-art focuses on sanitization based on k-anonymity, which does not provide sufficient privacy. We propose an alternative design centered on differential privacy, a novel paradigm that provides strong privacy guarantees. The realization of the new model presents difficult challenges, such as the evaluation of distance-based matching conditions with the help of only a statistical queries interface. Specialized versions of data indexing structures (e.g., kd-trees) also need to be devised, in order to comply with differential privacy. Experiments conducted on the real-world Census-income dataset show that, although our methods provide strong privacy, their effectiveness in reducing matching cost is not far from that of k-anonymity based counterparts.",
"Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well-understood how to release data based on counts and simple functions under this guarantee, it remains to provide general purpose techniques to release data that is useful for a variety of queries. In this paper, we focus on spatial data such as locations and more generally any multi-dimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of private spatial decompositions'': these adapt standard spatial indexing methods such as quad trees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions, and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and post-processing the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and use them to answer a variety of queries privately with high accuracy.",
"GPS-enabled devices are now ubiquitous, from airplanes and cars to smartphones and wearable technology. This has resulted in a wealth of data about the movements of individuals and populations, which can be analyzed for useful information to aid in city and traffic planning, disaster preparedness and so on. However, the places that people go can disclose extremely sensitive information about them, and thus their use needs to be filtered through privacy preserving mechanisms. This turns out to be a highly challenging task: raw trajectories are highly detailed, and typically no pair is alike. Previous attempts fail either to provide adequate privacy protection, or to remain sufficiently faithful to the original behavior. This paper presents DPT, a system to synthesize mobility data based on raw GPS trajectories of individuals while ensuring strong privacy protection in the form of e-differential privacy. DPT makes a number of novel modeling and algorithmic contributions including (i) discretization of raw trajectories using hierarchical reference systems (at multiple resolutions) to capture individual movements at differing speeds, (ii) adaptive mechanisms to select a small set of reference systems and construct prefix tree counts privately, and (iii) use of direction-weighted sampling for improved utility. While there have been prior attempts to solve the subproblems required to generate synthetic trajectories, to the best of our knowledge, ours is the first system that provides an end-to-end solution. We show the efficacy of our synthetic trajectory generation system using an extensive empirical evaluation.",
"Sharing real-time traffic data can be of great value to understanding many important phenomena, such as congestion patterns or popular places. To this end, private user data must be aggregated and shared continuously over time with data privacy guarantee. However, releasing time series data with standard differential privacy mechanism can lead to high perturbation error due to the correlation between time stamps. In addition, data sparsity in the spatial domain imposes another challenge to user privacy as well as utility. To address the challenges, we propose a real-time framework that guarantees differential privacy for individual users and releases accurate data for research purposes. We present two estimation algorithms designed to utilize domain knowledge in order to mitigate the effect of perturbation error. Evaluations with simulated traffic data show our solutions outperform existing methods in both utility and computation efficiency, enabling real-time data sharing with strong privacy guarantee.",
"Differential privacy has emerged as one of the most promising privacy models for private data release. It can be used to release different types of data, and, in particular, histograms, which provide useful summaries of a dataset. Several differentially private histogram releasing schemes have been proposed recently. However, most of them directly add noise to the histogram counts, resulting in undesirable accuracy. In this paper, we propose two sanitization techniques that exploit the inherent redundancy of real-life datasets in order to boost the accuracy of histograms. They lossily compress the data and sanitize the compressed data. Our first scheme is an optimization of the Fourier Perturbation Algorithm (FPA) presented in RN10 . It improves the accuracy of the initial FPA by a factor of 10. The other scheme relies on clustering and exploits the redundancy between bins. Our extensive experimental evaluation over various real-life and synthetic datasets demonstrates that our techniques preserve very accurate distributions and considerably improve the accuracy of range queries over attributed histograms."
]
} |
1609.08239 | 2952598594 | Social bots can affect online communication among humans. We study this phenomenon by focusing on #YaMeCanse, the most active protest hashtag in the history of Twitter in Mexico. Accounts using the hashtag are classified using the BotOrNot bot detection tool. Our preliminary analysis suggests that bots played a critical role in disrupting online communication about the protest movement. | Twitter bots have been alleged to influence the political discourse surrounding Brexit in the United Kingdom @cite_3 . To evaluate the role of bots, opinions were clustered based on hashtag use. It was found that a very small fraction of the most active accounts was responsible for a large fraction of pro-Brexit content: fewer than 2,000 accounts in a collection of 300,000 users (less than 1%). Network decomposition techniques were used to conclude that the success of social protests depends in part on activating a critical periphery @cite_1 . These peripheral participants may be as essential to the communication of the protest message as the most connected and active members. The influence of bots, hindering communication and blocking potential adhesion of new members to a community, could lead to a halt in the movement's growth. | {
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2183515712",
"2466146218"
],
"abstract": [
"Social media have provided instrumental means of communication in many recent political protests. The efficiency of online networks in disseminating timely information has been praised by many commentators; at the same time, users are often derided as “slacktivists” because of the shallow commitment involved in clicking a forwarding button. Here we consider the role of these peripheral online participants, the immense majority of users who surround the small epicenter of protests, representing layers of diminishing online activity around the committed minority. We analyze three datasets tracking protest communication in different languages and political contexts through the social media platform Twitter and employ a network decomposition technique to examine their hierarchical structure. We provide consistent evidence that peripheral participants are critical in increasing the reach of protest messages and generating online content at levels that are comparable to core participants. Although committed minorities may constitute the heart of protest movements, our results suggest that their success in maximizing the number of online citizens exposed to protest messages depends, at least in part, on activating the critical periphery. Peripheral users are less active on a per capita basis, but their power lies in their numbers: their aggregate contribution to the spread of protest messages is comparable in magnitude to that of core participants. An analysis of two other datasets unrelated to mass protests strengthens our interpretation that core-periphery dynamics are characteristically important in the context of collective action events. Theoretical models of diffusion in social networks would benefit from increased attention to the role of peripheral nodes in the propagation of information and behavior.",
"Bots are social media accounts that automate interaction with other users, and they are active on the StrongerIn-Brexit conversation happening over Twitter. These automated scripts generate content through these platforms and then interact with people. Political bots are automated accounts that are particularly active on public policy issues, elections, and political crises. In this preliminary study on the use of political bots during the UK referendum on EU membership, we analyze the tweeting patterns for both human users and bots. We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages."
]
} |
1609.08239 | 2952598594 | Social bots can affect online communication among humans. We study this phenomenon by focusing on #YaMeCanse, the most active protest hashtag in the history of Twitter in Mexico. Accounts using the hashtag are classified using the BotOrNot bot detection tool. Our preliminary analysis suggests that bots played a critical role in disrupting online communication about the protest movement. | In relation to Mexico, the use of Twitter as a vital communication channel was investigated in the context of urban warfare related to the ongoing war on drugs @cite_7 . Users would tweet the location of conflicts as they erupted, so that their followers could then avoid these violent zones. The authors emphasized that warfare is also a conflict over the control of information, and provided a longitudinal survey of the adoption of Twitter to create safe networks of information. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2135298385"
],
"abstract": [
"In this paper we examine the information sharing practices of people living in cities amid armed conflict. We describe the volume and frequency of microblogging activity on Twitter from four cities afflicted by the Mexican Drug War, showing how citizens use social media to alert one another and to comment on the violence that plagues their communities. We then investigate the emergence of civic media \"curators,\" individuals who act as \"war correspondents\" by aggregating and disseminating information to large numbers of people on social media. We conclude by outlining the implications of our observations for the design of civic media systems in wartime."
]
} |
1609.08239 | 2952598594 | Social bots can affect online communication among humans. We study this phenomenon by focusing on #YaMeCanse, the most active protest hashtag in the history of Twitter in Mexico. Accounts using the hashtag are classified using the BotOrNot bot detection tool. Our preliminary analysis suggests that bots played a critical role in disrupting online communication about the protest movement. | The appearance of fake Twitter accounts among the followers of political figures is common, with estimates in the 20-29% range. The SentiBot tool employed a combination of graph-theoretic, syntactic, and semantic features to discern between humans, cyborgs, and bots @cite_10 . 19 out of the top 25 variables that determine if an account is a bot were found to be related to sentiment. Another classifier exploits natural language processing for social bot detection @cite_9 . Such techniques were recently used to investigate the presence and effect of social bots promoting vaporizers and e-cigarettes @cite_8 . These are examples of tools that would have to be retrained in order to establish the same results in languages other than English. Porting them to Spanish would be necessary for application to a corpus of tweets such as the one analyzed in this paper. One of our present contributions is the observation that in considering the different classification scores produced by BotOrNot, we can use the language-independent features to flag potential bot accounts in Spanish. This technique could potentially be used in other languages as well. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_8"
],
"mid": [
"2963208026",
"2072715695",
"2962985272"
],
"abstract": [
"Abstract Twitter, a popular social media outlet, has evolved into a vast source of linguistic data, rich with opinion, sentiment, and discussion. Due to the increasing popularity of Twitter, its perceived potential for exerting social influence has led to the rise of a diverse community of automatons, commonly referred to as bots. These inorganic and semi-organic Twitter entities can range from the benevolent (e.g., weather-update bots, help-wanted-alert bots) to the malevolent (e.g., spamming messages, advertisements, or radical opinions). Existing detection algorithms typically leverage metadata (time between tweets, number of followers, etc.) to identify robotic accounts. Here, we present a powerful classification scheme that exclusively uses the natural language text from organic users to provide a criterion for identifying accounts posting automated messages. Since the classifier operates on text alone, it is flexible and may be applied to any textual data beyond the Twittersphere.",
"Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.",
"Background Twitter has become the “wild-west” of marketing and promotional strategies for advertisement agencies. Electronic cigarettes have been heavily marketed across Twitter feeds, offering discounts, “kid-friendly” flavors, algorithmically generated false testimonials, and free samples. Methods All electronic cigarette keyword related tweets from a 10 sample of Twitter spanning January 2012 through December 2014 (approximately 850,000 total tweets) were identified and categorized as Automated or Organic by combining a keyword classification and a machine trained Human Detection algorithm. A sentiment analysis using Hedonometrics was performed on Organic tweets to quantify the change in consumer sentiments over time. Commercialized tweets were topically categorized with key phrasal pattern matching. Results The overwhelming majority (80 ) of tweets were classified as automated or promotional in nature. The majority of these tweets were coded as commercialized (83.65 in 2013), up to 33 of which offered discounts or free samples and appeared on over a billion twitter feeds as impressions. The positivity of Organic (human) classified tweets has decreased over time (5.84 in 2013 to 5.77 in 2014) due to a relative increase in the negative words ‘ban’, ‘tobacco’, ‘doesn’t’, ‘drug’, ‘against’, ‘poison’, ‘tax’ and a relative decrease in the positive words like ‘haha’, ‘good’, ‘cool’. Automated tweets are more positive than organic (6.17 versus 5.84) due to a relative increase in the marketing words like ‘best’, ‘win’, ‘buy’, ‘sale’, ‘health’, ‘discount’ and a relative decrease in negative words like ‘bad’, ‘hate’, ‘stupid’, ‘don’t’. Conclusions Due to the youth presence on Twitter and the clinical uncertainty of the long term health complications of electronic cigarette consumption, the protection of public health warrants scrutiny and potential regulation of social media marketing."
]
} |
1609.08059 | 2527273497 | The outcomes of this paper are twofold. Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class PTIME of languages computable in polynomial time in terms of differential equations with polynomial right-hand side. This result gives a purely continuous elegant and simple characterization of PTIME. We believe it is the first time complexity classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis. Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like PTIME, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations. Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog computers or analog models of computation: As a side effect, we get that the 1941 General Purpose Analog Computer (GPAC) of Claude Shannon is provably equivalent to Turing machines both in terms of computability and complexity, a fact that has never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both in terms of computability and complexity. | Attempts to derive a complexity theory for continuous-time systems include @cite_20 . However, the theory developed there is not intended to cover generic dynamical systems but only specific systems that are related to Lyapunov theory for dynamical systems. The global minimizers of particular energy functions are supposed to give solutions of the problem. 
The structure of such energy functions leads to the introduction of problem classes @math and @math , with the existence of complete problems for these classes. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2041768209"
],
"abstract": [
"Recent years have seen an increasing interest in the study of continuous-time computational models. However, not so much has been done with respect to setting up a complexity theoretic framework for such models. The present paper intends to go a step into this direction. We consider problems over the real numbers which we try to relate to Lyapunov theory for dynamical systems: The global minimizers of particular energy functions are supposed to give solutions of the problem. The structure of such energy functions leads to the introduction of problem classes U and NU; for the systems we are considering they parallel the classical complexity classes P and NP. We then introduce a notion of reducibility among problems and show existence of complete problems for NU and for PU, a polynomial hierarchy of continuous-time problems. For previous work on the computational capabilities of continuous-time systems see the surveys by Cris Moore [9] and by Pekka Orponen [10]. Our paper presents a step into the direction of creating a general framework for a complexity theory of continuous-time systems as outlined in [10]. It is closely related to work done by A. Ben-Hur, H. Siegelmann, and S. Fishman [12, 11]."
]
} |
1609.08059 | 2527273497 | The outcomes of this paper are twofold. Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class PTIME of languages computable in polynomial time in terms of differential equations with polynomial right-hand side. This result gives a purely continuous elegant and simple characterization of PTIME. We believe it is the first time complexity classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis. Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like PTIME, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations. Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog computers or analog models of computation: As a side effect, we get that the 1941 General Purpose Analog Computer (GPAC) of Claude Shannon is provably equivalent to Turing machines both in terms of computability and complexity, a fact that has never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both in terms of computability and complexity. | Another attempt is @cite_24 , which also focused on a very specific type of systems: dissipative flow models. The proposed theory is nice but non-generic. This theory has been used in several papers from the same authors to study a particular class of flow dynamics @cite_2 for solving linear programming problems. | {
"cite_N": [
"@cite_24",
"@cite_2"
],
"mid": [
"2057137378",
"1567604129"
],
"abstract": [
"We present a model of computation with ordinary differential equations (ODEs) which converge to attractors that are interpreted as the output of a computation. We introduce a measure of complexity for exponentially convergent ODEs, enabling an algorithmic analysis of continuous time flows and their comparison with discrete algorithms. We define polynomial and logarithmic continuous time complexity classes and show that an ODE which solves the maximum network flow problem has polynomial time complexity. We also analyze a simple flow that solves the Maximum problem in logarithmic time. We conjecture that a subclass of the continuous P is equivalent to the classical P.",
"In this paper we address the complexity of solving linear programming problems with a set of differential equations that converge to a fixed point that represents the optimal solution. Assuming a probabilistic model, where the inputs are i.i.d. Gaussian variables, we compute the distribution of the convergence rate to the attracting fixed point. Using the framework of Random Matrix Theory, we derive a simple expression for this distribution in the asymptotic limit of large problem size. In this limit, we find the surprising result that the distribution of the convergence rate is a scaling function of a single variable. This scaling variable combines the convergence rate with the problem size (i.e., the number of variables and the number of constraints). We also estimate numerically the distribution of the computation time to an approximate solution, which is the time required to reach a vicinity of the attracting fixed point. We find that it is also a scaling function. Using the problem size dependence of the distribution functions, we derive high probability bounds on the convergence rates and on the computation times to the approximate solution."
]
} |
1609.08059 | 2527273497 | The outcomes of this paper are twofold. Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class PTIME of languages computable in polynomial time in terms of differential equations with polynomial right-hand side. This result gives a purely continuous elegant and simple characterization of PTIME. We believe it is the first time complexity classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis. Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like PTIME, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations. Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog computers or analog models of computation: As a side effect, we get that the 1941 General Purpose Analog Computer (GPAC) of Claude Shannon is provably equivalent to Turing machines both in terms of computability and complexity, a fact that has never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both in terms of computability and complexity. | Relating computational complexity problems (such as the @math vs @math question) to problems of analysis has already been the motivation of other papers. In particular, Félix Costa and Jerzy Mycka have a series of work (see e.g. @cite_21 ) relating the @math vs @math question to questions in the context of real and complex analysis. 
Their approach is very different: they do so at the price of introducing a whole hierarchy of functions and operators over functions. In particular, they may apply an operator which solves ordinary differential equations multiple times before defining an element of @math and @math (the counterparts of @math and @math introduced in their paper), while in our case we do not need multiple applications of this kind of operator: we only need a single application of such an operator (i.e. we only need to solve one ordinary differential equation with polynomial right-hand side). | {
"cite_N": [
"@cite_21"
],
"mid": [
"1972178849"
],
"abstract": [
"In this paper, we aim at an analog characterization of the classical P ≠ NP conjecture of Structural Complexity. We consider functions over continuous real and complex valued variables. Subclasses of functions can be defined using Laplace transforms adapted to continuous-time computation, introducing analog classes DAnalog and NAnalog. We then show that if DAnalog ≠ NAnalog then P ≠ NP."
]
} |
1609.08059 | 2527273497 | The outcomes of this paper are twofold. Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class PTIME of languages computable in polynomial time in terms of differential equations with polynomial right-hand side. This result gives a purely continuous elegant and simple characterization of PTIME. We believe it is the first time complexity classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis. Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like PTIME, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations. Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog computers or analog models of computation: As a side effect, we get that the 1941 General Purpose Analog Computer (GPAC) of Claude Shannon is provably equivalent to Turing machines both in terms of computability and complexity, a fact that has never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both in terms of computability and complexity. | It is true that one can sometimes convert the multiple use of operators solving ordinary differential equations into a single application @cite_27 , but this happens only in very specific cases, which do not seem to include the classes @math and @math . In particular, the application of nested continuous recursion (i.e.
nested use of solving ordinary differential equations) may be needed in their constructions, whereas we define @math using only a simple notion of acceptance and only one system of ordinary differential equations. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2014107656"
],
"abstract": [
"In this paper we show that Shannon's general purpose analog computer (GPAC) is equivalent to a particular class of recursive functions over the reals with the flavour of Kleene's classical recursive function theory.We first consider the GPAC and several of its extensions to show that all these models have drawbacks and we introduce an alternative continuous-time model of computation that solves these problems. We also show that this new model preserves all the significant relations involving the previous models (namely, the equivalence with the differentially algebraic functions).We then continue with the topic of recursive functions over the reals, and we show full connections between functions generated by the model introduced so far and a particular class of recursive functions over the reals."
]
} |
1609.08059 | 2527273497 | The outcomes of this paper are twofold. Implicit complexity. We provide an implicit characterization of polynomial time computation in terms of ordinary differential equations: we characterize the class PTIME of languages computable in polynomial time in terms of differential equations with polynomial right-hand side. This result gives a purely continuous elegant and simple characterization of PTIME. We believe it is the first time complexity classes are characterized using only ordinary differential equations. Our characterization extends to functions computable in polynomial time over the reals in the sense of computable analysis. Our results may provide a new perspective on classical complexity, by giving a way to define complexity classes, like PTIME, in a very simple way, without any reference to a notion of (discrete) machine. This may also provide ways to state classical questions about computational complexity via ordinary differential equations. Continuous-Time Models of Computation. Our results can also be interpreted in terms of analog computers or analog models of computation: As a side effect, we get that the 1941 General Purpose Analog Computer (GPAC) of Claude Shannon is provably equivalent to Turing machines both in terms of computability and complexity, a fact that has never been established before. This result provides arguments in favour of a generalised form of the Church-Turing Hypothesis, which states that any physically realistic (macroscopic) computer is equivalent to Turing machines both in terms of computability and complexity. | We also mention that Friedman and Ko (see @cite_16 ) proved that polynomial time computable functions are closed under maximization and integration if and only if some open problems of computational complexity (like @math for the maximization case) hold. 
The complexity of solving Lipschitz continuous ordinary differential equations has been proved to be polynomial-space complete by Kawamura @cite_17 . | {
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"1545819195",
"2102100313"
],
"abstract": [
"Starting with Cook's pioneering work on NP-completeness in 1970, polynomial complexity theory, the study of polynomial-time computability, has quickly emerged as the new foundation of algorithms. On the one hand, it bridges the gap between the abstract approach of recursive function theory and the concrete approach of analysis of algorithms. It extends the notions and tools of the theory of computability to provide a solid theoretical foundation for the study of computational complexity of practical problems. In addition, the theoretical studies of the notion of polynomial-time tractability sometimes also yield interesting new practical algorithms. A typical example is the application of the ellipsoid algorithm to combinatorial optimization problems (see, for example, Lovasz [1986]). On the other hand, it has a strong influence on many different branches of mathematics, including combinatorial optimization, graph theory, number theory and cryptography. As a consequence, many researchers have begun to re-examine various branches of classical mathematics from the complexity point of view. For a given nonconstructive existence theorem in classical mathematics, one would like to find a constructive proof which admits a polynomial-time algorithm for the solution. One of the examples is the recent work on algorithmic theory of permutation groups. In the area of numerical computation, there are also two traditionally independent approaches: recursive analysis and numerical analysis.",
"In answer to Ko's question raised in 1983, we show that an initial value problem given by a polynomial-time computable, Lipschitz continuous function can have a polynomial-space complete solution. The key insight is simple: the Lipschitz condition means that the feedback in the differential equation is weak. We define a class of polynomial-space computation tableaux with equally restricted feedback, and show that they are still polynomial-space complete. The same technique also settles Ko's two later questions on Volterra integral equations."
]
} |
1609.07972 | 2527416940 | Algebraic characterizations of the computational aspects of functions defined over the real numbers provide very effective tool to understand what computability and complexity over the reals, and generally over continuous spaces, mean. This is relevant for both communities of computer scientists and mathematical analysts, particularly the latter who do not understand (and or like) the language of machines and string encodings. Recursive analysis can be considered the most standard framework of computation over continuous spaces, it is however defined in a very machine specific way which does not leave much to intuitiveness. Recently several characterizations, in the form of function algebras, of recursively computable functions and some sub-recursive classes were introduced. These characterizations shed light on the hidden behavior of recursive analysis as they convert complex computational operations on sequences of real objects to "simple" intuitive mathematical operations such as integration or taking limits. The authors previously presented a framework for obtaining algebraic characterizations at the complexity level over compact domains. The current paper presents a comprehensive extension to that framework. Though we focus our attention in this paper on functions defined over the whole real line, the framework, and accordingly the obtained results, can be easily extended to functions defined over arbitrary domains. | We prove our results by relating the notion of polytime (polynomial time) computable functions over the reals (arbitrary domains) to the corresponding notion over the integers. This gives a direct way to lift algebraic characterizations of integer computability and complexity to algebraic characterizations of the corresponding analog notions. Our setting is actually proved to be robust to approximations. 
One does not need to be able to compute exactly the computability and/or complexity class over the integers, but only some defined approximation of it in order to be able to compute the corresponding class over the reals. This can be seen as a way to reformulate/reprove/reread very nicely some constructions already used in @cite_7 @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"1882278886",
"2108904240"
],
"abstract": [
"Recently, using a limit schema, we presented an analog and machine independent algebraic characterization of elementary functions over the real numbers in the sense of recursive analysis. In a different and orthogonal work, we proposed a minimalization schema that allows to provide a class of real recursive functions that corresponds to extensions of computable functions over the integers. Mixing the two approaches we prove that computable functions over the real numbers in the sense of recursive analysis can be characterized as the smallest class of functions that contains some basic functions, and closed by composition, linear integration, minimalization and limit schema.",
"In the last decade, there have been several attempts to understand the relations between the many models of analog computation. Unfortunately, most models are not equivalent. Euler's Gamma function, which is computable according to computable analysis, but that cannot be generated by Shannon's General Purpose Analog Computer (GPAC), has often been used to argue that the GPAC is less powerful than digital computation. However, when computability with GPACs is not restricted to real-time generation of functions, it has been shown recently that Gamma becomes computable by a GPAC. Here we extend this result by showing that, in an appropriate framework, the GPAC and computable analysis are actually equivalent from the computability point of view, at least in compact intervals. Since GPACs are equivalent to systems of polynomial differential equations then we show that all real computable functions over compact intervals can be defined by such models."
]
} |
1609.07666 | 2952939673 | The concept of well-rounded lattices has recently found important applications in the setting of a fading single-input single-output (SISO) wiretap channel. It has been shown that, under this setup, the property of being well-rounded is critical for minimizing the eavesdropper's probability of correct decoding in lower SNR regimes. The superior performance of coset codes constructed from well-rounded lattices has been illustrated in several simulations. In the present article, this work is extended to fading multiple-input multiple-output (MIMO) wiretap channels, and similar design criteria as in the SISO case are derived. Further, explicit coset codes for Rayleigh fading MIMO wiretap channels are designed. In particular, it is shown through extensive simulations that sublattices of the well-known Alamouti code and Golden code which meet our design criteria perform better than scalar multiples of the code lattice for the same parameters. | The previous ECDP-based design criteria are commonly based on relatively coarse approximations, resulting in the so-called in the (SISO) case @cite_22 , also studied in @cite_19 , or the @cite_11 in the MIMO setting. See @cite_16 for related work in the reliability setting. In @cite_6 so-called @math -th coding gains are defined which were used in @cite_17 to derive a simple geometric criterion for the design of coset codes. | {
"cite_N": [
"@cite_22",
"@cite_17",
"@cite_6",
"@cite_19",
"@cite_16",
"@cite_11"
],
"mid": [
"2098538375",
"2273955308",
"2016084247",
"2284062147",
"2069530276",
"2060644465"
],
"abstract": [
"Coding for the Gaussian Wiretap Channel can be done with nested lattices. The fine lattice must be designed in the same way as lattices used on the regular Gaussian Channel while the coarse one must be a lattice whose theta series is minimized. We present a criterion of design of both the fine and coarse lattice when used on a Rayleigh fading wiretap channel",
"In a distributed communications scenario, a relay applying the compute-and-forward strategy for the real-valued channel model aims to decode an integer linear combination of transmitted messages, a task of very complex nature for which general efficient algorithms for dimension n > 1 have not yet been developed. Nonetheless, the maximum-likelihood decoding metric related to solving for the desired lattice point exhibits interesting properties which lead to partial design criteria for lattice codes in compute-and-forward. This article generalizes maximum-likelihood decoding at the relay to allow for arbitrary real lattice codes at the transmitters, and studies the behavior of the resulting decoding metric using an approximation of the theta series of a lattice, which is itself derived in this article. For the first time, the resulting random sums of lattices over whose points the relay needs to perform a sum are analyzed and furthermore, previous related work is extended to the case of K > 2 transmitters. The specific cases K = 2 and K = 3 are studied empirically.",
"Recently, several papers have been concentrating on reducing the decoding complexity of high-rate space-time codes. While the research has led to some impressive reductions in decoding complexity, the geometric methods so far used appear to have faced some fundamental limits. In this paper, we study what happens if we let go of the assumption of full diversity and study the possibility of reducing the complexity, while holding on to a high code rate, by reducing the length of codes. We will develop some tools that can be used to measure the changes we will encounter when reducing the code length. We will also study the achievable diversity-multiplexing gain trade-off (DMT) of codes with less than minimum delay (LMD) and discuss some code constructions.",
"In this paper, new probability estimates are derived for ideal lattice codes from totally real number fields using ideal class Dedekind zeta functions. In contrast to previous work on the subject, it is not assumed that the ideal in question is principal. In particular, it is shown that the corresponding inverse norm sum depends not only on the regulator and discriminant of the number field, but also on the values of the ideal class Dedekind zeta functions. Along the way, we derive an estimate of the number of elements in a given ideal with a certain algebraic norm within a finite hypercube. We provide several examples which measure the accuracy and predictive ability of our theorems.",
"This work considers inverse determinant sums, which arise from the union bound on the error probability, as a tool for designing and analyzing algebraic space-time block codes. A general framework to study these sums is established, and the connection between asymptotic growth of inverse determinant sums and the diversity-multiplexing gain tradeoff is investigated. It is proven that the growth of the inverse determinant sum of a division algebra-based space-time code is completely determined by the growth of the unit group. This reduces the inverse determinant sum analysis to studying certain asymptotic integrals in Lie groups. Using recent methods from ergodic theory, a complete classification of the inverse determinant sums of the most well-known algebraic space-time codes is provided. The approach reveals an interesting and tight relation between diversity-multiplexing gain tradeoff and point counting in Lie groups.",
"We consider MIMO (Multiple Input Multiple Output) wiretap channels, where a legitimate transmitter Alice is communicating with a legitimate receiver Bob in the presence of an eavesdropper Eve, and communication is done via MIMO channels. We suppose that Alice's strategy is to use an infinite lattice codebook, which then allows her to perform coset encoding. We analyze Eve's probability of correctly decoding the message Alice meant to Bob, and from minimizing this probability, we derive a code design criterion for MIMO lattice wiretap codes. The case of block fading channels is treated similarly, and fast fading channels are derived as a particular case. The Alamouti code is carefully studied as an illustration of the analysis provided."
]
} |
1609.07769 | 2949349317 | In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in the new rain image models and a novel deep learning architecture. We first modify an existing model comprising a rain streak layer and a background layer, by adding a binary map that locates rain streak regions. Second, we create a new model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the first model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and outputs better representation for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our novel models and architecture, outperforming the state-of-the-art methods significantly. Our codes and data sets will be publicly available. | Compared with the video based deraining problem, the single image based problem is more ill-posed, due to the lack of temporal information. Some single-image based rain removal methods regard the problem as a layer separation problem. 
Huang @cite_35 attempted to separate rain streaks from the high-frequency layer by sparse coding, using a dictionary learned from HOG features. However, the capacity of morphological component analysis, the layer separation, and the learned dictionary is limited, so this approach usually over-smooths the background. In @cite_28 , a generalized low-rank model is proposed, where the rain streak layer is assumed to be low rank. In @cite_30 , Kim first detected rain streaks and then removed them with a nonlocal mean filter. In @cite_24 , Luo proposed a discriminative sparse coding method to separate rain streaks from background images. A recent work @cite_2 exploits Gaussian mixture models to separate the rain streaks, achieving state-of-the-art performance, though still with a slightly over-smoothed background. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_28",
"@cite_24",
"@cite_2"
],
"mid": [
"",
"2121396509",
"2154621477",
"2209874411",
"2466666260"
],
"abstract": [
"",
"Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a “rain component” and a “nonrain component” by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.",
"In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image video (and also other high-order image structure) in a unified way. Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with state of the art.",
"Visual distortions on images caused by bad weather conditions can have a negative impact on the performance of many outdoor vision systems. One often seen bad weather is rain which causes significant yet complex local intensity fluctuations in images. The paper aims at developing an effective algorithm to remove visual effects of rain from a single rainy image, i.e. separate the rain layer and the de-rained image layer from an rainy image. Built upon a non-linear generative model of rainy image, namely screen blend mode, we proposed a dictionary learning based algorithm for single image de-raining. The basic idea is to sparsely approximate the patches of two layers by very high discriminative codes over a learned dictionary with strong mutual exclusivity property. Such discriminative sparse codes lead to accurate separation of two layers from their non-linear composite. The experiments showed that the proposed method outperformed the existing single image de-raining methods on tested rain images.",
"This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples."
]
} |