Dataset schema: aid, string (9-15 chars); mid, string (7-10 chars); abstract, string (78-2.56k chars); related_work, string (92-1.77k chars); ref_abstract, dict.
1707.06426
2736477269
Recent developments in fully convolutional neural networks enable efficient end-to-end learning of semantic segmentation. Traditionally, convolutional classifiers are taught to learn the representative semantic features of labeled semantic objects. In this work, we propose a reverse attention network (RAN) architecture that also trains the network to capture the opposite concept (i.e., what is not associated with a target class). The RAN is a three-branch network that performs the direct, reverse and reverse-attention learning processes simultaneously. Extensive experiments are conducted to show the effectiveness of the RAN in semantic segmentation. Built upon DeepLabv2-LargeFOV, the RAN achieves a state-of-the-art mIoU score (48.1%) on the challenging PASCAL-Context dataset. Significant performance improvements are also observed on the PASCAL-VOC, Person-Part, NYUDv2 and ADE20K datasets.
Performance improvements have been achieved by introducing several new ideas. One is to integrate low- and high-level convolutional features in the network. This is motivated by the observation that pooling and stride operations offer a larger field of view (FOV) and extract semantic features with fewer convolutional layers, yet they decrease the resolution of the response maps and thus suffer from inaccurate localization. The combination of segmentation results from multiple layers was proposed in @cite_33 @cite_44 . Fusing multi-level features before the decision gives an even better performance, as shown in @cite_1 @cite_20 . Another idea, presented in @cite_41 , is to adopt a dilation architecture that increases the resolution of response maps while preserving large FOVs. In addition, both local- and long-range conditional random fields can be used to refine segmentation details, as done in @cite_27 @cite_43 . Recent advances in RefineNet @cite_35 and PSPNet @cite_6 show that a holistic understanding of the whole image @cite_32 can boost the segmentation performance further.
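The dilation (atrous) idea mentioned above enlarges the field of view without adding parameters or reducing resolution. A minimal pure-Python 1-D sketch (illustrative only, not any cited system's implementation) makes the mechanism concrete:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D convolution with a dilated kernel and zero padding at the borders.

    The effective receptive field of a kernel of size k grows to
    k + (k - 1) * (dilation - 1) taps without adding any parameters,
    and the output keeps the input resolution.
    """
    k = len(kernel)
    span = (k - 1) * dilation          # distance between first and last tap
    pad = span // 2
    padded = [0.0] * pad + list(signal) + [0.0] * (span - pad)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j in range(k):
            acc += kernel[j] * padded[i + j * dilation]
        out.append(acc)
    return out

# A 3-tap averaging kernel: with dilation=2 it spans 5 input samples.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
dense  = dilated_conv1d(x, [1/3, 1/3, 1/3], dilation=1)
atrous = dilated_conv1d(x, [1/3, 1/3, 1/3], dilation=2)
```

Both outputs have the same length as the input; only the span of input samples each output value sees differs.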
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_41", "@cite_1", "@cite_32", "@cite_6", "@cite_44", "@cite_43", "@cite_27", "@cite_20" ], "mid": [ "2951402970", "1903029394", "2412782625", "2158865742", "2557327399", "2952596663", "", "", "", "" ], "abstract": [ "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. 
The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.", "Semantic segmentation requires a detailed labeling of image pixels by object category. 
Information derived from local image patches is necessary to describe the detailed shape of individual objects. However, this information is ambiguous and can result in noisy labels. Global inference of image content can instead capture the general semantic concepts present. We advocate that high-recall holistic inference of image concepts provides valuable information for detailed pixel labeling. We build a two-stream neural network architecture that facilitates information flow from holistic information to local pixels, while keeping common image features shared among the low-level layers of both the holistic analysis and segmentation branches. We empirically evaluate our network on four standard semantic segmentation datasets. Our network obtains state-of-the-art performance on PASCAL-Context and NYUDv2, and ablation studies verify its effectiveness on ADE20K and SIFT-Flow.", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "", "", "", "" ] }
1707.06426
2736477269
Another class of methods focuses on guiding the learning procedure with highlighted knowledge. For example, hard-example learning was adopted in @cite_31 to guide a network to focus on less confident cases. In addition, spatial information can be exploited to enhance features by considering coherence with neighboring patterns @cite_25 @cite_28 @cite_8 . Other information, such as object boundaries, can also be exploited to guide the segmentation toward more accurate object shape predictions @cite_24 @cite_42 .
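The hard-example idea behind OHEM-style training can be sketched in a few lines: rank a batch by per-example loss and back-propagate only through the hardest ones. This is a minimal illustrative sketch, not the cited implementation:

```python
def ohem_select(losses, keep):
    """Online hard example mining over one batch.

    Returns the indices of the `keep` highest-loss examples and the mean
    loss over them, which replaces the full-batch mean in the backward pass
    so that easy, already-solved examples stop dominating the gradient.
    """
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    hard = ranked[:keep]
    mean_hard_loss = sum(losses[i] for i in hard) / len(hard)
    return hard, mean_hard_loss

# A batch dominated by easy (low-loss) examples: only the hard ones survive.
batch_losses = [0.05, 2.3, 0.01, 1.7, 0.02, 0.9]
idx, loss = ohem_select(batch_losses, keep=3)
```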
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_42", "@cite_24", "@cite_31", "@cite_25" ], "mid": [ "2950800384", "", "", "2962872526", "2341497066", "343636949" ], "abstract": [ "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "", "", "Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. 
Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.", "The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9 and 76.3 mAP on PASCAL VOC 2007 and 2012 respectively.", "This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. 
We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations." ] }
1707.06406
2737780579
In this paper we present SPREFQL, an extension of the SPARQL language that allows appending a PREFER clause that expresses "soft" preferences over the query results obtained by the main body of the query. The extension does not add expressivity and any SPREFQL query can be transformed to an equivalent standard SPARQL query. However, clearly separating preferences from the "hard" patterns and filters in the WHERE clause gives queries where the intention of the client is more cleanly expressed, an advantage for both human readability and machine optimization. In the paper we formally define the syntax and the semantics of the extension and we also provide empirical evidence that optimizations specific to SPREFQL improve run-time efficiency by comparison to the usually applied optimizations on the equivalent standard SPARQL query.
In the Semantic Web literature, SPARQL extensions have been proposed that support the expression of preferences @cite_1 , typically transferring ideas and results from relational database frameworks, much like the work presented here.
{ "cite_N": [ "@cite_1" ], "mid": [ "2488578254" ], "abstract": [ "The last decade has witnessed an increasing interest in expressing preferences inside database queries. Even though most of the work has been devoted to relational databases, many proposals have also been made in the Semantic Web area in order to query RDF databases in a flexible way. This paper presents a survey of these approaches, classifies them, and points out research perspectives." ] }
1707.06406
2737780579
When it comes to quantitative preferences, prominent examples include the extensions proposed by and . Closer to our work, influential database research on preferences includes the work of Kießling @cite_8 @cite_9 . This was used by to propose a SPARQL extension using a PREFERRING solution modifier. Contrary to our approach, these preferences are expressed using unary preference constructors. These constructors are of two types: boolean preferences, where the preferred elements fulfill a specific boolean condition while the non-preferred ones do not; and scoring preferences, denoted with a HIGHEST or LOWEST keyword, where the preferred elements have a higher (or lower) value than the non-preferred ones. Simple preferences expressed with these constructors can be further combined using Pareto and prioritized composition operators. further extended this language with an IF-THEN-ELSE clause, which allows expressing conditional preferences that apply only if a condition holds. Conditional preferences allow several other syntactic-sugar preference constructors to be defined, such as AROUND and BETWEEN .
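The boolean and scoring constructors with Pareto composition described above can be sketched in pure Python over solution mappings. All names here are illustrative, not any cited system's API; `best_matches` implements the best-matches-only answer (solutions not dominated by any other):

```python
def highest(key):
    """Scoring preference: x is preferred to y iff x has a higher value."""
    return lambda x, y: x[key] > y[key]

def boolean(pred):
    """Boolean preference: x is preferred to y iff x satisfies the
    condition and y does not."""
    return lambda x, y: pred(x) and not pred(y)

def pareto(*prefs):
    """Pareto composition: x dominates y iff x is strictly better in at
    least one component preference and worse in none."""
    def dominates(x, y):
        better = any(p(x, y) for p in prefs)
        worse = any(p(y, x) for p in prefs)
        return better and not worse
    return dominates

def best_matches(solutions, pref):
    """Best-matches-only answer: solutions not dominated by any other."""
    return [s for s in solutions
            if not any(pref(t, s) for t in solutions if t is not s)]

# "a" is cheap, "b" has the most horsepower: neither dominates the other,
# while "c" is dominated by "b" and dropped from the answer.
cars = [
    {"make": "a", "hp": 150, "cheap": True},
    {"make": "b", "hp": 200, "cheap": False},
    {"make": "c", "hp": 120, "cheap": False},
]
pref = pareto(highest("hp"), boolean(lambda s: s["cheap"]))
answer = best_matches(cars, pref)
```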
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "1501488077", "2126500482" ], "abstract": [ "Current search engines can hardly cope adequately with fuzzy predicates defined by complex preferences. The biggest problem of search engines implemented with standard SQL is that SQL does not directly understand the notion of preferences. Preference SQL extends SQL by a preference model based on strict partial orders (presented in more detail in the companion paper [Kie02]), where preference queries behave like soft selection constraints. Several built-in base preference types and the powerful Pareto operator, combined with the adherence to declarative SQL programming style, guarantees great programming productivity. The Preference SQL optimizer does an efficient re-writing into standard SQL, including a high-level implementation of the skyline operator for Pareto-optimal sets. This pre-processor approach enables a seamless application integration, making Preference SQL available on all major SQL platforms. Several commercial B2C portals are powered by Preference SQL. Its benefits comprise cooperative query answering and smart customer advice, leading to higher e-customer satisfaction and shorter development times of personalized search engines. We report practical experiences ranging from m-commerce and comparison shopping to a large-scale performance test for a job portal.", "Personalization of e-services poses new challenges to database technology, demanding a powerful and flexible modeling technique for complex preferences. Preference queries have to be answered cooperatively by treating preferences as soft constraints, attempting a best possible match-making. We propose a strict partial order semantics for preferences, which closely matches people's intuition. A variety of natural and of sophisticated preferences are covered by this model. We show how to inductively construct complex preferences by means of various preference constructors. 
This model is the key to a new discipline called preference engineering and to a preference algebra. Given the Best-Matches-Only (BMO) query model we investigate how complex preference queries can be decomposed into simpler ones, preparing the ground for divide & conquer algorithms. Standard SQL and XPATH can be extended seamlessly by such preferences (presented in detail in the companion paper [15]). We believe that this model is appropriate to extend database technology towards effective support of personalization." ] }
1707.06002
2737290746
An important skill in critical thinking and argumentation is the ability to spot and recognize fallacies. Fallacious arguments, omnipresent in argumentative discourse, can be deceptive, manipulative, or simply lead to 'wrong moves' in a discussion. Despite their importance, argumentation scholars and NLP researchers with a focus on argumentation quality have not yet investigated fallacies empirically. The nonexistence of resources dealing with fallacious argumentation calls for scalable approaches to data acquisition and annotation, for which the serious games methodology offers an appealing, yet unexplored, alternative. We present Argotario, a serious game that deals with fallacies in everyday argumentation. Argotario is a multilingual, open-source, platform-independent application with strong educational aspects, accessible at www.argotario.net.
Surprisingly, the vast majority of current works on fallacies, and especially textbooks, present only toy examples that one is unlikely to encounter in real life [p. 432] Boudry.et.al.2015 . The distinction between fallacies and acceptable inference is fuzzy, and theories do not offer any practical guidance: fully-fledged fallacies are harder to find in real life than is commonly assumed @cite_12 . Accordingly, analysis of fallacies in actual argumentative discourse has been rather limited in scope and size. examined fallacies found in articles supporting creationism. manually analysed fallacies in newswire editorials in major U.S. newspapers before the invasion of Iraq in 2003. These two works rely on a list of several fallacy types, such as , , , , , and a few others.
{ "cite_N": [ "@cite_12" ], "mid": [ "2165516939" ], "abstract": [ "Philosophers of science have given up on the quest for a silver bullet to put an end to all pseudoscience, as such a neat formal criterion to separate good science from its contenders has proven elusive. In the literature on critical thinking and in some philosophical quarters, however, this search for silver bullets lives on in the taxonomies of fallacies. The attractive idea is to have a handy list of abstract definitions or argumentation schemes, on the basis of which one can identify bad or invalid types of reasoning, abstracting away from the specific content and dialectical context. Such shortcuts for debunking arguments are tempting, but alas, the promise is hardly if ever fulfilled. Different strands of research on the pragmatics of argumentation, probabilistic reasoning and ecological rationality have shown that almost every known type of fallacy is a close neighbor to sound inferences or acceptable moves in a debate. Nonetheless, the kernel idea of a fallacy as an erroneous type of argument is still retained by most authors. We outline a destructive dilemma we refer to as the Fallacy Fork: on the one hand, if fallacies are construed as demonstrably invalid form of reasoning, then they have very limited applicability in real life (few actual instances). On the other hand, if our definitions of fallacies are sophisticated enough to capture real-life complexities, they can no longer be held up as an effective tool for discriminating good and bad forms of reasoning. As we bring our schematic “fallacies” in touch with reality, we seem to lose grip on normative questions. Even approaches that do not rely on argumentation schemes to identify fallacies (e.g., pragma-dialectics) fail to escape the Fallacy Fork, and run up against their own version of it." ] }
1707.06314
2739126885
Calcium imaging is a technique for observing neuron activity as a series of images showing indicator fluorescence over time. Manually segmenting neurons is time-consuming, leading to research on automated calcium imaging segmentation (ACIS). We evaluated several deep learning models for ACIS on the Neurofinder competition datasets and report our best model: U-Net2DS, a fully convolutional network that operates on 2D mean summary images. U-Net2DS requires minimal domain-specific pre-/post-processing and parameter adjustment, and predictions are made on full @math images at @math 9K images per minute. It ranks third in the Neurofinder competition ( @math ) and is the best model to exclusively use deep learning. We also demonstrate useful segmentations on data from outside the competition. The model's simplicity, speed, and quality results make it a practical choice for ACIS and a strong baseline for more complex models in the future.
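The "2D mean summary image" that U-Net2DS consumes is simply the pixel-wise average of the fluorescence series over time. A minimal pure-Python sketch (illustrative only, not the authors' preprocessing code):

```python
def mean_summary_image(frames):
    """Collapse a time series of 2-D frames into one mean summary image.

    `frames` is a list of equally sized 2-D grids (lists of rows); the
    result is a single grid whose pixels average fluorescence over time,
    which a 2-D segmentation model can then consume directly.
    """
    t = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / t for c in range(cols)]
            for r in range(rows)]

# Two tiny 2x2 "frames": the summary image is their pixel-wise mean.
series = [[[0.0, 2.0], [4.0, 6.0]],
          [[2.0, 4.0], [6.0, 8.0]]]
summary = mean_summary_image(series)
```

Averaging over time is what makes whole-recording prediction cheap: the network sees one image per recording instead of thousands of frames.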
Deep learning has been explored extensively for medical image analysis, covered thoroughly in @cite_16 . Fully convolutional networks with skip connections were developed for semantic segmentation of natural images @cite_13 and applied to 2D medical images @cite_20 and 3D volumes @cite_1 @cite_22 . An analysis of the role of skip connections in fully convolutional architectures for biomedical segmentation is offered by @cite_4 .
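The skip connections mentioned above pair coarse, semantically rich features with fine-grained features from the matching encoder resolution. A 1-D pure-Python sketch of the pool/upsample/merge pattern (illustrative only, with simplified average pooling and nearest-neighbour upsampling):

```python
def pool2(x):
    """Average-pool a 1-D feature map by a factor of 2 (coarse branch)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample2(x):
    """Nearest-neighbour upsample by 2, restoring the input resolution."""
    return [v for v in x for _ in range(2)]

def skip_merge(fine, coarse_upsampled):
    """Skip connection: pair each upsampled coarse semantic value with the
    fine-grained value from the matching encoder layer."""
    return list(zip(fine, coarse_upsampled))

features = [1.0, 3.0, 2.0, 8.0]
coarse = pool2(features)            # half resolution, larger context
merged = skip_merge(features, upsample2(coarse))
```

The decoder then predicts from `merged`, so localization detail lost in pooling is recovered from the fine branch.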
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_1", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "", "", "2464708700", "2592929672", "1903029394", "2952232639" ], "abstract": [ "", "", "This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.", "Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. 
We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. 
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ." ] }
1707.06119
2737293151
In this work we propose a novel neural network architecture for the problem of human action recognition in videos. The proposed architecture expresses the processing steps of classical Fisher vector approaches, that is, dimensionality reduction by principal component analysis (PCA) projection, Gaussian mixture model (GMM) fitting and Fisher vector descriptor extraction, as network layers. In contrast to other methods, where these steps are performed consecutively and the corresponding parameters are learned in an unsupervised manner, having them defined as a single neural network allows us to refine the whole model discriminatively in an end-to-end fashion. Furthermore, we show that the proposed architecture can be used as a replacement for the fully connected layers in popular convolutional networks, achieving a comparable classification performance, or even significantly surpassing the performance of similar architectures while reducing the total number of trainable parameters by a factor of 5. We show that our method achieves significant improvements in comparison to the classical chain.
The first work that applied Fisher vector descriptors to the problem of action recognition in videos used HOG, HOF and MBH features @cite_9 extracted along dense trajectories as local features @cite_25 . The trajectories are extracted by defining a dense grid of points which are then tracked using optical flow estimated offline, thereby including motion information in the pipeline. By encoding the extracted trajectory features with the Fisher vector descriptor, this approach and the improved version of @cite_28 achieved state-of-the-art results for the action recognition problem.
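The Fisher vector encoding step can be sketched for the simplest case: 1-D local features, a diagonal GMM, and gradients with respect to the component means only. This is a minimal illustrative sketch under those assumptions, not the full descriptor (which also includes gradients with respect to the variances):

```python
import math

def fisher_vector_means(xs, weights, means, sigmas):
    """Fisher vector w.r.t. GMM means only (1-D features, diagonal case).

    For each component k the encoding aggregates posterior-weighted,
    variance-normalised residuals (x - mu_k) / sigma_k over all local
    features, scaled by 1 / (N * sqrt(w_k)).
    """
    n, K = len(xs), len(means)
    fv = [0.0] * K
    for x in xs:
        # Posterior (soft assignment) of x under each Gaussian component.
        lik = [w / (math.sqrt(2 * math.pi) * s) *
               math.exp(-0.5 * ((x - m) / s) ** 2)
               for w, m, s in zip(weights, means, sigmas)]
        z = sum(lik)
        for k in range(K):
            gamma = lik[k] / z
            fv[k] += gamma * (x - means[k]) / sigmas[k]
    return [v / (n * math.sqrt(weights[k])) for k, v in enumerate(fv)]

# Two well-separated components: a feature sitting exactly on a mean
# contributes (almost) nothing, an offset feature leaves a residual.
feats = [0.5, 5.0]
fv = fisher_vector_means(feats, weights=[0.5, 0.5],
                         means=[0.0, 5.0], sigmas=[1.0, 1.0])
```

The end-to-end architecture in the paper expresses exactly this kind of soft assignment and residual aggregation as differentiable layers, so the GMM parameters can be refined by back-propagation.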
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_25" ], "mid": [ "2105101328", "1993229407", "2126574503" ], "abstract": [ "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. 
Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations.", "Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports." ] }
1707.06012
2736986829
NMT systems have problems with large vocabulary sizes. Byte-pair encoding (BPE) is a popular approach to solving this problem, but while BPE allows the system to generate any target-side word, it does not enable effective generalization over the rich vocabulary in morphologically rich languages with strong inflectional phenomena. We introduce a simple approach to overcome this problem by training a system to produce the lemma of a word and its morphologically rich POS tag, which is then followed by a deterministic generation step. We apply this strategy for English-Czech and English-German translation scenarios, obtaining improvements in both settings. We furthermore show that the improvement is not due to only adding explicit morphological information.
Generation of unseen morphological variants has been tackled in various ways in the context of phrase-based models and other SMT approaches. Notably, two-step SMT was proposed to address this problem @cite_1 @cite_13 @cite_0 . In two-step SMT, a separate prediction model (such as a linear-chain CRF) is used to either directly predict the surface form (as in ) or used to predict the grammatical features, following which morphological generation is performed (as in ). Our work differs from their work in that we do not use a separate prediction model, but instead rely on predicting the lemmas and surface-forms as a single sequence in a neural machine translation model.
{ "cite_N": [ "@cite_13", "@cite_0", "@cite_1" ], "mid": [ "2119741585", "", "2117642127" ], "abstract": [ "The paper describes our experiments with English-Czech machine translation for WMT10 in 2010. Focusing primarily on the translation to Czech, our additions to the standard Moses phrase-based MT pipeline include two-step translation to overcome target-side data sparseness and optimization towards SemPOS, a metric better suited for evaluating Czech. Unfortunately, none of the approaches bring a significant improvement over our standard setup.", "", "We improve the quality of statistical machine translation (SMT) by applying models that predict word forms from their stems using extensive morphological and syntactic information from both the source and target languages. Our inflection generation models are trained independently of the SMT system. We investigate different ways of combining the inflection prediction component with the SMT system by training the base MT system on fully inflected forms or on word stems. We applied our inflection generation models in translating English into two morphologically complex languages, Russian and Arabic, and show that our model improves the quality of SMT over both phrasal and syntax-based SMT systems according to BLEU and human judgements." ] }
1707.06006
2737150635
In this paper, we establish that, for statistically convex-cocompact actions, contracting elements are exponentially generic in counting measure. Among others, the following exponential genericity results are obtained as corollaries for the set of hyperbolic elements in relatively hyperbolic groups, the set of rank-1 elements in CAT(0) groups, and the set of pseudo-Anosov elements in mapping class groups. Regarding a proper action, the set of non-contracting elements is proven to be growth-negligible. In particular, for mapping class groups, the set of pseudo-Anosov elements is generic in a sufficiently large subgroup, provided that the subgroup has purely exponential growth. By Roblin's work, we obtain that the set of hyperbolic elements is generic in any discrete group action on CAT(-1) space with finite BMS measure. Applications to the number of conjugacy classes of non-contracting elements are given for non-rank-1 geodesics in CAT(0) groups with rank-1 elements.
This generalizes the recent result of Cumplido and Wiest @cite_24 in mapping class groups that a positive proportion of elements are pseudo-Anosov. See also Cumplido @cite_20 for a similar result in Artin-Tits groups.
{ "cite_N": [ "@cite_24", "@cite_20" ], "mid": [ "2952236103", "2691309097" ], "abstract": [ "In the Cayley graph of the mapping class group of a closed surface, with respect to any generating set, we look at a ball of large radius centered on the identity vertex, and at the proportion among the vertices in this ball representing pseudo-Anosov elements. A well-known conjecture states that this proportion should tend to one as the radius tends to infinity. We prove that it stays bounded away from zero. We also prove similar results for a large class of subgroups of the mapping class group.", "Artin-Tits groups act on a certain delta-hyperbolic complex, called the \"additional length complex\". For an element of the group, acting loxodromically on this complex is a property analogous to the property of being pseudo-Anosov for elements of mapping class groups. By analogy with a well-known conjecture about mapping class groups, we conjecture that \"most\" elements of Artin-Tits groups act loxodromically. More precisely, in the Cayley graph of a subgroup @math of an Artin-Tits group, the proportion of loxodromically acting elements in a ball of large radius should tend to one as the radius tends to infinity. In this paper, we give a condition guaranteeing that this proportion stays away from zero. This condition is satisfied e.g. for Artin-Tits groups of spherical type, their pure subgroups and some of their commutator subgroups." ] }
1707.06006
2737150635
In this paper, we establish that, for statistically convex-cocompact actions, contracting elements are exponentially generic in counting measure. Among others, the following exponential genericity results are obtained as corollaries for the set of hyperbolic elements in relatively hyperbolic groups, the set of rank-1 elements in CAT(0) groups, and the set of pseudo-Anosov elements in mapping class groups. Regarding a proper action, the set of non-contracting elements is proven to be growth-negligible. In particular, for mapping class groups, the set of pseudo-Anosov elements is generic in a sufficiently large subgroup, provided that the subgroup has purely exponential growth. By Roblin's work, we obtain that the set of hyperbolic elements is generic in any discrete group action on CAT(-1) space with finite BMS measure. Applications to the number of conjugacy classes of non-contracting elements are given for non-rank-1 geodesics in CAT(0) groups with rank-1 elements.
The structure of the rest of this paper is as follows. The preliminary section recalls necessary results proved in @cite_28 . is first proved in section . A general theorem is stated in , from which we deduce , and Theorem , . Its proof is given by assuming an almost geodesic decomposition in Proposition , which is the goal of the following three sections , , . More preliminary material is recalled in to give a brief introduction to the projection complex and quasi-trees of spaces. They are used in the following to prove Lemma , which is the starting point of the proof of Proposition occupying the final .
{ "cite_N": [ "@cite_28" ], "mid": [ "2587472154" ], "abstract": [ "This paper presents a study of the asymptotic geometry of groups with contracting elements, with emphasis on a subclass of statistically convex-cocompact (SCC) actions. The class of SCC actions includes relatively hyperbolic groups, CAT(0) groups with rank-1 elements and mapping class groups, among others. We exploit an extension lemma to prove that a group with SCC actions contains large free sub-semigroups, has purely exponential growth and contains a class of barrier-free sets with a growth-tight property. Our study produces new results and recovers existing ones for many interesting groups through a unified and elementary approach." ] }
1707.06005
2736463398
In this paper, we propose a new framework for action localization that tracks people in videos and extracts full-body human tubes, i.e., spatio-temporal regions localizing actions, even in the case of occlusions or truncations. This is achieved by training a novel human part detector that scores visible parts while regressing full-body bounding boxes. The core of our method is a convolutional neural network which learns part proposals specific to certain body parts. These are then combined to detect people robustly in each frame. Our tracking algorithm connects the image detections temporally to extract full-body human tubes. We apply our new tube extraction method on the problem of human action localization, on the popular JHMDB dataset, and a very recent challenging dataset DALY (Daily Action Localization in YouTube), showing state-of-the-art results.
Recent work has leveraged the success of deep learning for vision tasks in the context of human action localization @cite_12 @cite_5 @cite_11 @cite_3 , by using successful object detectors, like region proposal-based convolutional neural networks @cite_9 . Region-CNNs (R-CNNs) are trained for both appearance and motion cues in these methods to classify region proposals in individual frames. Human tubes are then obtained by combining class-specific detections with either temporal linking based on proximity @cite_12 , or with a tracking-by-detection approach @cite_5 . State-of-the-art methods @cite_11 @cite_3 rely on an improved version of R-CNN, e.g., Faster R-CNN @cite_9 , trained on appearance and flow. These methods make extensive use of bounding box annotations in every frame for training the network. Although this scheme is accurate for short videos, it is not scalable to long videos with viewpoint changes and close-ups, such as the examples shown in Figure . Our method automatically determines the best part to track and infers the global localization of the person from the part alone. The merging step of this inference for each part refines the final bounding box proposed for the frame.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2953106684", "2484328966", "2950966695", "1923332106", "2519080876" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages. In stage 1, appearance and motion detection networks are employed to localise and score actions from colour images and optical flow. 
In stage 2, the appearance network detections are boosted by combining them with the motion detection scores, in proportion to their respective spatial overlap. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. While in the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap, in the second pass, temporal trimming is performed by ensuring label consistency for all constituting detection boxes. We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly increasing detection speed at test time. We achieve a huge leap forward in action detection performance and report a 20% and 11% gain in mAP (mean average precision) on UCF-101 and J-HMDB-21 datasets respectively when compared to the state-of-the-art.", "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. 
We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.", "We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.", "We propose a multi-region two-stream R-CNN model for action detection in realistic videos. We start from frame-level action detection based on faster R-CNN [1], and make three contributions: (1) we show that a motion region proposal network generates high-quality proposals, which are complementary to those of an appearance region proposal network; (2) we show that stacking optical flow over several frames significantly improves frame-level action detection; and (3) we embed a multi-region scheme in the faster R-CNN model, which adds complementary information on body parts. We then link frame-level detections with the Viterbi algorithm, and temporally localize an action with the maximum subarray method. Experimental results on the UCF-Sports, J-HMDB and UCF101 action detection datasets show that our approach outperforms the state of the art with a significant margin in both frame-mAP and video-mAP." ] }
1707.06480
2737064181
Syllabification does not seem to improve word-level RNN language modeling quality when compared to character-based segmentation. However, our best syllable-aware language model, achieving performance comparable to the competitive character-aware model, has 18%-33% fewer parameters and is trained 1.2-2.2 times faster.
Much research has been done on subword-level and subword-aware neural language modeling (subword-level LMs rely on subword-level inputs and make predictions at the level of subwords; subword-aware LMs also rely on subword-level inputs but make predictions at the level of words) when subwords are characters @cite_25 @cite_6 @cite_5 or morphemes @cite_15 @cite_4 @cite_20 . However, not much work has been done on syllable-level or syllable-aware NLM. show that subword-level language models (not to be confused with character-aware ones) outperform character-level ones. They keep the most frequent words untouched and split all other words into syllable-like units. Our approach differs mainly in the following aspects: we make predictions at the word level, use a more linguistically sound syllabification algorithm, and consider a variety of more advanced neural architectures.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_5", "@cite_15", "@cite_25", "@cite_20" ], "mid": [ "2142377809", "1938755728", "2951450659", "2949679234", "2949563612", "" ], "abstract": [ "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.", "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. 
On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "We present a Character-Word Long Short-Term Memory Language Model which both reduces the perplexity with respect to a baseline word-level language model and reduces the number of parameters of the model. Character information can reveal structural (dis)similarities between words and can even be used when a word is out-of-vocabulary, thus improving the modeling of infrequent and unknown words. By concatenating word and character embeddings, we achieve up to 2.77% relative improvement on English compared to a baseline model with a similar amount of parameters and 4.57% on Dutch. Moreover, we also outperform baseline word-level models with a larger number of parameters.", "This paper presents a scalable method for integrating compositional morphological representations into a vector-based probabilistic language model. Our approach is evaluated in the context of log-bilinear language models, rendered suitably efficient for implementation inside a machine translation decoder by factoring the vocabulary. We perform both intrinsic and extrinsic evaluations, presenting results on a range of languages which demonstrate that our model learns morphological representations that both perform well on word similarity tasks and lead to substantial reductions in perplexity. 
When used for translation into morphologically rich languages with large vocabularies, our models obtain improvements of up to 1.2 BLEU points relative to a baseline system using back-off n-gram models.", "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "" ] }
1707.06480
2737064181
Syllabification does not seem to improve word-level RNN language modeling quality when compared to character-based segmentation. However, our best syllable-aware language model, achieving performance comparable to the competitive character-aware model, has 18%-33% fewer parameters and is trained 1.2-2.2 times faster.
We have recently come across a concurrent paper @cite_11 where the authors systematically compare different subword units (characters, character trigrams, BPE @cite_7 , morphemes) and different representation models (CNN, Bi-LSTM, summation) on languages with various morphological typology. However, they do not consider syllables, and they experiment with relatively small models on small data sets (0.6M--1.4M tokens).
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "1816313093", "2611767671" ], "abstract": [ "Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively.", "Words can be represented by composing the representations of subword units such as word segments, characters, and or character n-grams. While such representations are effective and may capture the morphological regularities of words, they have not been systematically compared, and it is not understood how they interact with different morphological typologies. On a language modeling task, we present experiments that systematically vary (1) the basic unit of representation, (2) the composition of these representations, and (3) the morphological typology of the language modeled. 
Our results extend previous findings that character representations are effective across typologies, and we find that a previously unstudied combination of character trigram representations composed with bi-LSTMs outperforms most others. But we also find room for improvement: none of the character-level models match the predictive accuracy of a model with access to true morphological analyses, even when learned from an order of magnitude more data." ] }
1707.06022
2737601922
In the appstore-centric ecosystem, app developers have an urgent requirement to optimize their release strategy to maximize the success opportunity of their apps. To address this problem, we introduce an approach to assisting developers to select the proper release opportunity based on the purpose of the update and current condition of the app. Before that, we propose the interval of an update to its previous update to characterize release patterns, and find significance of the release opportunity through empirical analysis. We mined the update-history data of 17,820 apps from 33 categories in Google Play, over a period of 105 days. With 41,028 releases identified from these apps, we reveal important characteristics of update intervals and how these factors can influence update effects. We suggest developers to synthetically consider app ranking, rating trend, and what to update in addition to the opportunity before releasing an app version. We propose a Multinomial Naive Bayes model to help decide an optimal release opportunity to gain better user adoption.
Despite all the work discussed above on pushing for shorter release cycles, empirical evidence that directly shows its improvements on product quality is lacking. @cite_42 report that bugs are fixed faster under rapid release models, but proportionally fewer bugs are fixed compared to the traditional release model, based on their study of the release model of Firefox. @cite_14 found that short release cycles require a more steady flow of releases to control the number of reported bugs. Releasing too frequently not only decreases the time to run tests, but also might make customers weary of yet another update to install ( @cite_36 , @cite_17 ), a situation that similarly applies to app stores.
{ "cite_N": [ "@cite_36", "@cite_14", "@cite_42", "@cite_17" ], "mid": [ "", "2088566493", "2089472668", "2114734045" ], "abstract": [ "", "This paper tells the story of how the team managed to speed up delivery whilst raising quality at the same time. Conject used to release complete modules of their Internet platform once every six months. Customers were forced to wait for months to get access to new features. Once they arrived, the big bang releases disrupted the user experience due to bugs lurking in the new software. Despite what seemed to be an impossible task, the team successfully transformed their software development process to deliver a stable and continuous flow of small releases. Now features are delivered to customers more quickly and with much higher quality.", "Nowadays, many software companies are shifting from the traditional 18-month release cycle to shorter release cycles. For example, Google Chrome and Mozilla Firefox release new versions every 6 weeks. These shorter release cycles reduce the users' waiting time for a new release and offer better marketing opportunities to companies, but it is unclear if the quality of the software product improves as well, since shorter release cycles result in shorter testing periods. In this paper, we empirically study the development process of Mozilla Firefox in 2010 and 2011, a period during which the project transitioned to a shorter release cycle. We compare crash rates, median uptime, and the proportion of post-release bugs of the versions that had a shorter release cycle with those having a traditional release cycle, to assess the relation between release cycle length and the software quality observed by the end user. 
We found that (1) with shorter release cycles, users do not experience significantly more post-release bugs and (2) bugs are fixed faster, yet (3) users experience these bugs earlier during software execution (the program crashes earlier).", "The decision for a young product software vendor to release a version of their product is dependent on different factors, such as development decisions (it feels right), sales decisions (the market needs it), and quality decisions (the product is stable). Customers of these products, however, are much more cost oriented when deciding whether to update their product or not, and will look mainly at the cost and value of an update. Product software vendors would gain tremendously if their release package planning method was supported by a similar cost value overview. This paper presents cost value functions for product software vendors to support their release package planning method. These cost value functions are supported by ten misconceptions encountered in seven case studies of product software vendors that these vendors had to adjust during their lifetime. Finally, a number of cost saving opportunities are presented to enable quicker adoption of a release and thus shorten release times and customer feedback cycles." ] }
1707.06170
2738675347
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
Planning with ground-truth models has been studied heavily and has led to remarkable advances. AlphaGo @cite_8, the world-champion computer Go system, trains a policy to decide how to expand the search tree using a known transition model. Planning in continuous domains with fixed models must usually exploit background knowledge to sample actions efficiently @cite_10 @cite_0 @cite_17. Several recent efforts @cite_5 @cite_15 @cite_23 addressed planning in complex systems; however, the planning itself relies on classical methods, e.g., stochastic optimal control, trajectory optimization, and model-predictive control.
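The model-based loop these works share (propose candidate action sequences, evaluate each by rolling it out through the model, execute the best) can be sketched as a simple random-shooting planner. This is a generic illustration under stated assumptions, not the method of any cited paper; `model` and `reward_fn` are assumed to be given as callables.

```python
import random

def rollout_return(model, reward_fn, state, actions):
    """Evaluate a candidate action sequence by imagining it with the model."""
    total = 0.0
    for a in actions:
        state = model(state, a)       # imagined next state
        total += reward_fn(state, a)  # imagined reward
    return total

def random_shooting_plan(model, reward_fn, state, action_space,
                         horizon=5, n_candidates=300):
    """Sample candidate plans, score each with the model, keep the best."""
    best_plan, best_score = None, float("-inf")
    for _ in range(n_candidates):
        plan = [random.choice(action_space) for _ in range(horizon)]
        score = rollout_return(model, reward_fn, state, plan)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan
```

The cited classical methods replace the random proposal step with background knowledge or gradient-based trajectory optimization; learning-to-plan approaches instead learn the proposal and evaluation machinery itself.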
{ "cite_N": [ "@cite_8", "@cite_0", "@cite_23", "@cite_5", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2257979135", "1540821927", "2953317238", "834081922", "2281096776", "1626155273", "2096678000" ], "abstract": [ "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.", "If one possesses a model of a controlled deterministic system, then from any state, one may consider the set of all possible reachable states starting from that state and using any sequence of actions. This forms a tree whose size is exponential in the planning time horizon. Here we ask the question: given finite computational resources (e.g. CPU time), which may not be known ahead of time, what is the best way to explore this tree, such that once all resources have been used, the algorithm would be able to propose an action (or a sequence of actions) whose performance is as close as possible to optimality? 
The performance with respect to optimality is assessed in terms of the regret (with respect to the sum of discounted future rewards) resulting from choosing the action returned by the algorithm instead of an optimal action. In this paper we investigate an optimistic exploration of the tree, where the most promising states are explored first, and compare this approach to a naive uniform exploration. Bounds on the regret are derived both for uniform and optimistic exploration strategies. Numerical simulations illustrate the benefit of optimistic planning.", "A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments, without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training.", "We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. 
Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems.", "Designing controllers for tasks with complex nonlinear dynamics is extremely challenging, time-consuming, and in many cases, infeasible. This difficulty is exacerbated in tasks such as robotic food-cutting, in which dynamics might vary both with environmental properties, such as material and tool class, and with time while acting. In this work, we present DeepMPC, an online real-time model-predictive control approach designed to handle such difficult tasks. Rather than hand-design a dynamics model for the task, our approach uses a novel deep architecture and learning algorithm, learning controllers for complex tasks directly from data. We validate our method in experiments on a large-scale dataset of 1488 material cuts for 20 diverse classes, and in 450 real-world robotic experiments, demonstrating significant improvement over several other approaches.", "From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. 
Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl rlbook for additional material, including computer code used in the studies and information concerning new developments.", "We consider a global optimization problem of a deterministic function f in a semi-metric space, given a finite budget of n evaluations. 
The function f is assumed to be locally smooth (around one of its global maxima) with respect to a semi-metric l. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm, DOO, that requires the knowledge of l. We report a finite-sample performance bound in terms of a measure of the quantity of near-optimal states. We then define a second algorithm, SOO, which does not require the knowledge of the semi-metric l under which f is smooth, and whose performance is almost as good as DOO optimally-fitted." ] }
1707.06170
2738675347
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
There have also been various efforts to learn to plan. The classic "Dyna" algorithm learns a model, which is then used to train a policy @cite_12. @cite_22 proposed a method that learns to initialize and update a plan, but which does not use a model and instead directly maps new observations to plan updates. The value iteration network @cite_25 and the predictron @cite_20 both train deep networks to plan implicitly via iterative rollouts. However, the former does not use a model, and the latter uses an abstract model that does not capture the world dynamics and was only applied to learning Markov reward processes rather than to solving Markov decision processes (MDPs).
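To make the Dyna idea concrete, here is a minimal tabular Dyna-Q sketch: each real transition is used for a direct Q-update, recorded in a learned model, and then replayed several times as imagined experience. This is a generic illustration of the @cite_12 learn-a-model-then-plan loop, not code from that paper; the state/action encodings are assumptions.

```python
import random
from collections import defaultdict

def dyna_q_update(Q, model, s, a, r, s2, actions,
                  alpha=0.1, gamma=0.95, n_planning=10):
    """One Dyna step: direct RL on a real transition, model learning,
    then extra Q-updates on imagined (replayed) transitions."""
    # Direct reinforcement learning from the real experience.
    target = r + gamma * max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    # Model learning: remember the observed outcome of (s, a).
    model[(s, a)] = (r, s2)
    # Planning: replay randomly chosen remembered transitions.
    for _ in range(n_planning):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        target = pr + gamma * max(Q[(ps2, b)] for b in actions)
        Q[(ps, pa)] += alpha * (target - Q[(ps, pa)])
```

The planning loop lets each real transition be reused many times, so value estimates improve with far less real experience; this is the sense in which the model "trains the policy".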
{ "cite_N": [ "@cite_20", "@cite_25", "@cite_22", "@cite_12" ], "mid": [ "2567374473", "2258731934", "2442341664", "1980035368" ], "abstract": [ "One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple \"imagined\" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.", "We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.", "We present a novel deep recurrent neural network architecture that learns to build implicit plans in an end-to-end manner by purely interacting with an environment in reinforcement learning setting. The network builds an internal plan, which is continuously updated upon observation of the next input from the environment. 
It can also partition this internal representation into contiguous subsequences by learning for how long the plan can be committed to - i.e. followed without re-planning. Combining these properties, the proposed model, dubbed STRategic Attentive Writer (STRAW) can learn high-level, temporally abstracted macro-actions of varying lengths that are solely learnt from data without any prior information. These macro-actions enable both structured exploration and economic computation. We experimentally demonstrate that STRAW delivers strong improvements on several ATARI games by employing temporally extended planning strategies (e.g. Ms. Pacman and Frostbite). It is at the same time a general algorithm that can be applied on any sequence data. To that end, we also show that when trained on text prediction task, STRAW naturally predicts frequent n-grams (instead of macro-actions), demonstrating the generality of the approach.", "Dyna is an AI architecture that integrates learning, planning, and reactive execution. Learning methods are used in Dyna both for compiling planning results and for updating a model of the effects of the agent's actions on the world. Planning is incremental and can use the probabilistic and ofttimes incorrect world models generated by learning processes. Execution is fully reactive in the sense that no planning intervenes between perception and action. Dyna relies on machine learning methods for learning from examples---these are among the basic building blocks making up the architecture---yet is not tied to any particular method. This paper briefly introduces Dyna and discusses its strengths and weaknesses with respect to other architectures." ] }
1707.06170
2738675347
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
Our approach is also related to classic work on meta-reasoning @cite_1 @cite_14 @cite_21, in which an internal MDP schedules computations, which carry costs, in order to solve a task. More recently, neural networks have been trained to perform "conditional" and "adaptive computation" @cite_16 @cite_19 @cite_26, which results in a dynamic computational graph.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_21", "@cite_1", "@cite_19", "@cite_16" ], "mid": [ "2952456051", "2325237720", "", "", "2179423374", "2951163624" ], "abstract": [ "Although many investigators affirm a desire to build reasoning systems that behave consistently with the axiomatic basis defined by probability theory and utility theory, limited resources for engineering and computation can make a complete normative analysis impossible. We attempt to move discussion beyond the debate over the scope of problems that can be handled effectively to cases where it is clear that there are insufficient computational resources to perform an analysis deemed as complete. Under these conditions, we stress the importance of considering the expected costs and benefits of applying alternative approximation procedures and heuristics for computation and knowledge acquisition. We discuss how knowledge about the structure of user utility can be used to control value tradeoffs for tailoring inference to alternative contexts. We address the notion of real-time rationality, focusing on the application of knowledge about the expected timewise-refinement abilities of reasoning strategies to balance the benefits of additional computation with the costs of acting with a partial result. We discuss the benefits of applying decision theory to control the solution of difficult problems given limitations and uncertainty in reasoning resources.", "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. 
Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.", "", "", "Deep learning has become the state-of-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (, 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.", "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. 
Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these challenges." ] }
1707.06170
2738675347
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
Recently, @cite_18 trained a "visual imagination" model to control simulated billiards systems, though their system did not learn to plan. Our IBP was most inspired by the "imagination-based metacontroller" (IBMC) @cite_2, which learned an adaptive optimization policy for one-shot decision-making in contextual bandit problems. Our IBP, however, learns an adaptive planning policy in the more general and challenging class of sequential decision-making problems. Similar to our work, the study of @cite_4 looks in detail at dealing with imperfect, complex models of the world, operating on pixels, in discrete sequential decision-making processes.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_2" ], "mid": [ "2271155703", "", "2613603362" ], "abstract": [ "The ability to plan and execute goal specific actions in varied, unexpected settings is a central requirement of intelligent agents. In this paper, we explore how an agent can be equipped with an internal model of the dynamics of the external world, and how it can use this model to plan novel actions by running multiple internal simulations (\"visual imagination\"). Our models directly process raw visual input, and use a novel object-centric prediction formulation based on visual glimpses centered on objects (fixations) to enforce translational invariance of the learned physical laws. The agent gathers training data through random interaction with a collection of different environments, and the resulting model can then be used to plan goal-directed actions in novel environments that the agent has not seen before. We demonstrate that our agent can accurately plan actions for playing a simulated billiards game, which requires pushing a ball into a target position or into collision with another ball.", "", "Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this \"one-size-fits-all\" approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of \"imagined\" internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. 
The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call \"experts\") can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with \"interaction networks\" (, 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using..." ] }
1707.06342
2737100304
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 @math FLOPs reduction and 16.63 @math compression on VGG-16, with only 0.52 @math top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 @math top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
Many researchers have found that deep models suffer from heavy over-parameterization. For example, Denil et al. @cite_38 demonstrated that a network can be efficiently reconstructed with only a small subset of its original parameters. However, this redundancy seems necessary during model training, since the highly non-convex optimization problem is hard to solve with current techniques @cite_35 @cite_20 . Hence, there is a great need to reduce model size after its training.
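The redundancy result can be illustrated with a small sketch: when a layer's weight matrix is close to low rank, a truncated SVD reconstructs it accurately from far fewer numbers. This is a generic illustration of the idea, not Denil et al.'s exact weight-prediction scheme; the matrix sizes and noise level are arbitrary assumptions.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Approximate an m x n weight matrix W by a rank-`rank` factorization
    U_k @ V_k, storing rank * (m + n) numbers instead of m * n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_k = U[:, :rank] * s[:rank]  # absorb singular values into the left factor
    V_k = Vt[:rank, :]
    return U_k, V_k

# Redundant weights: a 64x64 matrix that is secretly rank 4 plus small noise,
# so 512 stored numbers suffice in place of 4096.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 4)) @ rng.normal(size=(4, 64)) \
    + 0.01 * rng.normal(size=(64, 64))
U_k, V_k = low_rank_compress(W, rank=4)
relative_error = np.linalg.norm(W - U_k @ V_k) / np.linalg.norm(W)
```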
{ "cite_N": [ "@cite_38", "@cite_20", "@cite_35" ], "mid": [ "2952899695", "", "2167215970" ], "abstract": [ "We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95 of the weights of a network without any drop in accuracy.", "", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model." ] }
1707.06342
2737100304
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 @math FLOPs reduction and 16.63 @math compression on VGG-16, with only 0.52 @math top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 @math top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
Some methods have been proposed to pursue a balance between model size and accuracy. Han et al. @cite_31 proposed an iterative pruning method to remove the redundancy in deep models. Their main insight is that small-weight connections below a threshold should be discarded. In practice, this can be aided by applying @math or @math regularization to push connectivity values toward zero. The major weakness of this strategy is its loss of universality and flexibility, which makes it less practical in real applications.
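The iterative magnitude-based pruning described above can be sketched as one threshold-and-mask step in NumPy. The helper name `magnitude_prune` and the exact tie handling at the threshold are assumptions of this sketch; a real pipeline would alternate such steps with retraining under the returned mask.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One step of magnitude pruning (sketch): zero out the smallest-
    magnitude fraction of connections and return a boolean mask so the
    pruned connections can be held at zero during retraining."""
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))  # number of connections to prune
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold
    thresh = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > thresh
    return weights * mask, mask
```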
{ "cite_N": [ "@cite_31" ], "mid": [ "2963674932" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
1707.06342
2737100304
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 @math FLOPs reduction and 16.63 @math compression on VGG-16, with only 0.52 @math top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1 @math top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
To avoid these weaknesses, some attention has been focused on group-wise sparsity. Lebedev and Lempitsky @cite_27 explored group-sparse convolution by adding a group-sparsity regularization term to the loss function, so that entire groups of weights shrink to zero and can be removed. Similarly, Wen et al. @cite_17 proposed the Structured Sparsity Learning (SSL) method to regularize filter, channel, filter-shape and depth structures. In spite of their success, these approaches destroy the original network structure; as a result, dedicated libraries are needed to realize the inference speed-up.
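The group-sparsity regularization mentioned above, a penalty that drives entire groups of weights to zero together, can be sketched as a group-lasso term where each output filter of a convolutional layer forms one group. This is an illustrative sketch only; one group per filter is just one of the structures (filters, channels, shapes, depth) that SSL regularizes.

```python
import numpy as np

def group_lasso_penalty(conv_w, lam=1e-4):
    """Group-lasso regularizer over whole filters (sketch).

    conv_w: (out_channels, in_channels, kh, kw) convolution weights;
            each output filter is treated as one group.
    Returns lam * sum of per-group l2 norms, which pushes entire
    filters toward exact zero when added to the training loss."""
    groups = conv_w.reshape(conv_w.shape[0], -1)
    return lam * np.sqrt((groups ** 2).sum(axis=1)).sum()
```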
{ "cite_N": [ "@cite_27", "@cite_17" ], "mid": [ "566555209", "2513419314" ], "abstract": [ "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL" ] }
1707.06065
2737615485
Layer normalization is a recently introduced technique for normalizing the activities of neurons in deep neural networks to improve the training speed and stability. In this paper, we introduce a new layer normalization technique called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling in speech recognition. By dynamically generating the scaling and shifting parameters in layer normalization, DLN adapts neural acoustic models to the acoustic variability arising from various factors such as speakers, channel noises, and environments. Unlike other adaptive acoustic models, our proposed approach does not require additional adaptation data or speaker information such as i-vectors. Moreover, the model size is fixed as it dynamically generates adaptation parameters. We apply our proposed DLN to deep bidirectional LSTM acoustic models and evaluate them on two benchmark datasets for large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The experimental results show that our DLN improves neural acoustic models in terms of transcription accuracy by dynamically adapting to various speakers and environments.
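The core of DLN described in the abstract above, generating the layer-normalization gain and bias dynamically from an utterance-level summary instead of learning them as fixed parameters, can be sketched as follows. The projection matrices `Wg`, `Wb` and the `tanh` squashing are hypothetical stand-ins for whatever small network generates the adaptation parameters in the actual model.

```python
import numpy as np

def dynamic_layer_norm(h, utt_summary, Wg, bg, Wb, bb, eps=1e-5):
    """Layer normalization with dynamically generated scale/shift
    (sketch of the DLN idea).

    h:           (..., hidden) activations to normalize
    utt_summary: (summary_dim,) utterance-level summary vector
    Wg, bg:      hypothetical projection producing the gain
    Wb, bb:      hypothetical projection producing the bias
    """
    gain = np.tanh(utt_summary @ Wg + bg)  # dynamic scaling parameters
    bias = np.tanh(utt_summary @ Wb + bb)  # dynamic shifting parameters
    mu = h.mean(axis=-1, keepdims=True)
    sigma = h.std(axis=-1, keepdims=True)
    return gain * (h - mu) / (sigma + eps) + bias
```

Because the gain and bias are functions of the utterance summary, no speaker-specific parameters need to be stored, which matches the fixed-model-size property claimed above.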
* Adaptive Acoustic Modeling Adaptive acoustic modeling can be broadly categorized into two groups: 1) auxiliary feature-based and 2) model-based adaptation. Most auxiliary feature-based adaptation methods use i-vectors @cite_0 as auxiliary features in addition to the input acoustic features. I-vectors can be considered basis vectors spanning a subspace of speaker variability. In @cite_28 @cite_17 , i-vectors were used to augment the input acoustic features of DNN-based acoustic models, and appending i-vectors for each speaker was shown to improve transcription accuracy. The work in @cite_4 studied speaker-aware training of LSTM acoustic models based on i-vectors.
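The auxiliary-feature adaptation described above amounts to concatenating one fixed per-speaker i-vector to every acoustic frame of that speaker. A minimal sketch (the helper name `append_ivector` is an assumption):

```python
import numpy as np

def append_ivector(frames, ivector):
    """Auxiliary-feature adaptation sketch: concatenate a per-speaker
    i-vector to every acoustic frame of that speaker.

    frames:  (n_frames, feat_dim) acoustic features for one speaker
    ivector: (ivec_dim,) speaker identity vector
    """
    tiled = np.tile(ivector, (frames.shape[0], 1))  # same vector per frame
    return np.concatenate([frames, tiled], axis=1)
```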
{ "cite_N": [ "@cite_0", "@cite_28", "@cite_4", "@cite_17" ], "mid": [ "", "2079623482", "2406264770", "2015633636" ], "abstract": [ "", "We propose to adapt deep neural network (DNN) acoustic models to a target speaker by supplying speaker identity vectors (i-vectors) as input features to the network in parallel with the regular acoustic features for ASR. For both training and test, the i-vector for a given speaker is concatenated to every frame belonging to that speaker and changes across different speakers. Experimental results on a Switchboard 300 hours corpus show that DNNs trained on speaker independent features and i-vectors achieve a 10 relative improvement in word error rate (WER) over networks trained on speaker independent features only. These networks are comparable in performance to DNNs trained on speaker-adapted features (with VTLN and FMLLR) with the advantage that only one decoding pass is needed. Furthermore, networks trained on speaker-adapted features and i-vectors achieve a 5-6 relative improvement in WER after hessian-free sequence training over networks trained on speaker-adapted features only.", "Long Short-Term Memory (LSTM) is a particular type of recurrent neural network (RNN) that can model long term temporal dynamics. Recently it has been shown that LSTM-RNNs can achieve higher recognition accuracy than deep feed-forword neural networks (DNNs) in acoustic modelling. However, speaker adaption for LSTM-RNN based acoustic models has not been well investigated. In this paper, we study the LSTM-RNN speaker-aware training that incorporates the speaker information during model training to normalise the speaker variability. We first present several speaker-aware training architectures, and then empirically evaluate three types of speaker representation: I-vectors, bottleneck speaker vectors and speaking rate. Furthermore, to factorize the variability in the acoustic signals caused by speakers and phonemes respectively, we investigate the speaker-aware and phone-aware joint training under the framework of multi-task learning. In AMI meeting speech transcription task, speaker-aware training of LSTM-RNNs reduces word error rates by 6.5 relative to a very strong LSTM-RNN baseline, which uses FMLLR features.", "State of the art speaker recognition systems are based on the i-vector representation of speech segments. In this paper we show how this representation can be used to perform blind speaker adaptation of hybrid DNN-HMM speech recognition system and we report excellent results on a French language audio transcription task. The implemenation is very simple. An audio file is first diarized and each speaker cluster is represented by an i-vector. Acoustic feature vectors are augmented by the corresponding i-vectors before being presented to the DNN. (The same i-vector is used for all acoustic feature vectors aligned with a given speaker.) This supplementary information improves the DNN's ability to discriminate between phonetic events in a speaker independent way without having to make any modification to the DNN training algorithms. We report results on the ETAPE 2011 transcription task, and show that i-vector based speaker adaptation is effective irrespective of whether cross-entropy or sequence training is used. For cross-entropy training, we obtained a word error rate (WER) reduction from 22.16 to 20.67 whereas for sequence training the WER reduces from 19.93 to 18.40 ." ] }
1707.06065
2737615485
Layer normalization is a recently introduced technique for normalizing the activities of neurons in deep neural networks to improve the training speed and stability. In this paper, we introduce a new layer normalization technique called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling in speech recognition. By dynamically generating the scaling and shifting parameters in layer normalization, DLN adapts neural acoustic models to the acoustic variability arising from various factors such as speakers, channel noises, and environments. Unlike other adaptive acoustic models, our proposed approach does not require additional adaptation data or speaker information such as i-vectors. Moreover, the model size is fixed as it dynamically generates adaptation parameters. We apply our proposed DLN to deep bidirectional LSTM acoustic models and evaluate them on two benchmark datasets for large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The experimental results show that our DLN improves neural acoustic models in terms of transcription accuracy by dynamically adapting to various speakers and environments.
On the other hand, model-based adaptation directly updates the parameters of the neural acoustic model using adaptation data. Liao @cite_3 investigated supervised and unsupervised speaker adaptation of DNN-based acoustic models and showed how L2 regularization toward the speaker-independent model improves generalization. In @cite_7 @cite_14 , a speaker-independent model was adapted to a specific speaker with speaker-dependent parameters at each hidden layer. These parameters were estimated from the adaptation data of each speaker and used to scale the hidden activations of the speaker-independent model. Model-based adaptation typically brings larger improvements than auxiliary feature-based adaptation, as shown in @cite_10 . However, adaptation data must be collected for each new speaker, and speaker-specific parameters must be estimated and maintained per speaker, which increases the model size.
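The per-layer activation scaling described above (learning speaker-dependent hidden unit contributions) can be sketched as re-scaling hidden activations with one amplitude per unit. The 2*sigmoid re-parameterization shown here is a commonly used choice that keeps each amplitude in (0, 2); the helper name is an assumption of this sketch.

```python
import numpy as np

def lhuc_scale(hidden, r):
    """Model-based adaptation sketch: re-scale hidden activations with
    speaker-dependent amplitude parameters r (one per hidden unit).
    amp = 2*sigmoid(r) constrains each amplitude to (0, 2); r = 0
    recovers the unadapted speaker-independent model."""
    amp = 2.0 / (1.0 + np.exp(-r))
    return amp * hidden
```

Only the vector r is estimated per speaker on adaptation data, which is why such methods need per-speaker parameters to be stored, as the paragraph above notes.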
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_7", "@cite_3" ], "mid": [ "", "2048526313", "2094147890", "2010362084" ], "abstract": [ "", "In acoustic modeling, speaker adaptive training (SAT) has been a long-standing technique for the traditional Gaussian mixture models (GMMs). Acoustic models trained with SAT become independent of training speakers and generalize better to unseen testing speakers. This paper ports the idea of SAT to deep neural networks (DNNs), and proposes a framework to perform feature-space SAT for DNNs. Using i-vectors as speaker representations, our framework learns an adaptation neural network to derive speaker-normalized features. Speaker adaptive models are obtained by fine-tuning DNNs in such a feature space. This framework can be applied to various feature types and network structures, posing a very general SAT solution. In this paper, we fully investigate how to build SAT-DNN models effectively and efficiently. First, we study the optimal configurations of SAT-DNNs for large-scale acoustic modeling tasks. Then, after presenting detailed comparisons between SAT-DNNs and the existing DNN adaptation methods, we propose to combine SAT-DNNs and model-space DNN adaptation during decoding. Finally, to accelerate learning of SAT-DNNs, a simple yet effective strategy, frame skipping, is employed to reduce the size of training data. Our experiments show that compared with a strong DNN baseline, the SAT-DNN model achieves 13.5 and 17.5 relative improvement on word error rates (WERs), without and with model-space adaptation applied respectively. Data reduction based on frame skipping results in 2 @math speed-up for SAT-DNN training, while causing negligible WER loss on the testing data.", "This paper proposes a simple yet effective model-based neural network speaker adaptation technique that learns speaker-specific hidden unit contributions given adaptation data, without requiring any form of speaker-adaptive training, or labelled adaptation data. An additional amplitude parameter is defined for each hidden unit; the amplitude parameters are tied for each speaker, and are learned using unsupervised adaptation. We conducted experiments on the TED talks data, as used in the International Workshop on Spoken Language Translation (IWSLT) evaluations. Our results indicate that the approach can reduce word error rates on standard IWSLT test sets by about 8–15 relative compared to unadapted systems, with a further reduction of 4–6 relative when combined with feature-space maximum likelihood linear regression (fMLLR). The approach can be employed in most existing feed-forward neural network architectures, and we report results using various hidden unit activation functions: sigmoid, maxout, and rectifying linear units (ReLU).", "There has been little work on examining how deep neural networks may be adapted to speakers for improved speech recognition accuracy. Past work has examined using a discriminatively trained affine transformation of the input features applied at a frame level or the re-training of the entire shallow network for a specific speaker. This work explores how deep neural networks may be adapted to speakers by re-training the input layer, the output layer or the entire network. We look at how L2 regularization using weight decay to the speaker independent model improves generalization. Other training factors are examined including the role momentum plays and stochastic mini-batch versus batch training. While improvements are significant for smaller networks, the largest show little gain from adaptation on a large vocabulary mobile speech recognition task." ] }
1707.05993
2739064080
The exponential growth of mobile data traffic is driving the deployment of dense wireless networks, which will not only impose heavy backhaul burdens, but also generate considerable power consumption. Introducing caches to the wireless network edge is a potential and cost-effective solution to address these challenges. In this paper, we will investigate the problem of minimizing the network power consumption of cache-enabled wireless networks, consisting of the base station (BS) and backhaul power consumption. The objective is to develop efficient algorithms that unify adaptive BS selection, backhaul content assignment and multicast beamforming, while taking account of user QoS requirements and backhaul capacity limitations. To address the NP-hardness of the network power minimization problem, we first propose a generalized layered group sparse beamforming (LGSBF) modeling framework, which helps to reveal the layered sparsity structure in the beamformers. By adopting the reweighted @math -norm technique, we further develop a convex approximation procedure for the LGSBF problem, followed by a three-stage iterative LGSBF framework to induce the desired sparsity structure in the beamformers. Simulation results validate the effectiveness of the proposed algorithm in reducing the network power consumption, and demonstrate that caching plays a more significant role in networks with higher user densities and less power-efficient backhaul links.
There is growing concern about energy efficiency in wireless networks. Previous works include transmit power minimization via coordinated beamforming @cite_53 @cite_32 @cite_12 @cite_22 and adaptive selection of active BSs @cite_1 @cite_11 @cite_5 @cite_39 . After introducing edge caches, similar approaches have been extended to cache-enabled wireless networks @cite_15 @cite_46 . With cell densification, backhaul power consumption will become a significant component of the total network power consumption @cite_35 . In @cite_23 , the energy efficiency of cache-aided networks was optimized under the assumption of constant transmit power for small-cell BSs and wireless backhaul nodes. In @cite_29 , caching content placement and multicast association were optimized to minimize the overall energy cost, but only the backhaul power of the macro BS was considered, not that of the small BSs. To minimize the network power consumption, the joint beamforming and backhaul data assignment problem was investigated in @cite_22 @cite_38 @cite_17 @cite_13 @cite_19 @cite_7 . However, a comprehensive treatment of traffic-dependent backhaul power consumption, active BS selection, multicast beamforming, and backhaul data assignment is still missing.
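The reweighted @math -norm technique mentioned above approximates a group-sparsity objective by iterating convex subproblems in which each BS's beamformer group receives a weight inversely proportional to its current norm, so nearly inactive BSs are pushed fully to zero and can be switched off. A minimal sketch of one reweighting step (the function name and the epsilon smoothing constant are assumptions of this sketch):

```python
import numpy as np

def reweighted_group_weights(beamformers, eps=1e-8):
    """One reweighting step of the reweighted l1-norm technique for
    group-sparse beamforming (sketch).

    beamformers: list of per-BS beamforming vectors (one group per BS)
    Returns weights for the next convex subproblem: groups with small
    norm get large weights, driving them toward exact zero."""
    norms = np.array([np.linalg.norm(v) for v in beamformers])
    return 1.0 / (norms + eps)
```

In a full algorithm these weights multiply the per-group norms in the next convex beamforming problem, and the loop repeats until the active-BS pattern stabilizes.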
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_22", "@cite_7", "@cite_46", "@cite_53", "@cite_29", "@cite_1", "@cite_32", "@cite_17", "@cite_39", "@cite_19", "@cite_23", "@cite_5", "@cite_15", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2065969250", "2082133464", "2004101329", "2213621529", "", "2108662631", "2321936408", "2112170521", "2131822606", "2547161317", "", "2212194410", "2612681729", "2167146754", "2081893537", "1928152501", "2079587612", "2125620840" ], "abstract": [ "In this paper, we investigate the impact of limited backhaul capacity on user scheduling in the context of a heterogeneous network comprising a macro base station overlaid with small-cells. Under a global proportional fairness (PF) criteria, we show that this limited capacity has a fundamental impact on user scheduling. When user association and channel allocation are given, like in the infinite-capacity backhaul case, the global PF user scheduling problem can be decomposed into a set of independent local PF user scheduling problems. However, unlike the case with infinite backhaul where the local PF scheme is equivalent to giving equal time to each user, a local PF scheme with finite backhaul can be of one of three types. We completely characterize these three types and the conditions under which to use them. The results show that a backhaul-aware scheduling scheme is simple to implement and necessary to obtain the best performance.", "Multicell cooperation has recently attracted tremendous attention because of its ability to eliminate intercell interference and increase spectral efficiency. However, the enormous amount of information being exchanged, including channel state information and user data, over backhaul links may deteriorate the network performance in a realistic system. 
This paper adopts a backhaul cost metric that considers the number of active directional cooperation links, which gives a first order measurement of the backhaul loading required in asymmetric Multiple-Input Multiple-Output (MIMO) cooperation. We focus on a downlink scenario for multi-antenna base stations and single-antenna mobile stations. The design problem is minimizing the number of active directional cooperation links and jointly optimizing the beamforming vectors among the cooperative BSs subject to signal-to-interference-and-noise-ratio (SINR) constraints at the mobile station. This problem is non-convex and solving it requires combinatorial search. A practical algorithm based on smooth approximation and semidefinite relaxation is proposed to solve the combinatorial problem efficiently. We show that semidefinite relaxation is tight with probability 1 in our algorithm and stationary convergence is guaranteed. Simulation results show the saving of backhaul cost and power consumption is notable compared with several baseline schemes and its effectiveness is demonstrated.", "This paper considers a downlink cloud radio access network (C-RAN) in which all the base-stations (BSs) are connected to a central computing cloud via digital backhaul links with finite capacities. Each user is associated with a user-centric cluster of BSs; the central processor shares the user's data with the BSs in the cluster, which then cooperatively serve the user through joint beamforming. Under this setup, this paper investigates the user scheduling, BS clustering, and beamforming design problem from a network utility maximization perspective. Differing from previous works, this paper explicitly considers the per-BS backhaul capacity constraints. We formulate the network utility maximization problem for the downlink C-RAN under two different models depending on whether the BS clustering for each user is dynamic or static over different user scheduling time slots. 
In the former case, the user-centric BS cluster is dynamically optimized for each scheduled user along with the beamforming vector in each time-frequency slot, whereas in the latter case, the user-centric BS cluster is fixed for each user and we jointly optimize the user scheduling and the beamforming vector to account for the backhaul constraints. In both cases, the nonconvex per-BS backhaul constraints are approximated using the reweighted l 1 -norm technique. This approximation allows us to reformulate the per-BS backhaul constraints into weighted per-BS power constraints and solve the weighted sum rate maximization problem through a generalized weighted minimum mean square error approach. This paper shows that the proposed dynamic clustering algorithm can achieve significant performance gain over existing naive clustering schemes. This paper also proposes two heuristic static clustering schemes that can already achieve a substantial portion of the gain.", "This paper presents a content-centric transmission design in a cloud radio access network by incorporating multicasting and caching. Users requesting the same content form a multicast group and are served by a same cluster of base stations (BSs) cooperatively. Each BS has a local cache, and it acquires the requested contents either from its local cache or from the central processor via backhaul links. We investigate the dynamic content-centric BS clustering and multicast beamforming with respect to both channel condition and caching status. We first formulate a mixed-integer nonlinear programming problem of minimizing the weighted sum of backhaul cost and transmit power under the quality-of-service constraint for each multicast group. Theoretical analysis reveals that all the BSs caching a requested content can be included in the BS cluster of this content, regardless of the channel conditions. Then, we reformulate an equivalent sparse multicast beamforming (SBF) problem. 
By adopting smoothed @math -norm approximation and other techniques, the SBF problem is transformed into the difference of convex programs and effectively solved using the convex-concave procedure algorithms. Simulation results demonstrate significant advantage of the proposed content-centric transmission. The effects of heuristic caching strategies are also evaluated.", "", "A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination-to enhance the sum rate-and limited inter-cluster coordination-to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.", "The landscape toward 5G wireless communication is currently unclear, and, despite the efforts of academia and industry in evolving traditional cellular networks, the enabling technology for 5G is still obscure. This paper puts forward a network paradigm toward next-generation cellular networks, targeting to satisfy the explosive demand for mobile data while minimizing energy expenditures. The paradigm builds on two principles; namely caching and multicast . 
On one hand, caching policies disperse popular content files at the wireless edge, e.g., pico-cells and femto-cells, hence shortening the distance between content and requester. On other hand, due to the broadcast nature of wireless medium, requests for identical files occurring at nearby times are aggregated and served through a common multicast stream. To better exploit the available cache space, caching policies are optimized based on multicast transmissions. We show that the multicast-aware caching problem is NP-hard and develop solutions with performance guarantees using randomized-rounding techniques. Trace-driven numerical results show that in the presence of massive demand for delay tolerant content, combining caching and multicast can indeed reduce energy costs. The gains over existing caching schemes are 19 when users tolerate delay of three minutes, increasing further with the steepness of content access pattern.", "In order to quantify the energy efficiency of a wireless network, the power consumption of the entire system needs to be captured. In this article, the necessary extensions with respect to existing performance evaluation frameworks are discussed. The most important addenda of the proposed energy efficiency evaluation framework (E3F) are a sophisticated power model for various base station types, as well as large-scale long-term traffic models. The BS power model maps the RF output power radiated at the antenna elements to the total supply power of a BS site. The proposed traffic model emulates the spatial distribution of the traffic demands over large geographical regions, including urban and rural areas, as well as temporal variations between peak and off-peak hours. Finally, the E3F is applied to quantify the energy efficiency of the downlink of a 3GPP LTE radio access network.", "In a cooperative multiple-antenna downlink cellular network, maximization of a concave function of user rates is considered. 
A new linear precoding technique called soft interference nulling (SIN) is proposed, which performs at least as well as zero-forcing (ZF) beamforming. All base stations share channel state information, but each user's message is only routed to those that participate in the user's coordination cluster. SIN precoding is particularly useful when clusters of limited sizes overlap in the network, in which case traditional techniques such as dirty paper coding or ZF do not directly apply. The SIN precoder is computed by solving a sequence of convex optimization problems. SIN under partial network coordination can outperform ZF under full network coordination at moderate SNRs. Under overlapping coordination clusters, SIN precoding achieves considerably higher throughput compared to myopic ZF, especially when the clusters are large.", "Caching at wireless access points is a promising approach to alleviate the backhaul burden in wireless networks. In this paper, we consider a cooperative wireless caching network where all the base stations (BSs) are connected to a central controller via backhaul links. In such a network, users can get the required data locally if they are cached at the BSs. Otherwise, the user data need to be assigned from the central controller to BSs via backhaul. In order to reduce the network cost, i.e., the back-haul cost and the transmit power cost, the data assignment for different BSs and the coordinated beamforming to serve different users need to be jointly designed. We formulate such a design problem as the minimization of the network cost, subject to the quality of service (QoS) constraint of each user and the transmit power constraint of each BS. This problem involves mixed-integer programming and is highly complicated. In order to provide an efficient solution, the connection between the data assignment and the sparsity-introducing norm is established. 
Low-complexity algorithms are then proposed to solve the joint optimization problem, which essentially decouple the data assignment and the transmit power minimization beamforming. Simulation results show that the proposed algorithms can effectively minimize the network cost and provide near optimal performance.", "", "The cloud radio access network (Cloud-RAN) has recently been proposed as one of the cost-effective and energy-efficient techniques for 5G wireless networks. By moving the signal processing functionality to a single baseband unit (BBU) pool, centralized signal processing and resource allocation are enabled in cloud-RAN, thereby providing the promise of improving the energy efficiency via effective network adaptation and interference management. In this paper, we propose a holistic sparse optimization framework to design green cloud-RAN by taking into consideration the power consumption of the fronthaul links, multicast services, as well as user admission control. Specifically, we first identify the sparsity structures in the solutions of both the network power minimization and user admission control problems, which call for adaptive remote radio head (RRH) selection and user admission. However, finding the optimal sparsity structures turns out to be NP-hard, with the coupled challenges of the @math -norm-based objective functions and the nonconvex quadratic QoS constraints due to multicast beamforming. In contrast to the previous works on convex but nonsmooth sparsity inducing approaches, e.g., the group sparse beamforming algorithm based on the mixed @math -norm relaxation, we adopt the nonconvex but smoothed @math -minimization ( @math @math $ -minimization problem from the SDR technique. 
We illustrate the effectiveness of the proposed algorithms with extensive simulations for network power minimization and user admission control in multicast cloud-RAN.", "Caching at the edge is a promising technique to cope with the increasing data demand in wireless networks. This paper analyzes the performance of cellular networks consisting of a tier of macro-cell wireless backhaul nodes overlaid with a tier of cache-aided small cells. We consider both static and dynamic association policies for content delivery to the user terminals and analyze their performance. In particular, we derive closed-form expressions for the area spectral efficiency and the energy efficiency, which are used to optimize relevant design parameters such as the density of cache-aided small cells and the storage size. By means of this approach, we are able to draw useful design insights for the deployment of highly performing cache-aided tiered networks.", "This paper discusses how energy consumption can be significantly reduced in mobile networks by introducing discontinuous transmission (DTX) on the base station side. By introducing DTX on the downlink, or cell DTX, we show that it is possible to achieve significant energy reductions in an LTE network. Cell DTX is most efficient when the traffic load is low in a cell, but even when realistic traffic statistics are considered the gains are impressive. The technology potential for a metropolitan area is shown to be 90% reduced energy consumption compared to no use of cell DTX. The paper also discusses different drivers for the increased focus on energy-efficient network operation and also provides insights on the impact of cell DTX from a life cycle assessment perspective.", "Consider media streaming in MIMO interference networks whereby multiple base stations (BS) simultaneously deliver media to their associated users using fixed data rates. The performance is fundamentally limited by the cross-link interference. 
We propose a cache-induced opportunistic cooperative MIMO (CoMP) for interference mitigation. By caching a portion of the media files, the BSs opportunistically employ CoMP to transform the cross-link interference into spatial multiplexing gain. We study a mixed-timescale optimization of MIMO precoding and cache control to minimize the transmit power under the rate constraint. The cache control is to create more CoMP opportunities and is adaptive to the long-term popularity of the media files. The precoding is to guarantee the rate requirement and is adaptive to the channel state information and cache state at the BSs. The joint stochastic optimization problem is decomposed into a short-term precoding and a long-term cache control problem. We propose a precoding algorithm which converges to a stationary point of the short-term problem. Based on this, we exploit the hidden convexity of the long-term problem and propose a low complexity and robust solution using stochastic subgradient. The solution has significant gains over various baselines and does not require explicit knowledge of the media popularity.", "A cloud radio access network (Cloud-RAN) is a network architecture that holds the promise of meeting the explosive growth of mobile data traffic. In this architecture, all the baseband signal processing is shifted to a single baseband unit (BBU) pool, which enables efficient resource allocation and interference management. Meanwhile, conventional powerful base stations can be replaced by low-cost low-power remote radio heads (RRHs), producing a green and low-cost infrastructure. However, as all the RRHs need to be connected to the BBU pool through optical transport links, the transport network power consumption becomes significant. In this paper, we propose a new framework to design a green Cloud-RAN, which is formulated as a joint RRH selection and power minimization beamforming problem. 
To efficiently solve this problem, we first propose a greedy selection algorithm, which is shown to provide near-optimal performance. To further reduce the complexity, a novel group sparse beamforming method is proposed by inducing the group-sparsity of beamformers using the weighted -norm minimization, where the group sparsity pattern indicates those RRHs that can be switched off. Simulation results will show that the proposed algorithms significantly reduce the network power consumption and demonstrate the importance of considering the transport link power consumption.", "We consider the interference management problem in a multicell MIMO heterogeneous network. Within each cell there is a large number of distributed micro pico base stations (BSs) that can be potentially coordinated for joint transmission. To reduce coordination overhead, we consider user-centric BS clustering so that each user is served by only a small number of (potentially overlapping) BSs. Thus, given the channel state information, our objective is to jointly design the BS clustering and the linear beamformers for all BSs in the network. In this paper, we formulate this problem from a sparse optimization perspective, and propose an efficient algorithm that is based on iteratively solving a sequence of group LASSO problems. A novel feature of the proposed algorithm is that it performs BS clustering and beamformer design jointly rather than separately as is done in the existing approaches for partial coordinated transmission. Moreover, the cluster size can be controlled by adjusting a single penalty parameter in the nonsmooth regularized utility function. 
The convergence of the proposed algorithm (to a stationary solution) is guaranteed, and its effectiveness is demonstrated via extensive simulation.", "The explosive development of Information and Communication Technology (ICT) has significantly enlarged both the energy demands and the CO2 emissions, and consequently makes the energy crisis and global warming problems worse. However, mobile networks, as the main force of the ICT field, are currently focusing on the capacity, variety and stability of communication services, without serious concern for energy efficiency. The escalating energy costs and environmental concerns have already created an urgent need for more energy-efficient \"green\" wireless communications. In this paper, we survey and discuss various remarkable techniques toward green mobile networks to date, mainly targeting mobile cellular networks. We also summarize the current research projects related to green mobile networks, along with a taxonomy of energy-efficiency metrics. We finally discuss and elaborate on future research opportunities and design challenges for green mobile networks." ] }
1707.05993
2739064080
The exponential growth of mobile data traffic is driving the deployment of dense wireless networks, which will not only impose heavy backhaul burdens, but also generate considerable power consumption. Introducing caches to the wireless network edge is a potential and cost-effective solution to address these challenges. In this paper, we will investigate the problem of minimizing the network power consumption of cache-enabled wireless networks, consisting of the base station (BS) and backhaul power consumption. The objective is to develop efficient algorithms that unify adaptive BS selection, backhaul content assignment and multicast beamforming, while taking account of user QoS requirements and backhaul capacity limitations. To address the NP-hardness of the network power minimization problem, we first propose a generalized layered group sparse beamforming (LGSBF) modeling framework, which helps to reveal the layered sparsity structure in the beamformers. By adopting the reweighted @math -norm technique, we further develop a convex approximation procedure for the LGSBF problem, followed by a three-stage iterative LGSBF framework to induce the desired sparsity structure in the beamformers. Simulation results validate the effectiveness of the proposed algorithm in reducing the network power consumption, and demonstrate that caching plays a more significant role in networks with higher user densities and less power-efficient backhaul links.
There are some preliminary studies on developing sparsity-based approaches for designing wireless networks. Inspired by the success of sparse signal processing techniques such as compressed sensing @cite_30 @cite_40 , more structured sparsity patterns have been exploited, including group sparsity @cite_31 , overlapping group sparsity @cite_33 , and layered group sparsity @cite_36 @cite_50 , which yield efficient algorithms. Recent years have witnessed an increasing prevalence of applying sparse optimization to design wireless networks, such as the individual sparsity-inducing norm applied for user admission in @cite_19 and link admission control in @cite_21 , and the group sparsity-inducing norm applied for active remote radio head selection of Cloud-RAN in @cite_13 . Sparse optimization is further applied to joint beamforming and backhaul data assignment design in caching networks @cite_17 @cite_7 , which may provide potential solutions for 5G wireless networks. As will be revealed in this paper, network energy minimization in cache-enabled wireless networks involves more complicated sparsity structures, and thus more thorough investigations will be needed.
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_7", "@cite_36", "@cite_21", "@cite_19", "@cite_40", "@cite_50", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "", "1970554427", "2213621529", "2949385534", "2963359247", "2212194410", "2129638195", "2010430916", "2138019504", "1928152501", "2547161317" ], "abstract": [ "", "We propose a new penalty function which, when used as regularization for empirical risk minimization procedures, leads to sparse estimators. The support of the sparse vector is typically a union of potentially overlapping groups of co-variates defined a priori, or a set of covariates which tend to be connected to each other when a graph of covariates is given. We study theoretical properties of the estimator, and illustrate its behavior on simulated and breast cancer gene expression data.", "This paper presents a content-centric transmission design in a cloud radio access network by incorporating multicasting and caching. Users requesting the same content form a multicast group and are served by a same cluster of base stations (BSs) cooperatively. Each BS has a local cache, and it acquires the requested contents either from its local cache or from the central processor via backhaul links. We investigate the dynamic content-centric BS clustering and multicast beamforming with respect to both channel condition and caching status. We first formulate a mixed-integer nonlinear programming problem of minimizing the weighted sum of backhaul cost and transmit power under the quality-of-service constraint for each multicast group. Theoretical analysis reveals that all the BSs caching a requested content can be included in the BS cluster of this content, regardless of the channel conditions. Then, we reformulate an equivalent sparse multicast beamforming (SBF) problem. 
By adopting smoothed @math -norm approximation and other techniques, the SBF problem is transformed into the difference of convex programs and effectively solved using the convex-concave procedure algorithms. Simulation results demonstrate significant advantage of the proposed content-centric transmission. The effects of heuristic caching strategies are also evaluated.", "Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique for simultaneously discovering group and within-group sparse patterns by using a combination of the @math and @math norms. However, in large-scale applications, the complexity of the regularizers entails great computational challenges. In this paper, we propose a novel Two-Layer Feature REduction method (TLFre) for SGL via a decomposition of its dual feasible set. The two-layer reduction is able to quickly identify the inactive groups and the inactive features, respectively, which are guaranteed to be absent from the sparse representation and can be removed from the optimization. Existing feature reduction methods are only applicable for sparse models with one sparsity-inducing regularizer. To our best knowledge, TLFre is the first one that is capable of dealing with multiple sparsity-inducing regularizers. Moreover, TLFre has a very low computational cost and can be integrated with any existing solvers. We also develop a screening method---called DPC (DecomPosition of Convex set)---for the nonnegative Lasso problem. Experiments on both synthetic and real data sets show that TLFre and DPC improve the efficiency of SGL and nonnegative Lasso by several orders of magnitude.", "In an interference limited network, joint power and admission control (JPAC) aims at supporting a maximum number of links at their specified signal-to-interference-plus-noise ratio (SINR) targets while using minimum total transmission power. Various convex approximation deflation approaches have been developed for the JPAC problem. 
In this paper, we propose an effective polynomial time non-convex approximation deflation approach for solving the problem. The approach is based on the non-convex ℓq (0 < q < 1) approximation of an equivalent sparse ℓ0 reformulation of the JPAC problem. We show that, for any instance of the JPAC problem, there exists a q̄ ∈ (0,1) such that it can be exactly solved by solving its ℓq approximation problem with any q ∈ (0, q̄). We also show that finding the global solution of the ℓq approximation problem is NP-hard. Then, we propose a potential reduction interior-point algorithm, which can return an ε-KKT solution of the NP-hard ℓq approximation problem in polynomial time. The returned solution can be used to check the simultaneous supportability of all links in the network and to guide an iterative link removal procedure, resulting in the polynomial time non-convex approximation deflation approach for the JPAC problem. Numerical simulations show that the proposed approach outperforms the existing convex approximation approaches in terms of the number of supported links and the total transmission power, particularly exhibiting a quite good performance in selecting which subset of links to support.
Specifically, we first identify the sparsity structures in the solutions of both the network power minimization and user admission control problems, which call for adaptive remote radio head (RRH) selection and user admission. However, finding the optimal sparsity structures turns out to be NP-hard, with the coupled challenges of the @math -norm-based objective functions and the nonconvex quadratic QoS constraints due to multicast beamforming. In contrast to the previous works on convex but nonsmooth sparsity inducing approaches, e.g., the group sparse beamforming algorithm based on the mixed @math -norm relaxation, we adopt the nonconvex but smoothed @math -minimization ( @math @math $ -minimization problem from the SDR technique. We illustrate the effectiveness of the proposed algorithms with extensive simulations for network power minimization and user admission control in multicast cloud-RAN.", "Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints y_k = ⟨f♯, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f♯‖_ℓ2 ≤ C_p · R · (K/log N)^(−r), r = 1/p − 1/2. 
There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.", "We present a multi-layer group sparse coding framework for concurrent image classification and annotation. By leveraging the dependency between image class label and tags, we introduce a multi-layer group sparse structure of the reconstruction coefficients. Such structure fully encodes the mutual dependency between the class label, which describes the image content as a whole, and tags, which describe the components of the image content. Then we propose a multi-layer group based tag propagation method, which combines the class label and subgroups of instances with similar tag distribution to annotate test images. Moreover, we extend our multi-layer group sparse coding in the Reproducing Kernel Hilbert Space (RKHS) which captures the nonlinearity of features, and further improves the performance of image classification and annotation. Experimental results on the LabelMe, UIUC-Sport and NUS-WIDE-Object databases show that our method outperforms the baseline methods, and achieves excellent performance in both image classification and annotation tasks.", "We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. 
The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.", "A cloud radio access network (Cloud-RAN) is a network architecture that holds the promise of meeting the explosive growth of mobile data traffic. In this architecture, all the baseband signal processing is shifted to a single baseband unit (BBU) pool, which enables efficient resource allocation and interference management. Meanwhile, conventional powerful base stations can be replaced by low-cost low-power remote radio heads (RRHs), producing a green and low-cost infrastructure. However, as all the RRHs need to be connected to the BBU pool through optical transport links, the transport network power consumption becomes significant. In this paper, we propose a new framework to design a green Cloud-RAN, which is formulated as a joint RRH selection and power minimization beamforming problem. To efficiently solve this problem, we first propose a greedy selection algorithm, which is shown to provide near-optimal performance. To further reduce the complexity, a novel group sparse beamforming method is proposed by inducing the group-sparsity of beamformers using the weighted -norm minimization, where the group sparsity pattern indicates those RRHs that can be switched off. 
Simulation results will show that the proposed algorithms significantly reduce the network power consumption and demonstrate the importance of considering the transport link power consumption.", "Caching at wireless access points is a promising approach to alleviate the backhaul burden in wireless networks. In this paper, we consider a cooperative wireless caching network where all the base stations (BSs) are connected to a central controller via backhaul links. In such a network, users can get the required data locally if they are cached at the BSs. Otherwise, the user data need to be assigned from the central controller to BSs via backhaul. In order to reduce the network cost, i.e., the back-haul cost and the transmit power cost, the data assignment for different BSs and the coordinated beamforming to serve different users need to be jointly designed. We formulate such a design problem as the minimization of the network cost, subject to the quality of service (QoS) constraint of each user and the transmit power constraint of each BS. This problem involves mixed-integer programming and is highly complicated. In order to provide an efficient solution, the connection between the data assignment and the sparsity-introducing norm is established. Low-complexity algorithms are then proposed to solve the joint optimization problem, which essentially decouple the data assignment and the transmit power minimization beamforming. Simulation results show that the proposed algorithms can effectively minimize the network cost and provide near optimal performance." ] }
1707.05938
2739099888
We propose an Ensemble of Robust Constrained Local Models for alignment of faces in the presence of significant occlusions and of any unknown pose and expression. To account for partial occlusions we introduce Robust Constrained Local Models, which comprise a deformable shape and local landmark appearance model and reason over binary occlusion labels. Our occlusion reasoning proceeds by a hypothesize-and-test search over occlusion labels. Hypotheses are generated by Constrained Local Model based shape fitting over randomly sampled subsets of landmark detector responses and are evaluated by the quality of face alignment. To span the entire range of facial pose and expression variations we adopt an ensemble of independent Robust Constrained Local Models to search over a discretized representation of pose and expression. We perform extensive evaluation on a large number of face images, both occluded and unoccluded. We find that our face alignment system trained entirely on facial images captured "in-the-lab" exhibits a high degree of generalization to facial images captured "in-the-wild". Our results are accurate and stable over a wide spectrum of occlusions, pose and expression variations, resulting in excellent performance on many real-world face datasets.
Active Shape Models (ASM) @cite_20 and Active Appearance Models (AAM) @cite_22 are the earliest and most widely-used approaches for shape fitting. In ASM, landmarks are found along the profile normals of a given shape, the shape is updated to fit these landmarks, and the process is iterated until convergence. AAM, a generative approach, finds shape and appearance parameters which minimize the appearance error between an input image and generated appearance instances via optimization. Building upon the AAM, many algorithms have been proposed @cite_10 @cite_31 @cite_38 @cite_5 @cite_6 to address known problems like pose variations, illumination variations and image resolution. However, due to their poor generalization capability, AAMs are prone to fail when the input image differs from the training set @cite_17 . Furthermore, while AAM based approaches @cite_5 @cite_1 using multiple shape models to span the large range of possible facial poses have been proposed, they still require pose estimation to select the right shape model.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_1", "@cite_6", "@cite_5", "@cite_31", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "", "2029835507", "2034116383", "2153353865", "2128958079", "2152826865", "", "" ], "abstract": [ "", "", "We demonstrate that a small number of 2D linear statistical models are sufficient to capture the shape and appearance of a face from a wide range of viewpoints. Such models can be used to estimate head orientation and track faces through large angles. Given multiple images of the same face we can learn a coupled model describing the relationship between the frontal appearance and the profile of a face. This relationship can be used to predict new views of a face seen from one view and to constrain search algorithms which seek to locate a face in multiple views simultaneously.", "Most image registration problems are formulated in an asymmetric fashion. Given a pair of images, one is implicitly or explicitly regarded as a template and warped onto the other to match as well as possible. In this paper, we focus on this seemingly arbitrary choice of the roles and reveal how it may lead to biased warp estimates in the presence of relative scaling. We present a principled way of selecting the template and explain why only the correct asymmetric form, with the potential inclusion of a blurring step, can yield an unbiased estimator. We validate our analysis in the domain of model-based face tracking. We show how the usual active appearance model (AAM) formulation overlooks the asymmetry issue, causing the fitting accuracy to degrade quickly when the observed objects are smaller than their model. We formulate a novel, \"resolution-aware fitting\" (RAF) algorithm that respects the asymmetry and incorporates an explicit model of the blur caused by the camera's sensing elements into the fitting formulation. We compare the RAF algorithm against a state-of-the-art tracker across a variety of resolutions and AAM complexity levels. 
Experimental results show that RAF significantly improves the estimation accuracy of both shape and appearance parameters when fitting to low-resolution data. Recognizing and accounting for the asymmetry of image registration leads to tangible accuracy improvements in analyzing low-resolution imagery.", "The active appearance model (AAM) is a well-known model that can represent a non-rigid object effectively. However, the fitting result is often unsatisfactory when an input image deviates from the training images due to its fixed shape and appearance model. To obtain more robust AAM fitting, we propose a tensor-based AAM that can handle a variety of subjects, poses, expressions, and illuminations in the tensor algebra framework, which consists of an image tensor and a model tensor. The image tensor estimates image variations such as pose, expression, and illumination of the input image using two different variation estimation techniques: discrete and continuous variation estimation. The model tensor generates variation-specific AAM basis vectors from the estimated image variations, which leads to more accurate fitting results. To validate the usefulness of the tensor-based AAM, we performed variation-robust face recognition using the tensor-based AAM fitting results. To do so, we propose indirect AAM feature transformation. Experimental results show that tensor-based AAM with continuous variation estimation outperforms that with discrete variation estimation and conventional AAM in terms of the average fitting error and the face recognition rate.", "Active appearance models (AAMs) are generative models commonly used to model faces. Another closely related type of face model is the 3D morphable model (3DMM). Although AAMs are 2D, they can still be used to model 3D phenomena such as faces moving across pose. We first study the representational power of AAMs and show that they can model anything a 3DMM can, but possibly require more shape parameters. 
We quantify the number of additional parameters required and show that 2D AAMs can generate model instances that are not possible with the equivalent 3DMM. We proceed to describe how a non-rigid structure-from-motion algorithm can be used to construct the corresponding 3D shape modes of a 2D AAM. We then show how the 3D modes can be used to constrain the AAM so that it can only generate model instances that can also be generated with the 3D modes. Finally, we propose a real-time algorithm for fitting the AAM while enforcing the constraints, creating what we call a \"combined 2D+3D AAM\".", "We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.", "", "" ] }
1707.05938
2739099888
We propose an Ensemble of Robust Constrained Local Models for alignment of faces in the presence of significant occlusions and of any unknown pose and expression. To account for partial occlusions we introduce Robust Constrained Local Models, which comprise a deformable shape and local landmark appearance model and reason over binary occlusion labels. Our occlusion reasoning proceeds by a hypothesize-and-test search over occlusion labels. Hypotheses are generated by Constrained Local Model based shape fitting over randomly sampled subsets of landmark detector responses and are evaluated by the quality of face alignment. To span the entire range of facial pose and expression variations we adopt an ensemble of independent Robust Constrained Local Models to search over a discretized representation of pose and expression. We perform extensive evaluation on a large number of face images, both occluded and unoccluded. We find that our face alignment system trained entirely on facial images captured "in-the-lab" exhibits a high degree of generalization to facial images captured "in-the-wild". Our results are accurate and stable over a wide spectrum of occlusions, pose and expression variations, resulting in excellent performance on many real-world face datasets.
Recently, a few face alignment methods have been proposed that are robust to occlusions. Ghiasi and Fowlkes @cite_13 proposed a CLM based approach to account for occlusions at the learning stage by simulating facial occlusions. Burgos-Artizzu et al. @cite_7 proposed a shape regression based approach that is explicitly designed to be robust to occlusions when facial landmark occlusion labels are available at training. These approaches require occluded landmarks, real or artificially simulated, for training their models in a purely discriminative manner. Our approach, in contrast, does not require landmark occlusion labels (which are usually unavailable, especially for the dense landmarking schemes used in this paper) for training. We employ a generative shape model at inference and account for outlier landmark detections caused by occlusions, without being trained on occluded faces (real or simulated).
{ "cite_N": [ "@cite_13", "@cite_7" ], "mid": [ "2005264304", "2111372597" ], "abstract": [ "The presence of occluders significantly impacts performance of systems for object recognition. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and keypoint localization that explicitly models occlusions of parts. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for keypoint localization including challenging sets featuring significant occlusion. We find that the addition of an explicit model of occlusion yields a system that outperforms existing approaches in keypoint localization accuracy.", "Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. 
RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall." ] }
1707.05909
2560667801
In sensing applications, sensors cannot always measure the latent quantity of interest at the required resolution; sometimes they can only acquire a blurred version of it due to the sensor's transfer function. To recover latent signals when only noisy mixed measurements of the signal are available, we propose the Gaussian process mixture of measurements (GPMM), which models the latent signal as a Gaussian process (GP) and allows us to perform Bayesian inference on such a signal conditioned on a set of noisy mixtures of measurements. We describe how to train GPMM, that is, to find the hyperparameters of the GP and the mixing weights, and how to perform inference on the latent signal under GPMM; additionally, we identify the solution to the underdetermined linear system resulting from a sensing application as a particular case of GPMM. The proposed model is validated in the recovery of three signals: a smooth synthetic signal, a real-world heart-rate time series and a step function, where GPMM outperformed the standard GP in terms of estimation error, uncertainty representation, and recovery of the spectral content of the latent signal.
The GP framework is well suited to the sensing setting in fig. , since modelling the latent signal as a GP results in the posterior distribution of the latent process (given the mixture of measurements) being Gaussian as well. Previous GP-based models for convolution processes @cite_3 @cite_12 @cite_9 @cite_11 model signals as the convolution between a continuous-time filter and a white-noise process, which is unsuitable for representing the latent process in the sensing application, where the spatial correlation of the process is fundamental. Conversely, @cite_8 allows GPs as latent functions, but addresses the multi-output case, where the aim is to perform inference on the outputs rather than on the latent processes. Furthermore, these methods consider continuous-time convolution filters, which is computationally demanding and requires, e.g., variational approximations @cite_2 @cite_14 . Consequently, closed-form and computationally-efficient Bayesian reconstruction of the latent process is still an open problem in sensing applications.
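The closed-form conditioning that makes the GP attractive in this setting can be illustrated with a minimal sketch. This is not the paper's GPMM implementation; the squared-exponential kernel, the mixing matrix `W`, and all names and parameter values below are illustrative assumptions. The point it demonstrates: if the latent signal f is a GP and the measurements are linear in f, i.e. y = W f(X) + noise, then the posterior over f is Gaussian and available in closed form, with no variational approximation needed.

```python
# Illustrative sketch (assumed setup, not the GPMM of the paper):
# recover a latent signal f from noisy *mixed* measurements
# y = W f(X) + eps, where each row of W holds known mixing weights.
import numpy as np

def rbf(a, b, ell=0.2, sf=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def posterior_latent(X, W, y, Xs, noise=0.1):
    """Posterior mean/variance of f at test inputs Xs, given y = W f(X) + eps."""
    K = rbf(X, X)                                  # prior covariance of f at X
    Ks = rbf(Xs, X)                                # cross-covariance test/train
    Kss = rbf(Xs, Xs)
    S = W @ K @ W.T + noise**2 * np.eye(len(y))    # marginal covariance of y
    A = Ks @ W.T @ np.linalg.solve(S, np.eye(len(y)))
    mean = A @ y                                   # posterior mean of f(Xs)
    cov = Kss - A @ (W @ Ks.T)                     # posterior covariance
    return mean, np.diag(cov)

X = np.linspace(0.0, 1.0, 50)
f_true = np.sin(2 * np.pi * X)
W = np.vstack([np.eye(50)[::5],                    # subsampled direct readings
               np.ones((1, 50)) / 50.0])           # one blurred (averaged) reading
rng = np.random.default_rng(0)
y = W @ f_true + 0.05 * rng.normal(size=W.shape[0])
mean, var = posterior_latent(X, W, y, X)           # latent signal recovered
```

Because the observation model is linear-Gaussian, both the mean and the pointwise uncertainty come out of a single linear solve; this is the kind of closed-form reconstruction the paragraph above contrasts with the variational treatments required by continuous-time convolution filters.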
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_3", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2030656756", "2143672530", "2167986580", "2183406623", "137285897", "", "1534349507" ], "abstract": [ "In this paper, we propose a Gaussian process (GP) model for analysis of nonlinear time series. Formulation of our model is based on the consideration that the observed data are functions of latent variables, with the associated mapping between observations and latent representations modeled through GP priors. In addition, to capture the temporal dynamics in the modeled data, we assume that subsequent latent representations depend on each other on the basis of a hidden Markov prior imposed over them. Derivation of our model is performed by marginalizing out the model parameters in closed form using GP priors for observation mappings, and appropriate stick-breaking priors for the latent variable (Markovian) dynamics. This way, we eventually obtain a nonparametric Bayesian model for dynamical systems that accounts for uncertainty in the modeled data. We provide efficient inference algorithms for our model on the basis of a truncated variational Bayesian approximation. We demonstrate the efficacy of our approach considering a number of applications dealing with real-world data, and compare it with the related state-of-the-art approaches.", "We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. 
We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network.", "Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs.", "We introduce the Gaussian Process Convolution Model (GPCM), a two-stage non-parametric generative procedure to model stationary signals as the convolution between a continuous-time white-noise process and a continuous-time linear filter drawn from Gaussian process. The GPCM is a continuous-time nonparametric-window moving average process and, conditionally, is itself a Gaussian process with a nonparametric kernel defined in a probabilistic fashion. The generative model can be equivalently considered in the frequency domain, where the power spectral density of the signal is specified using a Gaussian process. One of the main contributions of the paper is to develop a novel variational free-energy approach based on inter-domain inducing variables that efficiently learns the continuous-time linear filter and infers the driving white-noise process. In turn, this scheme provides closed-form probabilistic estimates of the covariance kernel and the noise-free signal both in denoising and prediction scenarios. Additionally, the variational inference procedure provides closed-form expressions for the approximate posterior of the spectral density given the observed data, leading to new Bayesian nonparametric approaches to spectrum estimation. 
The proposed GPCM is validated using synthetic and real-world signals.", "Sparse Gaussian process methods that use inducing variables require the selection of the inducing inputs and the kernel hyperparameters. We introduce a variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood. The key property of this formulation is that the inducing inputs are defined to be variational parameters which are selected by minimizing the Kullback-Leibler divergence between the variational distribution and the exact posterior distribution over the latent function values. We apply this technique to regression and we compare it with other approaches in the literature.", "", "A continuous spatial model can be constructed by convolving a very simple, perhaps independent, process with a kernel or point spread function. This approach for constructing a spatial process offers a number of advantages over specification through a spatial covariogram. In particular, this process convolution specification leads to computational simplifications and easily extends beyond simple stationary models. This paper uses process convolution models to build space and space-time models that are flexible and able to accommodate large amounts of data. Data from environmental monitoring is considered." ] }
1707.05821
2738832561
We propose an approach to discover class-specific pixels for the weakly-supervised semantic segmentation task. We show that properly combining saliency and attention maps allows us to obtain reliable cues capable of significantly boosting the performance. First, we propose a simple yet powerful hierarchical approach to discover the class-agnostic salient regions, obtained using a salient object detector, which would otherwise be ignored. Second, we use fully convolutional attention maps to reliably localize the class-specific regions in a given image. We combine these two cues to discover class-specific pixels, which are then used as an approximate ground truth for training a CNN. While solving the weakly supervised semantic segmentation task, we ensure that the image-level classification task is also solved, in order to force the CNN to assign at least one pixel to each object present in the image. Experimentally, on the PASCAL VOC12 val and test sets, we obtain mIoU scores of 60.8% and 61.9%, achieving performance gains of 5.1% and 5.2% over the published state-of-the-art results. The code is made publicly available.
@cite_44 employed Expectation-Maximization to solve weakly-supervised semantic segmentation using annotated bounding boxes and image labels. Similarly, @cite_0 also relied on an EM-inspired approach; however, they used image labels and saliency masks for supervision. @cite_41 makes use of scribbles to train the segmentation network, where scribbles provide a few pixels for which the ground-truth labels are known. Similarly, @cite_26 combines annotated points with objectness priors as the supervisory signal. Some approaches employ only image labels, such as @cite_24 and @cite_33 : the former framed the segmentation problem as a constrained optimization problem, whereas the latter posed it as a multiple instance learning problem. @cite_11 proposed a simple-to-complex framework where a network is first trained using simple images (a single object category), followed by training over complex ones (multiple objects). @cite_14 proposed to link semantic segmentation and object localization with a proposal selection module, where the generated proposals come from MCG @cite_8 . Kolesnikov and Lampert @cite_43 proposed multiple loss functions that can be combined to improve training. Recently, @cite_45 proposed an adversarial erasing scheme in order to obtain better attention maps, which in turn provide better cues for training.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_33", "@cite_8", "@cite_41", "@cite_44", "@cite_0", "@cite_24", "@cite_43", "@cite_45", "@cite_11" ], "mid": [ "2949145768", "2520746254", "1945608308", "1991367009", "", "", "2949561789", "2952004933", "2951358285", "", "" ], "abstract": [ "The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain, image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9% mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.", "Training neural networks for semantic segmentation is data hungry. Meanwhile annotating a large number of pixel-level segmentation masks needs enormous human effort. In this paper, we propose a framework with only image-level supervision. It unifies semantic segmentation and object localization with important proposal aggregation and selection modules. They greatly reduce the notorious error accumulation problem that commonly arises in weakly supervised learning. Our proposed training algorithm progressively improves segmentation performance with augmented feedback in iterations. 
Our method achieves decent results on the PASCAL VOC 2012 segmentation data, outperforming previous image-level supervised methods by a large margin.", "We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.", "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. 
Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "", "", "We consider the task of learning a classifier for semantic segmentation using weak supervision in the form of image labels which specify the object classes present in the image. Our method uses deep convolutional neural networks (CNNs) and adopts an Expectation-Maximization (EM) based approach. We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step) and (iii) the parameter update (M-step). We show that saliency and attention maps, our bottom-up and top-down cues respectively, of simple images provide very good cues to learn an initialization for the EM-based algorithm. Intuitively, we show that before trying to learn to segment complex images, it is much easier and highly effective to first learn to segment a set of simple images and then move towards the complex ones. Next, in order to update the parameters, we propose minimizing the combination of the standard softmax loss and the KL divergence between the true latent posterior and the likelihood given by the CNN. We argue that this combination is more robust to wrong predictions made by the expectation step of the EM method. We support this argument with empirical and visual results. Extensive experiments and discussions show that: (i) our method is very simple and intuitive; (ii) requires only image-level labels; and (iii) consistently outperforms other weakly-supervised state-of-the-art methods with a very high margin on the PASCAL VOC 2012 dataset.", "We present an approach to learn a dense pixel-wise labeling from image-level tags. 
Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.", "We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations.", "", "" ] }
1707.05615
2758506095
We propose a novel formulation of robotic pick and place as a deep reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic manipulation frame the problem in terms of low level states and actions, we propose a more abstract formulation. In this formulation, actions are target reach poses for the hand and states are a history of such reaches. We show this approach can solve a challenging class of pick-place and regrasping problems where the exact geometry of the objects to be handled is unknown. The only information our method requires is: 1) the sensor perception available to the robot at test time; 2) prior knowledge of the general class of objects for which the system was trained. We evaluate our method using objects belonging to two different categories, mugs and bottles, both in simulation and on real hardware. Results show a major improvement relative to a shape primitives baseline.
One early approach to manipulation of unknown objects is based on shape primitives. Miller et al. explored this in the context of grasp synthesis @cite_21 . Others have extended these ideas to segmentation and manipulation problems @cite_4 @cite_3 @cite_19 . These methods have difficulty when the objects are not approximately cylindrical or cuboid, or when they cannot be easily segmented. Our method performs much better than a cylinder-based shape-primitives method, even when the objects involved (bottles and mugs) are nearly cylindrical.
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_4", "@cite_3" ], "mid": [ "2043690242", "2118262422", "2125298120", "" ], "abstract": [ "This paper proposes a method for bin-picking for objects without assuming the precise geometrical model of objects. We consider the case where the shape of objects are not uniform but are similarly approximated by cylinders. By using the point cloud of a single object, we extract the probabilistic properties with respect to the difference between an object and a cylinder and consider applying the probabilistic properties to the pick-and-place motion planner of an object stacked on a table. By using the probabilistic properties, we can also realize the contact state where a finger maintain contact with the target object while avoiding contact with other objects. We further consider approximating the region occupied by fingers by a rectangular parallelepiped. The pick-and-place motion is planned by using a set of regions in combination with the probabilistic properties. Finally, the effectiveness of the proposed method is confirmed by some numerical examples and experimental result.", "Automatic grasp planning for robotic hands is a difficult problem because of the huge number of possible hand configurations. However, humans simplify the problem by choosing an appropriate prehensile posture appropriate for the object and task to be performed. By modeling an object as a set of shape primitives, such as spheres, cylinders, cones and boxes, we can use a set of rules to generate a set of grasp starting positions and pregrasp shapes that can then be tested on the object model. Each grasp is tested and evaluated within our grasping simulator \"GraspIt!\", and the best grasps are presented to the user. 
The simulator can also plan grasps in a complex environment involving obstacles and the reachability constraints of a robot arm.", "In this paper we present a framework for 3D geometric shape segmentation for close-range scenes used in mobile manipulation and grasping, out of sensed point cloud data. Our proposed approach proposes a robust geometric mapping pipeline for large input datasets that extracts relevant objects useful for a personal robotic assistant to perform manipulation tasks. The objects are segmented out from partial views and a reconstructed model is computed by fitting geometric primitive classes such as planes, spheres, cylinders, and cones. The geometric shape coefficients are then used to reconstruct missing data. Residual points are resampled and triangulated, to create smooth decoupled surfaces that can be manipulated. The resulted map is represented as a hybrid concept and is comprised of 3D shape coefficients and triangular meshes used for collision avoidance in manipulation routines.", "" ] }
1707.05615
2758506095
We propose a novel formulation of robotic pick and place as a deep reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic manipulation frame the problem in terms of low level states and actions, we propose a more abstract formulation. In this formulation, actions are target reach poses for the hand and states are a history of such reaches. We show this approach can solve a challenging class of pick-place and regrasping problems where the exact geometry of the objects to be handled is unknown. The only information our method requires is: 1) the sensor perception available to the robot at test time; 2) prior knowledge of the general class of objects for which the system was trained. We evaluate our method using objects belonging to two different categories, mugs and bottles, both in simulation and on real hardware. Results show a major improvement relative to a shape primitives baseline.
Another approach to manipulating unknown objects is to estimate the object shape from a recent history of sensor feedback. For example, Dragiev and Toussaint explore an approach that models the implicit shape of an object as a Gaussian process @cite_24 . Mahler et al. do something similar for the purposes of grasping, while incorporating tactile feedback @cite_22 . These methods can run into trouble when there is not enough data to fit the implicit shape with high confidence. Both of the above approaches can be viewed as ways of estimating object shape and pose in order to facilitate traditional configuration-space planning. The problem of object pose and shape estimation given various amounts of prior data remains an active area of research @cite_5 @cite_17 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_24", "@cite_5", "@cite_17" ], "mid": [ "1909903157", "1559022906", "2107153006", "2101199297", "2207670488" ], "abstract": [ "Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data.", "Computing grasps for an object is challenging when the object geometry is not known precisely. In this paper, we explore the use of Gaussian process implicit surfaces (GPISs) to represent shape uncertainty from RGBD point cloud observations of objects. We study the use of GPIS representations to select grasps on previously unknown objects, measuring grasp quality by the probability of force closure. Our main contribution is GP-GPIS-OPT, an algorithm for computing grasps for parallel-jaw grippers on 2D GPIS object representations. 
Specifically, our method optimizes an approximation to the probability of force closure subject to antipodal constraints on the parallel jaws using Sequential Convex Programming (SCP). We also introduce GPIS-Blur, a method for visualizing 2D GPIS models based on blending shape samples from a GPIS. We test the algorithm on a set of 8 planar objects with transparency, translucency, and specularity. Our experiments suggest that GP-GPIS-OPT computes grasps with higher probability of force closure than a planner that does not consider shape uncertainty on our test objects and may converge to a grasp plan up to 5.7× faster than using Monte-Carlo integration, a common method for grasp planning under shape uncertainty. Furthermore, initial experiments on the Willow Garage PR2 robot suggest that grasps selected with GP-GPIS-OPT are up to 90% more successful than those planned assuming a deterministic shape. Our dataset, code, and videos of our experiments are available at http://rll.berkeley.edu/icra2015grasping .", "The choice of an adequate object shape representation is critical for efficient grasping and robot manipulation. A good representation has to account for two requirements: it should allow uncertain sensory fusion in a probabilistic way and it should serve as a basis for efficient grasp and motion generation. We consider Gaussian process implicit surface potentials as object shape representations. Sensory observations condition the Gaussian process such that its posterior mean defines an implicit surface which becomes an estimate of the object shape. Uncertain visual, haptic and laser data can equally be fused in the same Gaussian process shape estimate. The resulting implicit surface potential can then be used directly as a basis for a reach and grasp controller, serving as an attractor for the grasp end-effectors and steering the orientation of contact points. 
Our proposed controller results in a smooth reach and grasp trajectory without strict separation of phases. We validate the shape estimation using Gaussian processes in a simulation on randomly sampled shapes and the grasp controller on a real robot with 7DoF arm and 7DoF hand.", "We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.", "We propose a novel approach for real-time object pose detection and tracking that is highly scalable in terms of the number of objects tracked and the number of cameras observing the scene. Key to this scalability is a high degree of parallelism in the algorithms employed. The method maintains a single 3D simulated model of the scene consisting of multiple objects together with a robot operating on them. This allows for rapid synthesis of appearance, depth, and occlusion information from each camera viewpoint. This information is used both for updating the pose estimates and for extracting the low-level visual cues. 
The visual cues obtained from each camera are efficiently fused back into the single consistent scene representation using a constrained optimization method. The centralized scene representation, together with the reliability measures it enables, simplify the interaction between pose tracking and pose detection across multiple cameras. We demonstrate the robustness of our approach in a realistic manipulation scenario. We publicly release this work as a part of a general ROS software framework for real-time pose estimation, SimTrack, that can be integrated easily for different robotic applications." ] }
1707.05615
2758506095
We propose a novel formulation of robotic pick and place as a deep reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic manipulation frame the problem in terms of low level states and actions, we propose a more abstract formulation. In this formulation, actions are target reach poses for the hand and states are a history of such reaches. We show this approach can solve a challenging class of pick-place and regrasping problems where the exact geometry of the objects to be handled is unknown. The only information our method requires is: 1) the sensor perception available to the robot at test time; 2) prior knowledge of the general class of objects for which the system was trained. We evaluate our method using objects belonging to two different categories, mugs and bottles, both in simulation and on real hardware. Results show a major improvement relative to a shape primitives baseline.
Recently, there has been much advancement in grasping novel objects; Bohg et al. provide a survey @cite_0 . Most of these methods are trained in a supervised fashion to predict whether a grasp is stable or not. The present paper can be viewed as extending our prior work in grasp detection @cite_11 @cite_25 to pick-and-place and regrasping.
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_11" ], "mid": [ "", "2962737955", "2290564286" ], "abstract": [ "", "Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75 and 95 for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real-world grasping. This pap...", "This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. 
We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93 in dense clutter. This is a 20 improvement compared to our prior work." ] }
1707.05615
2758506095
We propose a novel formulation of robotic pick and place as a deep reinforcement learning (RL) problem. Whereas most deep RL approaches to robotic manipulation frame the problem in terms of low level states and actions, we propose a more abstract formulation. In this formulation, actions are target reach poses for the hand and states are a history of such reaches. We show this approach can solve a challenging class of pick-place and regrasping problems where the exact geometry of the objects to be handled is unknown. The only information our method requires is: 1) the sensor perception available to the robot at test time; 2) prior knowledge of the general class of objects for which the system was trained. We evaluate our method using objects belonging to two different categories, mugs and bottles, both in simulation and on real hardware. Results show a major improvement relative to a shape primitives baseline.
RL has long been studied for use in robot control; Kober et al. survey robotics applications that use RL @cite_14 . Since this survey, deep RL has become prominent in robotic manipulation @cite_16 @cite_12 @cite_1 . These methods operate at the motor-torque or Cartesian-motion-command level of the robot controller, whereas ours operates at a higher level.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_12", "@cite_16" ], "mid": [ "1977655452", "2962787403", "2964161785", "2601066903" ], "abstract": [ "Reinforcement learning offers to robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors. Conversely, the challenges of robotic problems provide both inspiration, impact, and validation for developments in reinforcement learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this article, we attempt to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots. We highlight both key challenges in robot reinforcement learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our paper lies on the choice between model-based and model-free as well as between value-function-based and policy-search methods. By analyzing a simple problem in some detail we demonstrate how reinforcement learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.", "", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. 
The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing." ] }
1707.05905
2737032267
Cloud computing is an important part of today's world because offloading computations is a method to reduce costs. In this paper, we investigate computing the Speeded Up Robust Features (SURF) using Fully Homomorphic Encryption (FHE). Performing SURF in FHE enables a method to offload the computations while maintaining the security and privacy of the original data. In support of this research, we developed a framework to compute SURF via a rational-number-based representation compatible with FHE. Although floating-point (R) to rational-number (Q) conversion introduces error, our research provides tight bounds on the magnitude of the error in terms of the parameters of FHE. We empirically verified the proposed method against a set of images of different sizes and showed that our framework accurately computes most of the SURF keypoints in FHE.
Over the past decade, growing interest in privacy has increased interest in secure signal processing. Troncoso-Pastoriza and Perez-Gonzalez provide an excellent survey of the problems and solution techniques for privacy in cloud computing @cite_15 . There are many other examples across varying fields: secure signal processing @cite_5 , biometrics @cite_7 , and image and video processing @cite_11 . Many specific examples use the Paillier encryption scheme (see @cite_10 for the scheme definition). Paillier is not an FHE scheme, as it provides only additive homomorphic operations (i.e., no ciphertext-ciphertext multiplication); FHE schemes provide both additive and multiplicative operations over ciphertexts. Hsu, Lu, and Pei used Paillier to implement a privacy-preserving SIFT @cite_10 . An encrypted version of SURF also exists using the Paillier scheme @cite_0 . The main difference here is that we use FHE instead of Paillier. A few other examples of using Paillier exist in signal processing @cite_16 @cite_14 . Other encryption techniques also exist, particularly for two-party computations that hide data from both sides, e.g., @cite_2 .
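The additive-only homomorphism attributed to Paillier above can be made concrete with a toy sketch: multiplying two ciphertexts decrypts to the sum of the plaintexts. The primes and message values below are illustrative assumptions; this is a minimal demo, not a secure or paper-accurate implementation.

```python
# Toy Paillier demo of additive homomorphism: E(m1) * E(m2) decrypts to m1 + m2.
# Small hard-coded primes for illustration only -- not secure.
import math
import random

p, q = 293, 433                       # toy primes (assumption)
n = p * q                             # public modulus
n2 = n * n
g = n + 1                             # standard generator choice
lam = math.lcm(p - 1, q - 1)          # private key
mu = pow(lam, -1, n)                  # since L(g^lam mod n^2) = lam mod n when g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # blinding factor must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

m1, m2 = 1234, 5678
c = (encrypt(m1) * encrypt(m2)) % n2  # multiply ciphertexts...
print(decrypt(c))                     # ...to add plaintexts: prints 6912
```

Note there is no way to multiply two plaintexts given only their ciphertexts here, which is exactly the limitation that FHE schemes remove.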
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2079447070", "2032001704", "2089468229", "", "", "2007470995", "", "", "2072194534" ], "abstract": [ "Secret image sharing is a method for distributing a secret image amongst n data stores, each storing a shadow image of the secret, such that the original secret image can be recovered only if any k out of the n shares is available. Existing secret image sharing schemes, however, do not support scaling and cropping operations on the shadow image, which are useful for zooming on large images. In this paper, we propose an image sharing scheme that allows the user to retrieve a scaled or cropped version of the secret image by operating directly on the shadow images, therefore reducing the amount of data sent from the data stores to the user. Results and analyses show that our scheme is highly secure, requires low computational cost, and supports a large number of scale factors with arbitrary crop.", "We present a theoretical framework for the analysis of privacy and security trade-offs in secure biometric authentication systems. We use this framework to conduct a comparative information-theoretic analysis of two biometric systems that are based on linear error correction codes, namely fuzzy commitment and secure sketches. We derive upper bounds for the probability of false rejection (PFR) and false acceptance (PFA) for these systems. We use mutual information to quantify the information leaked about a user's biometric identity, in the scenario where one or multiple biometric enrollments of the user are fully or partially compromised. We also quantify the probability of successful attack (PSA) based on the compromised information. Our analysis reveals that fuzzy commitment and secure sketch systems have identical PFR, PFA, PSA, and information leakage, but secure sketch systems have lower storage requirements. 
We analyze both single-factor (keyless) and two-factor (key-based) variants of secure biometrics, and consider the most general scenarios in which a single user may provide noisy biometric enrollments at several access control devices, some of which may be subsequently compromised by an attacker. Our analysis highlights the revocability and reusability properties of key-based systems and exposes a subtle design trade-off between reducing information leakage from compromised systems and preventing successful attacks on systems whose data have not been compromised.", "Signal processing in the encrypted domain has become a hot research topic, which enable signal processing tasks in a secure and privacy-preserving manner. Taken the fact that SURF (Speeded Up Robust Feature) has been widely utilized in various applications into account, SURF feature extraction method in the encrypted domain has been proposed in this paper. Because all steps must be implemented in the encrypted domain, Paillier homomorphic encryption method is adopted. Experimental results demonstrate that the number and location of SURF features extracted from the encrypted data are the same as those from the plaintext data. And the error between the descriptors obtained from the plaintext data and the encrypted data is only 0.0002932 . We also provide security analysis and complexity analysis. The proposed method can be used in the encrypted domain based applications, such as secure image processing and image retrieval.", "", "", "In recent years, the paradigm of cloud computing has gained an increasing interest from the academic community as well as from the commercial point of view. 
The cloud is a very appealing concept both for the providers (who can benefit from hiring out their extra computation and storage resources) and for the users (who can avoid the initial investment on resources by outsourcing their processes and data to a cloud).", "", "", "In an increasing number of image and video processing problems, cryptographic techniques are used to enforce content access control, identity verification and authentication, and privacy protection. The combination of cryptography and signal processing is an exciting emerging field. This introductory paper gives an overview of approaches and challenges that exist in applying cryptographic primitives to important image and video processing problems, including (partial) content encryption, secure face recognition, and secure biometrics. This paper aims to help the community in appreciating the utility and challenges of cryptographic techniques in image and video processing." ] }
1707.05905
2737032267
Cloud computing is an important part of today's world because offloading computations is a method to reduce costs. In this paper, we investigate computing the Speeded Up Robust Features (SURF) using Fully Homomorphic Encryption (FHE). Performing SURF in FHE enables a method to offload the computations while maintaining the security and privacy of the original data. In support of this research, we developed a framework to compute SURF via a rational-number-based representation compatible with FHE. Although floating-point (R) to rational-number (Q) conversion introduces error, our research provides tight bounds on the magnitude of the error in terms of the parameters of FHE. We empirically verified the proposed method against a set of images of different sizes and showed that our framework accurately computes most of the SURF keypoints in FHE.
FHE schemes have been evolving since the first scheme was developed by Gentry in 2009 @cite_17 . Over the next few years, schemes improved in both time and space complexity and moved from binary implementations to an integer ring @cite_9 @cite_13 @cite_6 @cite_1 . Our choice of the 2013 scheme is due to its ability to process ciphertexts within an integer ring. Other work has applied FHE to signal processing: Shortell and Shokoufandeh used FHE to implement a brightness contrast filter @cite_18 . That implementation uses a scaling-factor representation rather than a rational representation; we will show that our rational representation is more practical in comparison.
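The float-to-rational encoding behind such a representation can be sketched as follows. The `encode`/`decode` helpers and the denominator bound are hypothetical illustrations of the general idea, not the paper's exact scheme, and no encryption is performed here.

```python
# Sketch of encoding reals as rationals for an integer-ring FHE scheme:
# approximate each float by p/q with a bounded denominator, then carry the
# (numerator, denominator) integer pair through integer-only computations.
from fractions import Fraction

MAX_DEN = 10**6                        # denominator bound (assumed scheme parameter)

def encode(x):
    f = Fraction(x).limit_denominator(MAX_DEN)
    return f.numerator, f.denominator  # integers an FHE scheme over Z can hold

def decode(num, den):
    return num / den

x = 3.14159265
num, den = encode(x)
err = abs(decode(num, den) - x)
# limit_denominator returns the closest fraction with den <= MAX_DEN, so the
# rounding error is at most 1 / (2 * MAX_DEN).
assert err < 1.0 / MAX_DEN
```

The conversion error is controlled by the denominator bound, which mirrors the kind of parameter-dependent error bound the abstract above describes.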
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_1", "@cite_6", "@cite_13", "@cite_17" ], "mid": [ "2207441427", "2226167778", "", "1992282993", "2152926062", "104209573" ], "abstract": [ "This paper investigates the problem of performing signal processing via remote execution methods while maintaining the privacy of the data. Primary focus on this problem is a situation where there are two parties; a client with data or a signal that needs to be processed and a server with computational resources. Revealing the signal unencrypted causes a violation of privacy for the client. One solution to this problem is to process the data or signal while encrypted. Problems of this type have been attracting attention recently; particularly with the growing capabilities of cloud computing. We contribute to solving this type of problem by processing the signals in an encrypted form, using fully homomorphic encryption FHE. Three additional contributions of this manuscript includes 1 extending FHE to real numbers, 2 bounding the error related to the FHE process against the unencrypted variation of the process, and 3 increasing the practicality of FHE as a tool by using graphical processing units GPU. We demonstrate our contributions by applying these ideas to two classical problems: natural logarithm calculation and a brightness contrast filter.", "We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works, the ciphertext noise grows quadratically @math with every multiplication before "refreshing", our noise only grows linearly @math . We use this technique to construct a scale-invariant fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus q and the initial noise level B, and not on their absolute values.
Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process no need for \"modulus switching\", and this modulus can take arbitrary form. In addition, security can be classically reduced from the worst-case hardness of the GapSVP problem with quasi-polynomial approximation factor, whereas previous constructions could only exhibit a quantum reduction from GapSVP.", "", "We present a fully homomorphic encryption scheme that is based solely on the(standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of short vector problems'' on arbitrary lattices. Our construction improves on previous works in two aspects: We show that somewhat homomorphic'' encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. We deviate from the \"squashing paradigm'' used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the cipher texts and reduces the decryption complexity of our scheme, without introducing additional assumptions . Our scheme has very short cipher texts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is @math bits per single-bit query (here, @math is a security parameter).", "We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. 
Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).", "We propose the first fully homomorphic encryption scheme, solving an old open problem. Such a scheme allows one to compute arbitrary functions over encrypted data without the decryption key—i.e., given encryptions E(m1), ..., E( mt) of m1, ..., m t, one can efficiently compute a compact ciphertext that encrypts f(m1, ..., m t) for any efficiently computable function f. Fully homomorphic encryption has numerous applications. 
For example, it enables encrypted search engine queries—i.e., a search engine can give you a succinct encrypted answer to your (boolean) query without even knowing what your query was. It also enables searching on encrypted data; you can store your encrypted data on a remote server, and later have the server retrieve only files that (when decrypted) satisfy some boolean constraint, even though the server cannot decrypt the files on its own. More broadly, it improves the efficiency of secure multiparty computation. In our solution, we begin by designing a somewhat homomorphic \"boostrappable\" encryption scheme that works when the function f is the scheme's own decryption function. We then show how, through recursive self-embedding, bootstrappable encryption gives fully homomorphic encryption." ] }
1707.05635
2736946518
Representing texts as fixed-length vectors is central to many language processing tasks. Most traditional methods build text representations based on the simple Bag-of-Words (BoW) representation, which loses the rich semantic relations between words. Recent advances in natural language processing have shown that semantically meaningful representations of words can be efficiently acquired by distributed models, making it possible to build text representations based on a better foundation called the Bag-of-Word-Embedding (BoWE) representation. However, existing text representation methods using BoWE often lack sound probabilistic foundations or cannot well capture the semantic relatedness encoded in word vectors. To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. SPM has good probabilistic interpretability and can fully leverage the rich semantics of words, the word co-occurrence information as well as the corpus-wide information to help the representation learning of texts. Experimental results on topical classification and sentiment analysis demonstrate that SPM can achieve new state-of-the-art performances on several benchmark datasets.
To address this shortcoming, several dimensionality reduction methods have been proposed based on BoW, including matrix factorization methods such as LSI @cite_25 and NMF @cite_28 , and probabilistic topic models such as PLSI @cite_15 and LDA @cite_5 . The key idea of LSI is to map texts to a vector space of reduced dimensionality (i.e., the latent semantic space), based on a Singular Value Decomposition (SVD) over the term-document co-occurrence matrix. NMF is distinguished from the other methods by its non-negativity constraints, which lead to a parts-based representation because they allow only additive, not subtractive, combinations. In PLSI, each word is generated from a single topic, and different words in a document may be generated from different topics. LDA introduces a complete generative process over the documents and has been demonstrated to be a state-of-the-art document representation method. However, as they are built upon the BoW representation, all of these methods ignore the rich semantics of words and learn text representations purely from word-by-text co-occurrence information.
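The LSI step described above can be sketched in a few lines: a truncated SVD of the term-document matrix maps each document into a low-dimensional latent space. The toy corpus and the choice k=2 are illustrative assumptions, not from the cited work.

```python
# Minimal LSI sketch: truncated SVD of a term-document co-occurrence matrix.
import numpy as np

docs = ["cat sat mat", "cat cat dog", "stock market fund", "market fund bond"]
vocab = sorted({w for d in docs for w in d.split()})
# Term-by-document count matrix (rows: terms, columns: documents).
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                          # latent dimensionality (assumed)
doc_latent = (np.diag(s[:k]) @ Vt[:k]).T       # each row: a document in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents sharing a topic end up closer in the latent space than unrelated ones:
print(cos(doc_latent[0], doc_latent[1]) > cos(doc_latent[0], doc_latent[2]))
```

Queries can be folded into the same space by projecting their term vectors through `U[:, :k]`, which is how LSI retrieval typically proceeds.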
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_15", "@cite_25" ], "mid": [ "1902027874", "1880262756", "", "2147152072" ], "abstract": [ "Is perception of the whole based on perception of its parts? There is psychological1 and physiological2,3 evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations4,5. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. 
We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "", "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising." ] }
1707.05635
2736946518
Representing texts as fixed-length vectors is central to many language processing tasks. Most traditional methods build text representations based on the simple Bag-of-Words (BoW) representation, which loses the rich semantic relations between words. Recent advances in natural language processing have shown that semantically meaningful representations of words can be efficiently acquired by distributed models, making it possible to build text representations based on a better foundation called the Bag-of-Word-Embedding (BoWE) representation. However, existing text representation methods using BoWE often lack sound probabilistic foundations or cannot well capture the semantic relatedness encoded in word vectors. To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. SPM has good probabilistic interpretability and can fully leverage the rich semantics of words, the word co-occurrence information as well as the corpus-wide information to help the representation learning of texts. Experimental results on topical classification and sentiment analysis demonstrate that SPM can achieve new state-of-the-art performances on several benchmark datasets.
With rich semantics encoded in word vectors, a natural question is how to obtain a text representation based on them. A simple approach is to use a weighted average @cite_3 or sum of all the word vectors. Besides, the Fisher Vector (FV) @cite_3 transforms the variable-cardinality set of word vectors into a fixed-length text representation based on the Fisher kernel framework @cite_12 . However, these methods often lack sound probabilistic foundations. Meanwhile, a simple weighted sum or an aggregation using the Fisher kernel cannot well capture the semantic relatedness encoded in word vectors, which is typically revealed by the distance (or similarity) between them. Later, Paragraph Vector (PV), which has two model architectures (i.e., PV-DM and PV-DBOW) @cite_11 , was introduced to jointly learn the word and text representations. Although these models seem to work well in practice, they make a strong independence assumption between texts, limiting their ability to leverage corpus-wide information to help the representation learning of each piece of text.
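The simplest BoWE representation mentioned above, a (weighted) average of word vectors, can be sketched as follows. The 3-d embeddings and uniform weights are toy assumptions standing in for real pre-trained vectors.

```python
# Sketch of a text representation as a weighted average of word vectors (BoWE).
import numpy as np

embedding = {                        # hypothetical pre-trained word vectors
    "good":  np.array([0.9, 0.1, 0.0]),
    "great": np.array([0.8, 0.2, 0.1]),
    "bad":   np.array([-0.7, 0.1, 0.2]),
    "movie": np.array([0.0, 0.9, 0.3]),
}

def text_vector(text, weights=None):
    words = [w for w in text.split() if w in embedding]
    vecs = np.stack([embedding[w] for w in words])
    w = np.ones(len(words)) if weights is None else np.asarray(weights, float)
    return (w[:, None] * vecs).sum(axis=0) / w.sum()   # weighted average

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

v1 = text_vector("good movie")
v2 = text_vector("great movie")
v3 = text_vector("bad movie")
print(cos(v1, v2) > cos(v1, v3))    # similar texts end up closer
```

This also illustrates the limitation noted above: the averaging has no probabilistic interpretation, it simply pools vectors.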
{ "cite_N": [ "@cite_12", "@cite_3", "@cite_11" ], "mid": [ "", "2250434988", "2949547296" ], "abstract": [ "", "While words in documents are generally treated as discrete entities, they can be embedded in a Euclidean space which reflects an a priori notion of similarity between them. In such a case, a text document can be viewed as a bag-ofembedded-words (BoEW): a set of realvalued vectors. We propose a novel document representation based on such continuous word embeddings. It consists in non-linearly mapping the wordembeddings in a higher-dimensional space and in aggregating them into a documentlevel representation. We report retrieval and clustering experiments in the case where the word-embeddings are computed from standard topic models showing significant improvements with respect to the original topic models.", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks." ] }
1707.05635
2736946518
Representing texts as fixed-length vectors is central to many language processing tasks. Most traditional methods build text representations based on the simple Bag-of-Words (BoW) representation, which loses the rich semantic relations between words. Recent advances in natural language processing have shown that semantically meaningful representations of words can be efficiently acquired by distributed models, making it possible to build text representations based on a better foundation called the Bag-of-Word-Embedding (BoWE) representation. However, existing text representation methods using BoWE often lack sound probabilistic foundations or cannot well capture the semantic relatedness encoded in word vectors. To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. SPM has good probabilistic interpretability and can fully leverage the rich semantics of words, the word co-occurrence information as well as the corpus-wide information to help the representation learning of texts. Experimental results on topical classification and sentiment analysis demonstrate that SPM can achieve new state-of-the-art performances on several benchmark datasets.
Besides these unsupervised representation learning methods, there have been many supervised deep models that directly learn text representations for prediction tasks. The Recursive Neural Network @cite_20 has proven effective at constructing sentence representations. The Recurrent Neural Network @cite_0 can be viewed as an extremely deep neural network with weight sharing across time. The Convolutional Neural Network @cite_8 can identify discriminative phrases in a text with a max-pooling layer. However, these deep models are usually task dependent and time-consuming to train due to their complex model structures.
{ "cite_N": [ "@cite_0", "@cite_20", "@cite_8" ], "mid": [ "196214544", "2251939518", "" ], "abstract": [ "Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. 
It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "" ] }
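The max-over-time pooling mechanism mentioned above can be illustrated with a toy sketch. All dimensions and weights below are made up for illustration and do not correspond to any cited model: a single convolutional filter scores every trigram window of a sentence, and the pooling step keeps only the highest-scoring window, i.e., the most discriminative phrase.

```python
import random

# Toy max-over-time pooling for text, with made-up dimensions and random
# weights (illustrative only, not any cited architecture).
random.seed(1)
n_tokens, dim, width = 8, 4, 3
embeddings = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_tokens)]
kernel = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(width)]

def window_score(start):
    # Dot product of one trigram window with the filter weights.
    return sum(embeddings[start + r][c] * kernel[r][c]
               for r in range(width) for c in range(dim))

# Valid 1-D convolution over token positions: one score per trigram window.
scores = [window_score(i) for i in range(n_tokens - width + 1)]
best = max(range(len(scores)), key=scores.__getitem__)  # max-over-time pooling
print(f"most discriminative trigram starts at token {best}")
```

Whatever the sentence length, the pooling step reduces the variable number of window scores to a single fixed-size feature per filter, which is what makes the representation usable by a downstream classifier.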
1707.05635
2736946518
Representing texts as fixed-length vectors is central to many language processing tasks. Most traditional methods build text representations based on the simple Bag-of-Words (BoW) representation, which loses the rich semantic relations between words. Recent advances in natural language processing have shown that semantically meaningful representations of words can be efficiently acquired by distributed models, making it possible to build text representations based on a better foundation called the Bag-of-Word-Embedding (BoWE) representation. However, existing text representation methods using BoWE often lack sound probabilistic foundations or cannot well capture the semantic relatedness encoded in word vectors. To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. SPM has good probabilistic interpretability and can fully leverage the rich semantics of words, the word co-occurrence information as well as the corpus-wide information to help the representation learning of texts. Experimental results on topical classification and sentiment analysis demonstrate that SPM can achieve new state-of-the-art performances on several benchmark datasets.
The vMF distribution has properties analogous to those of the multivariate Gaussian distribution for data in @math , but is parameterized by cosine similarity rather than Euclidean distance. Evidence suggests that this type of directional measure (i.e., cosine similarity) is often superior to Euclidean distance in high dimensions @cite_1 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "1981081578", "1574901103" ], "abstract": [ "This paper presents a detailed empirical study of 12 generative approaches to text clustering, obtained by applying four types of document-to-cluster assignment strategies (hard, stochastic, soft and deterministic annealing (DA) based assignments) to each of three base models, namely mixtures of multivariate Bernoulli, multinomial, and von Mises-Fisher (vMF) distributions. A large variety of text collections, both with and without feature selection, are used for the study, which yields several insights, including (a) showing situations wherein the vMF-centric approaches, which are based on directional statistics, fare better than multinomial model-based methods, and (b) quantifying the trade-off between increased performance of the soft and DA assignments and their increased computational demands. We also compare all the model-based algorithms with two state-of-the-art discriminative approaches to document clustering based, respectively, on graph partitioning (CLUTO) and a spectral coclustering method. Overall, DA and CLUTO perform the best but are also the most computationally expensive. The vMF models provide good performance at low cost while the spectral coclustering algorithm fares worse than vMF-based methods for a majority of the datasets.", "Statistical approaches to processing natural language text have become dominant in recent years. This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. 
The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications." ] }
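The preference for cosine similarity over Euclidean distance noted above has a simple geometric reading: once vectors are projected onto the unit sphere (the setting the vMF distribution assumes), squared Euclidean distance becomes a monotone function of cosine similarity, ||u − v||² = 2(1 − cos θ). A minimal pure-Python check, using arbitrary random vectors chosen only for illustration:

```python
import math
import random

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

random.seed(0)
u = [random.gauss(0, 1) for _ in range(300)]
v = [random.gauss(0, 1) for _ in range(300)]

# Project both vectors onto the unit sphere, as the vMF model assumes.
u_hat = [a / math.sqrt(sum(x * x for x in u)) for a in u]
v_hat = [b / math.sqrt(sum(x * x for x in v)) for b in v]

# Identity: ||u_hat - v_hat||^2 == 2 * (1 - cos(u, v)).
sq_dist = sum((a - b) ** 2 for a, b in zip(u_hat, v_hat))
identity_gap = abs(sq_dist - 2 * (1 - cosine(u, v)))
print(identity_gap < 1e-12)
```

On the sphere the two measures therefore induce the same nearest-neighbor ordering; the practical difference arises for unnormalized vectors, where Euclidean distance is dominated by vector length (e.g., document length) rather than direction.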
1707.05978
2963164350
In this paper, the Relative Pose based Redundancy Removal (RPRR) scheme is presented, which has been designed for mobile RGB-D sensor networks operating under bandwidth-constrained operational scenarios. The scheme considers a multiview scenario in which pairs of sensors observe the same scene from different viewpoints, and detect the redundant visual and depth information to prevent their transmission, leading to a significant improvement in wireless channel usage efficiency and power savings. We envisage applications in which the environment is static, and rapid 3D mapping of an enclosed area of interest is required, such as disaster recovery and support operations after earthquakes or industrial accidents. Experimental results show that wireless channel utilization is improved by 250% and battery consumption is halved when the RPRR scheme is used instead of sending the sensor images independently.
The collaborative compression and transmission methods @cite_33 @cite_70 @cite_58 @cite_76 @cite_24 jointly encode the captured multi-view images. The spatial correlation is identified and removed at the encoders by image registration algorithms. Only the uncorrelated visual content is delivered over the network, after being jointly encoded with recent coding techniques (e.g., Multiview Video Coding (MVC) @cite_14 @cite_79 ) or compressive sensing approaches @cite_39 @cite_2 . However, at least one node in the network must hold the full set of images captured by the other sensors in order to perform image registration. This means that the redundant information cannot be removed completely and still needs to be transmitted at least once. Moreover, since color images do not contain a full 3-D representation of a scene, these methods introduce distortions and errors when the relative poses (location and orientation) between sensors are not pure rotations or translations, or when the scenes have complex geometric structures and occlusions.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_70", "@cite_58", "@cite_39", "@cite_24", "@cite_79", "@cite_2", "@cite_76" ], "mid": [ "2142536989", "", "2150490746", "2576041797", "1988077085", "1485536093", "", "2324283989", "2037092248" ], "abstract": [ "Significant improvements in video compression capability have been demonstrated with the introduction of the H.264 MPEG-4 advanced video coding (AVC) standard. Since developing this standard, the Joint Video Team of the ITU-T Video Coding Experts Group (VCEG) and the ISO IEC Moving Picture Experts Group (MPEG) has also standardized an extension of that technology that is referred to as multiview video coding (MVC). MVC provides a compact representation for multiple views of a video scene, such as multiple synchronized video cameras. Stereo-paired video for 3-D viewing is an important special case of MVC. The standard enables inter-view prediction to improve compression capability, as well as supporting ordinary temporal and spatial prediction. It also supports backward compatibility with existing legacy systems by structuring the MVC bitstream to include a compatible “base view.” Each other view is encoded at the same picture resolution as the base view. In recognition of its high-quality encoding capability and support for backward compatibility, the stereo high profile of the MVC extension was selected by the Blu-Ray Disc Association as the coding format for 3-D video with high-definition resolution. This paper provides an overview of the algorithmic design used for extending H.264 MPEG-4 AVC towards MVC. The basic approach of MVC for enabling inter-view prediction and view scalability in the context of H.264 MPEG-4 AVC is reviewed. Related supplemental enhancement information (SEI) metadata is also described. Various “frame compatible” approaches for support of stereo-view video as an alternative to MVC are also discussed. 
A summary of the coding performance achieved by MVC for both stereo- and multiview video is also provided. Future directions and challenges related to 3-D video are also briefly discussed.", "", "Due to the limited Field-Of-View (FOV) of a single camera, it is sometimes desired to extend the FOV using multiple cameras. Image stitching is one of the methods that can be used to exploit and remove the redundancy created by the overlapping FOV. However, the memory requirement and the amount of computation for conventional implementation of image stitching are very high. In this paper, this problem is resolved by performing the image stitching and compression in a strip-by-strip manner. First, the stitching parameters are determined by transmitting two reference images to an intermediate node to perform the processing. Then, these parameters are transmitted back to the visual node and stored in there. These parameters will be used to determine the way of stitching the incoming images in a strip-by-strip manner. After the stitching of a strip is done, it can be further compressed using a strip-based compression technique.", "In this paper, we design a novel multiview video encoding decoding architecture for wirelessly multiview video streaming applications, e.g., 360 degrees video, Internet of Things (IoT) multimedia sensing, among others, based on distributed video coding and compressed sensing principles. Specifically, we focus on joint decoding of independently encoded compressively sampled multiview video streams. We first propose a novel side-information (SI) generation method based on a new interview motion compensation algorithm for multiview video joint reconstruction at the decoder end. Then, we propose a technique to fuse the received measurements with resampled measurements from the generated SI to perform the final recovery. 
Based on the proposed joint reconstruction method, we also derive a blind video quality estimation technique that can be used to adapt online the video encoding rate at the sensors to guarantee desired quality levels in multiview video streaming. Extensive simulation results of real multiview video traces show the effectiveness of the proposed fusion reconstruction method with the assistance of SI generated by an interview motion compensation method. Moreover, they also illustrate that the blind quality estimation algorithm can accurately estimate the reconstruction quality.", "In this article, a multiview image compression framework, which involves the use of Block-based Compressive Sensing (BCS) and Joint Multiphase Decoding (JMD), is proposed for a Visual Sensor Network (VSN). In the proposed framework, one of the sensor nodes is configured to serve as the reference node, the others as nonreference nodes. The images are encoded independently using the BCS to produce two observed measurements that are transmitted to the host workstation. In this case, the nonreference nodes always encoded the images (INR) at a lower subrate when compared with the images from the reference nodes (IR). The idea is to improve the reconstruction of INR using IR. After the two observed measurements are received by the host workstation, they are first decoded independently, then image registration is applied to align IR onto the same plane of INR. The aligned IR is then fused with INR, using wavelets to produce the projected image IP. Subsequently, the difference between the measurements of the IP and INR is calculated. The difference is then decoded and added to IP to produce the final reconstructed INR. 
The simulation results show that the proposed framework is able to improve the quality of INR on average by 2dB to 3dB at lower subrates when compared with other Compressive Sensing (CS)--based multiview image compression frameworks.", "A major fraction of multimedia stream contents tends to be redundant and leads to wastage of storage capacity and channel bandwidth. In order to eliminate surplus data, standard video compression algorithms exploit spatial and temporal correlation present in video sequence. However, in case of a multisensor network, intersensor statistical redundancy is the most significant factor in acquiring efficient link utilization as well as making resultant findings valuable to the end user. In this paper, an extension to our previously proposed scheme has been presented to accomplish performance goals of a multisensor environment. Standard MPEG codec has been used to accomplish distributed motion compensation in prespecified directions known as directional correlation. Video frame correlation has been estimated locally at the camera node as well as across different nodes, defined as node communication strategies. Further, receiver feedback assists in quality control after reconstitution by decoder assessment. Results estimated have been analyzed for saving ratios and multimedia quality. Results analysis illustrates increased gains in frame quality and compression saving, achieved through reducing node displacement from the reference node (NR).", "", "Compressed sensing (CS) based image compression can achieve a very low sampling rate, which is ideal for wireless sensor networks with respect to their energy consumption and data transmission. In this paper, an adaptive compressed sensing rate assignment algorithm that is based on the standard deviations of image blocks is proposed. Specifically, each image block is first assigned a fixed sampling rate. 
In addition to the fixed sampling rate, an adaptive sampling rate is then given to each block based on the standard deviation of the block. With this adaptive sampling strategy, higher sampling rates are assigned to blocks that are less compressible (e.g., blocks with complex textures are less compressible than blocks with a smooth background). The sensing matrix is constructed based on the assigned sampling rate. The fixed measurements and the adaptive measurements are concatenated to form the final measurements. Finally, the measurements are used to reconstruct the image on the decoding side. The experimental results demonstrate that the proposed algorithm can achieve image progressive transmission and improve the reconstruction quality of the images.", "The imaging sensors are able to provide intuitive visual information for quick recognition and decision. However, imaging sensors usually generate vast amount of data. Therefore, processing and coding of image data collected in a sensor network for the purpose of energy efficient transmission poses a significant technical challenge. In particular, multiple sensors may be collecting similar visual information simultaneously. We propose in this paper a novel collaborative image coding and transmission scheme to minimize the energy for data transmission. First, we apply a shape matching method to coarsely register images to find out maximal overlap to exploit the spatial correlation between images acquired from neighboring sensors. For a given image sequence, we transmit background image only once. A lightweight and efficient background subtraction method is employed to detect targets. Only the regions of target and their spatial locations are transmitted to the monitoring center. The whole image can then be reconstructed by fusing the background and the target images as well as their spatial locations. 
Experimental results show that the energy for image transmission can indeed be greatly reduced with collaborative image coding and transmission." ] }
1707.05978
2963164350
In this paper, the Relative Pose based Redundancy Removal (RPRR) scheme is presented, which has been designed for mobile RGB-D sensor networks operating under bandwidth-constrained operational scenarios. The scheme considers a multiview scenario in which pairs of sensors observe the same scene from different viewpoints, and detect the redundant visual and depth information to prevent their transmission, leading to a significant improvement in wireless channel usage efficiency and power savings. We envisage applications in which the environment is static, and rapid 3D mapping of an enclosed area of interest is required, such as disaster recovery and support operations after earthquakes or industrial accidents. Experimental results show that wireless channel utilization is improved by 250% and battery consumption is halved when the RPRR scheme is used instead of sending the sensor images independently.
Distributed source coding (DSC) algorithms @cite_10 @cite_34 @cite_66 @cite_36 @cite_17 are another promising approach to reducing redundant data in multiview VSN scenarios. Each DSC encoder operates independently but relies on joint decoding at the sink (the remote monitoring station). The advantage of these approaches is that the camera sensors do not need to communicate the captured visual information directly with other nodes in the network. Furthermore, these algorithms shift the computational complexity from the sensor nodes to the remote monitoring station, which fits the needs of VSNs well. However, the side information must be predicted as accurately as possible, and the correlation structure must be identifiable at the decoder side (the remote monitoring station) without accurate knowledge of the network topology and the sensor poses. These are the main disadvantages that have prevented DSC algorithms from being widely deployed. A detailed discussion of multi-view image compression and transmission schemes in VSNs is presented in @cite_38 .
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_34", "@cite_10", "@cite_66", "@cite_17" ], "mid": [ "", "2388381438", "2145419160", "2111698171", "2774676694", "" ], "abstract": [ "", "Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264 AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.", "We present a novel framework for robustly delivering video data from distributed wireless camera networks that are characterized by packet drops. The main focus in this work is on robustness which is imminently needed in a wireless setting. 
We propose two alternative models to capture interview correlation among cameras with overlapping views. The view-synthesis-based correlation model requires at least two other camera views and relies on both disparity estimation and view interpolation. The disparity-based correlation model requires only one other camera view and makes use of epipolar geometry. With the proposed models, we show how interview correlation can be exploited for robustness through the use of distributed source coding. The proposed approach has low encoding complexity, is robust while satisfying tight latency constraints and requires no intercamera communication. Our experiments show that on bursty packet erasure channels, the proposed H.263+1 based method outperforms baseline methods such as H.263+ with forward error correction and H.263+ with intra refresh by up to 2.5 dB. Empirical results further support the relative insensitivity of our proposed approach to the number of additional available camera views or their placement density.", "Wireless video communications promote promising opportunities involving commercial applications on a grand scale as well as highly specialized niche markets. In this regard, the design of efficient video coding systems, meeting such key requirements as low power, mobility and low complexity, is a challenging problem. The solution can be found in fundamental information theoretic results, which gave rise to the distributed video coding (DVC) paradigm, under which lightweight video encoding schemes can be engineered. This article presents a new hash-based DVC architecture incorporating a novel motion-compensated multi-hypothesis prediction technique. The presented method is able to adapt to the regional variations in temporal correlation in a frame. The proposed codec enables scalable Wyner-Ziv video coding and provides state-of-the-art distributed video compression performance. 
The key novelty of this article is the expansion of the application domain of DVC from conventional video material to medical imaging. Wireless capsule endoscopy in particular, which is essentially wireless video recording in a pill, is proven to be an important application field. The low complexity encoding characteristics, the ability of the novel motion-compensated multi-hypothesis prediction technique to adapt to regional degrees of temporal correlation (which is of crucial importance in the context of endoscopic video content), and the high compression performance make the proposed distributed video codec a strong candidate for future lightweight (medical) imaging applications.", "In a wireless multimedia sensor network (WMSN), the minimization of network energy consumption is a crucial task not just for scalar data but also for multimedia. In this network, a camera node (CN) captures images and transmits them to a base station (BS). Several sensor nodes (SNs) are also placed throughout the network to facilitate the proper functioning of the network. Transmitting an image requires a large amount of energy due to the image size and distance; however, SNs are resource constrained. Image compression is used to scale down image size; however, it is accompanied by a computational complexity trade-off. Moreover, direct image transmission to a BS requires more energy. Thus, in this paper, we present a distributed image compression architecture over WMSN for prolonging the overall network lifetime (at high throughput). Our scheme consists of three subtasks: determining the optimal camera radius for prolonging the CN lifetime, distributing image compression tasks among the potential SNs to balance the energy, and, finally, adopting a multihop hierarchical routing scheme to reduce the long-distance transmission energy. 
Simulation results show that our scheme can prolong the overall network lifetime and achieve high throughput, in comparison with a traditional routing scheme and its state-of-the-art variants.", "" ] }
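The DSC principle described above can be illustrated with a classic Slepian–Wolf-style toy example. The 8-bit alphabet, the coset modulus, and the correlation bound below are hypothetical choices for illustration, not taken from any cited codec: the encoder transmits only a coset index instead of the full sample, and the decoder recovers the exact value using correlated side information it already holds.

```python
def dsc_encode(x, modulus=4):
    # The encoder sends only the coset index (2 bits), never x itself,
    # and never communicates with the node holding the side information.
    return x % modulus

def dsc_decode(coset, side_info, modulus=4, alphabet=256):
    # The decoder picks the coset member closest to its side information.
    candidates = [c for c in range(alphabet) if c % modulus == coset]
    return min(candidates, key=lambda c: abs(c - side_info))

# Side information y differs from x by at most 1, while coset members are
# 4 apart, so the nearest coset member to y is always the true x.
for x in (0, 97, 128, 255):
    for err in (-1, 0, 1):
        y = min(max(x + err, 0), 255)
        assert dsc_decode(dsc_encode(x), y) == x
print("exact recovery from 2 bits plus side information")
```

This also makes the cited drawback concrete: the scheme only works because the decoder knows the correlation structure (here, |x − y| ≤ 1). If the correlation were weaker than assumed, the nearest coset member could be the wrong one.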
1707.05978
2963164350
In this paper, the Relative Pose based Redundancy Removal (RPRR) scheme is presented, which has been designed for mobile RGB-D sensor networks operating under bandwidth-constrained operational scenarios. The scheme considers a multiview scenario in which pairs of sensors observe the same scene from different viewpoints, and detect the redundant visual and depth information to prevent their transmission, leading to a significant improvement in wireless channel usage efficiency and power savings. We envisage applications in which the environment is static, and rapid 3D mapping of an enclosed area of interest is required, such as disaster recovery and support operations after earthquakes or industrial accidents. Experimental results show that wireless channel utilization is improved by 250% and battery consumption is halved when the RPRR scheme is used instead of sending the sensor images independently.
The algorithms mentioned above focus only on color (RGB) data. Only a few studies @cite_37 @cite_26 @cite_32 use RGB-D sensors in VSNs, as their use in networked robotics scenarios has not yet become widespread. Consequently, our extensive review of the research literature has not identified any earlier study that develops an efficient coding system aiming to maximize bandwidth usage efficiency and minimize energy consumption for RGB-D-equipped VSNs.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_32" ], "mid": [ "", "2017522696", "2000462596" ], "abstract": [ "", "Mirrors are indispensable objects in our lives. The capability of simulating a mirror on a computer display, augmented with virtual scenes and objects, opens the door to many interesting and useful applications from fashion design to medical interventions. Realistic simulation of a mirror is challenging as it requires accurate viewpoint tracking and rendering, wide-angle viewing of the environment, as well as real-time performance to provide immediate visual feedback. In this paper, we propose a virtual mirror rendering system using a network of commodity structured-light RGB-D cameras. The depth information provided by the RGB-D cameras can be used to track the viewpoint and render the scene from different prospectives. Missing and erroneous depth measurements are common problems with structured-light cameras. A novel depth denoising and completion algorithm is proposed in which the noise removal and interpolation procedures are guided by the foreground background label at each pixel. The foreground background label is estimated using a probabilistic graphical model that considers color, depth, background modeling, depth noise modeling, and spatial constraints. The wide viewing angle of the mirror system is realized by combining the dynamic scene, captured by the static camera network with a 3-D background model created off-line, using a color-depth sequence captured by a movable RGB-D camera. To ensure a real-time response, a scalable client-and-server architecture is used with the 3-D point cloud processing, the viewpoint estimate, and the mirror image rendering are all done on the client side. The mirror image and the viewpoint estimate are then sent to the server for final mirror view synthesis and viewpoint refinement. 
Experimental results are presented to show the accuracy and effectiveness of each component and the entire system.", "This paper presents an architecture, protocol, and parallel algorithms for collaborative 3D mapping in the cloud with low-cost robots. The robots run a dense visual odometry algorithm on a smartphone-class processor. Key-frames from the visual odometry are sent to the cloud for parallel optimization and merging with maps produced by other robots. After optimization the cloud pushes the updated poses of the local key-frames back to the robots. All processes are managed by Rapyuta, a cloud robotics framework that runs in a commercial data center. This paper includes qualitative visualization of collaboratively built maps, as well as quantitative evaluation of localization accuracy, bandwidth usage, processing speeds, and map storage." ] }
1707.05950
2738622480
In this paper, we propose relative projective differential invariants (RPDIs) which are invariant to general projective transformations. By using RPDIs and the structural frame of integral invariant, projective weighted moment invariants (PIs) can be constructed very easily. It is first proved that a kind of projective invariants exists in terms of weighted integration of images, with relative differential invariants as the weight functions. Then, some simple instances of PIs are given. In order to ensure the stability and discriminability of PIs, we discuss how to calculate partial derivatives of discrete images more accurately. Since the number of pixels in discrete images before and after the geometric transformation may be different, we design a method to normalize the number of pixels. These measures enhance the performance of PIs. Finally, we carry out some experiments based on synthetic and real image datasets. We choose commonly used moment invariants for comparison. The results indicate that PIs have better performance than other moment invariants in image retrieval and classification. With PIs, one can compare the similarity between images under the projective transformation without knowing the parameters of the transformation, which provides a good tool for shape analysis in image processing, computer vision and pattern recognition.
From the viewpoint of projective geometry, the only invariant property under general projective transformations is the cross ratio, which can be expressed in several ways. The cross ratio is defined locally for points on a straight line or for line bundles, and is not easy to apply directly to images @cite_18 . As a result, researchers began to construct new projective invariants of images and achieved some results. In this section, previous work directly related to this paper is described. We will also point out its limitations and mistakes.
{ "cite_N": [ "@cite_18" ], "mid": [ "2169009293" ], "abstract": [ "This paper presents geometric invariants of points and their applications under central catadioptric camera model. Although the image has severe distortions under the model, we establish some accurate projective geometric invariants of scene points and their image points. These invariants, being functions of principal point, are useful, from which a method for calibrating the camera principal point and a method for recovering planar scene structures are proposed. The main advantage of using these in variants for plane reconstruction is that neither camera motion nor the intrinsic parameters, except for the principal point, is needed. The theoretical correctness of the established invariants and robustness of the proposed methods are demonstrated by experiments. In addition, our results are found to be applicable to some more general camera models other than the catadioptric one" ] }
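The cross-ratio invariance mentioned above is easy to verify numerically. The sketch below uses arbitrary example points and an arbitrary nondegenerate 1-D projective (Möbius) map, chosen only for illustration, and checks with exact rational arithmetic that the cross ratio of four collinear points is unchanged by x ↦ (ax + b)/(cx + d):

```python
from fractions import Fraction

def cross_ratio(x1, x2, x3, x4):
    # Cross ratio (x1, x2; x3, x4) of four points on a projective line.
    return ((x1 - x3) * (x2 - x4)) / ((x1 - x4) * (x2 - x3))

def projective_1d(x, a, b, c, d):
    # 1-D projective (Moebius) map x -> (a*x + b) / (c*x + d), ad - bc != 0.
    return (a * x + b) / (c * x + d)

# Four arbitrary collinear points, kept as exact rationals.
pts = [Fraction(p) for p in (0, 1, 3, 7)]
before = cross_ratio(*pts)
after = cross_ratio(*(projective_1d(x, 2, 1, 1, 5) for x in pts))
print(before, after, before == after)  # 9/7 9/7 True
```

The invariance holds for any choice of (a, b, c, d) with ad − bc ≠ 0, provided no point is mapped to infinity, which is exactly the local, point-configuration nature of the cross ratio that makes it awkward to apply to dense image intensities.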
1707.05950
2738622480
In this paper, we propose relative projective differential invariants (RPDIs) which are invariant to general projective transformations. By using RPDIs and the structural frame of integral invariant, projective weighted moment invariants (PIs) can be constructed very easily. It is first proved that a kind of projective invariants exists in terms of weighted integration of images, with relative differential invariants as the weight functions. Then, some simple instances of PIs are given. In order to ensure the stability and discriminability of PIs, we discuss how to calculate partial derivatives of discrete images more accurately. Since the number of pixels in discrete images before and after the geometric transformation may be different, we design a method to normalize the number of pixels. These measures enhance the performance of PIs. Finally, we carry out some experiments based on synthetic and real image datasets. We choose commonly used moment invariants for comparison. The results indicate that PIs have better performance than other moment invariants in image retrieval and classification. With PIs, one can compare the similarity between images under the projective transformation without knowing the parameters of the transformation, which provides a good tool for shape analysis in image processing, computer vision and pattern recognition.
Recently, Wang et al. proposed a kind of projective invariant in @cite_26 . Let an image @math be transformed by a projective transformation into the image @math . @math and @math in @math are the corresponding points of @math and @math in @math . Suppose that both @math and @math have first-order partial derivatives.
{ "cite_N": [ "@cite_26" ], "mid": [ "2171023081" ], "abstract": [ "This paper presents a novel method to derive invariants of 2D grayscale images under projective transformation. Invariants of images are good features for object recognition and have attracted extensive attention. Although geometric invariants of point locations such as cross ratios are well known for centuries, we have found no reported invariants for grayscale images that remain the same under projective transformation. It has even been proven that projective invariants of images cannot be derived from the standard geometric moments of images. However, this does not mean that there is no projective invariant of images in other forms. We will prove in this paper that projective invariants of images do exist as functions of the generalized moments of images. We first derive some projective invariant relations between an image function and its derivative functions. Next, we extend the traditional definition of moments by considering both the image function and its derivative functions. Then we derive a set of functions of the generalized moments that are projective invariant. Experimental results indicate that the proposed invariants have certain discriminating power for object recognition." ] }
1707.05950
2738622480
In this paper, we propose relative projective differential invariants (RPDIs) which are invariant to general projective transformations. By using RPDIs and the structural frame of integral invariant, projective weighted moment invariants (PIs) can be constructed very easily. It is first proved that a kind of projective invariants exists in terms of weighted integration of images, with relative differential invariants as the weight functions. Then, some simple instances of PIs are given. In order to ensure the stability and discriminability of PIs, we discuss how to calculate partial derivatives of discrete images more accurately. Since the number of pixels in discrete images before and after the geometric transformation may be different, we design the method to normalize the number of pixels. These ways enhance the performance of PIs. Finally, we carry out some experiments based on synthetic and real image datasets. We choose commonly used moment invariants for comparison. The results indicate that PIs have better performance than other moment invariants in image retrieval and classification. With PIs, one can compare the similarity between images under the projective transformation without knowing the parameters of the transformation, which provides a good tool to shape analysis in image processing, computer vision and pattern recognition.
These projective invariants can be represented as polynomials of the D-moment, which was defined by where @math . Obviously, ) is a kind of weighted moment. However, two problems deserve attention. First, in @cite_26 , all experimental results were obtained by using the instances of @math with @math . Unfortunately, similarly to ), ) is always zero, because @math . Therefore, the results in @cite_26 are not valid. Second, the definition of @math is theoretically correct, but only @math can be used in practice: when @math , the expansion of @math contains more than 70 million terms, and the number of terms grows exponentially with @math . This means that ) constructs only one projective invariant.
{ "cite_N": [ "@cite_26" ], "mid": [ "2171023081" ], "abstract": [ "This paper presents a novel method to derive invariants of 2D grayscale images under projective transformation. Invariants of images are good features for object recognition and have attracted extensive attention. Although geometric invariants of point locations such as cross ratios are well known for centuries, we have found no reported invariants for grayscale images that remain the same under projective transformation. It has even been proven that projective invariants of images cannot be derived from the standard geometric moments of images. However, this does not mean that there is no projective invariant of images in other forms. We will prove in this paper that projective invariants of images do exist as functions of the generalized moments of images. We first derive some projective invariant relations between an image function and its derivative functions. Next, we extend the traditional definition of moments by considering both the image function and its derivative functions. Then we derive a set of functions of the generalized moments that are projective invariant. Experimental results indicate that the proposed invariants have certain discriminating power for object recognition." ] }
1707.05710
2736958405
We consider the spatially inhomogeneous Landau equation with initial data that is bounded by a Gaussian in the velocity variable. In the case of moderately soft potentials, we show that weak solutions immediately become smooth and remain smooth as long as the mass, energy, and entropy densities remain under control. For very soft potentials, we obtain the same conclusion with the additional assumption that a sufficiently high moment of the solution in the velocity variable remains bounded. Our proof relies on the iteration of local Schauder-type estimates.
Regarding the existence theory for , global-in-time classical solutions have only been constructed in the close-to-equilibrium setting: see the work of Guo @cite_7 in the @math -periodic case, and Mouhot-Neumann @cite_0 in the whole space. For general initial data, Villani @cite_15 constructed so-called renormalized solutions with defect measure for the Landau equation. More recently, He-Yang @cite_20 established the short-time existence of spatially periodic classical solutions to in the Coulomb case ( @math ) with initial data in a weighted @math space, by taking the grazing collisions limit in their estimates on the Boltzmann collision operator. They assume that the mass density of the initial data is uniformly bounded away from zero. Since this lower bound along with the bounds , , , and can be shown to propagate for a short time, our Theorem combined with @cite_20 provides a @math solution to the Cauchy problem for suitable initial data. However, on physical grounds, the equation should be expected to be well-posed even with vacuum regions in the initial data. We explore this issue, as well as short-time existence for a broader range of @math , in a forthcoming paper.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_20", "@cite_7" ], "mid": [ "1990233096", "2297183116", "2000381089", "2039481916" ], "abstract": [ "For a general class of linear collisional kinetic models in the torus, including in particular the linearized Boltzmann equation for hard spheres, the linearized Landau equation with hard and moderately soft potentials and the semi-classical linearized fermionic and bosonic relaxation models, we prove explicit coercivity estimates on the associated integro-differential operator for some modified Sobolev norms. We deduce the existence of classical solutions near equilibrium for the full nonlinear models associated with explicit regularity bounds, and we obtain explicit estimates on the rate of exponential convergence towards equilibrium in this perturbative setting. The proof is based on a linear energy method which combines the coercivity property of the collision operator in the velocity space with transport effects, in order to deduce coercivity estimates in the whole phase space.", "", "In the present work, we consider the well-posedness and asymptotics of grazing collisions limit of the spatially inhomogeneous Boltzmann equation with Coulomb interaction. Under the screening hypothesis on the cross-section @math of the Boltzmann collision operator, that is, @math , where @math , we prove that there exists a common lifespan @math such that for any @math , the Boltzmann equation admits a unique nonnegative solution @math in the function space @math when the initial data belongs to the weighted Sobolev space @math with @math and @math . Moreover, it is proved that the solution @math is uniformly bounded with respect to the parameter @math in the above function spaces. As a consequence, by using the ...", "The Landau equation, which was proposed by Landau in 1936, is a fundamental equation to describe collisions among charged particles interacting with their Coulombic force. 
In this article, global in time classical solutions near Maxwellians are constructed for the Landau equation in a periodic box. Our result also covers a class of generalized Landau equations, which describes grazing collisions in a dilute gas." ] }
1707.05710
2736958405
We consider the spatially inhomogeneous Landau equation with initial data that is bounded by a Gaussian in the velocity variable. In the case of moderately soft potentials, we show that weak solutions immediately become smooth and remain smooth as long as the mass, energy, and entropy densities remain under control. For very soft potentials, we obtain the same conclusion with the additional assumption that a sufficiently high moment of the solution in the velocity variable remains bounded. Our proof relies on the iteration of local Schauder-type estimates.
For the spatially homogeneous Landau equation, @math smoothing was established in @cite_2 in the case @math and @cite_4 in the @math case. For @math , the upper bounds of @cite_18 also imply smoothing via parabolic regularity theory. For @math , the result of Theorem is new even in the space homogeneous case, to the best of our knowledge.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_2" ], "mid": [ "2964023345", "2062484908", "2066588103" ], "abstract": [ "Abstract We consider a parabolic equation in nondivergence form, defined in the full space [ 0 , ∞ ) × R d , with a power nonlinearity as the right-hand side. We obtain an upper bound for the solution in terms of a weighted control in L p . This upper bound is applied to the homogeneous Landau equation with moderately soft potentials. We obtain an estimate in L ∞ ( R d ) for the solution of the Landau equation, for positive time, which depends only on the mass, energy and entropy of the initial data.", "We establish a simplified form for the Landau equation with Maxwellian-type molecules. We study in details the Cauchy problem associated to this equation, and some qualitative features of the solution. Explicit solutions are given.", "We study the Cauchy problem for the homogeneous Landau equation of kinetic theory, in the case of hard potentials. We prove that for a large class of initial data, there exists a unique weak solution to this problem, which becomes immediately smooth and rapidly decaying at infinity." ] }
1707.05674
2738538347
We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the minimum mean squared error (MMSE) channel estimator can be reduced to @math floating point operations, where @math is the channel dimension. While in the absence of structure the complexity is much higher, we obtain a similarly efficient (but suboptimal) estimator by using the MMSE estimator of the structured model as a blueprint for the architecture of a neural network. This network learns the MMSE estimator for the unstructured model, but only within the given class of estimators that contains the MMSE estimator for the structured model. Numerical simulations with typical spatial channel models demonstrate the generalization properties of the chosen class of estimators to realistic channel models.
If we have a ULA at the base station, we know that the covariance matrix has to be a Toeplitz matrix. Thus, we should choose @math as the set of positive semi-definite Toeplitz matrices. In this case, the ML estimate can no longer be given in closed form and iterative methods have to be used @cite_23 @cite_11 @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_23", "@cite_11" ], "mid": [ "2962739446", "1999640431", "2004064635" ], "abstract": [ "Massive MIMO is a variant of multiuser MIMO where the number of base-station antennas M is very large (typically ≈ 100), and generally much larger than the number of spatially multiplexed data streams (typically ≈ 10). The benefits of such approach have been intensively investigated in the past few years, and all-digital experimental implementations have also been demonstrated. Unfortunately, the front-end A D conversion necessary to drive hundreds of antennas, with a signal bandwidth of the order of 10 to 100 MHz, requires very large sampling bitrate and power consumption. In order to reduce such implementation requirements, Hybrid Digital-Analog architectures have been proposed. In particular, our work in this paper is motivated by one of such schemes named Joint Spatial Division and Multiplexing (JSDM), where the downlink precoder (resp., uplink linear receiver) is split into the product of a baseband linear projection (digital) and an RF reconfigurable beamforming network (analog), such that only a reduced number m M of A D converters and RF modulation demodulation chains is needed. In JSDM, users are grouped according to similarity of their channel dominant subspaces, and these groups are separated by the analog beamforming stage, where multiplexing gain in each group is achieved using the digital precoder. Therefore, it is apparent that extracting the channel subspace information of the M -dim channel vectors from snapshots of m-dim projections, with m M , plays a fundamental role in JSDM implementation. 
In this paper, we develop novel efficient algorithms that require sampling only m = O(2 √ M) specific array elements according to a coprime sampling scheme, and for a given p M , return a p-dim beamformer that has a performance comparable with the best p-dim beamformer that can be designed from the full knowledge of the exact channel covariance matrix. We assess the performance of our proposed estimators both analytically and empirically via numerical simulations. We also demonstrate by simulation that the proposed subspace estimation methods provide near-ideal performance for a massive MIMO JSDM system, by comparing with the case where the user channel covariances are perfectly known.", "Covariance matrices from stationary time series are Toeplitz. Multichannel and multidimensional processes have covariance matrices of block Toeplitz form. In these cases and many other situations, one knows that the actual covariance matrix belongs to a particular subclass of covariance matrices. This paper discusses a method for estimating a covariance matrix of specified structure from vector samples of the random process. The theoretical foundation of the method is to assume that the random process is zero-mean multivariate Gaussian, and to find the maximum-likelihood covariance matrix that has the specified structure. An existence proof is given and the solution is interpreted in terms of a minimum-entropy principle. The necessary gradient conditions that must be satisfied by the maximum-likelihood solution are derived and unique and nonunique analytic solutions for some simple problems are presented. A major contribution of this paper is an iterative algorithm that solves the necessary gradient equations for moderate-sized problems with reasonable computational ease. Theoretical convergence properties of the basic algorithm are investigated and robust modifications discussed. 
In doing maximum-entropy spectral analysis of a sine wave in white noise from a single vector sample, this new estimation procedure causes no splitting of the spectral line in contrast to the Burg technique.", "" ] }
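A common building block inside such iterative structured-covariance schemes (an assumption for illustration, not a step quoted from the cited algorithms) is the Frobenius-norm projection of a sample covariance onto the Toeplitz set, obtained by averaging each diagonal:

```python
def toeplitz_projection(C):
    """Project a square matrix C onto the set of Toeplitz matrices
    (in the Frobenius norm) by averaging along each diagonal."""
    n = len(C)
    # mean of the k-th diagonal, where k = j - i ranges over [-(n-1), n-1]
    diag_mean = {}
    for k in range(-(n - 1), n):
        vals = [C[i][i + k] for i in range(n) if 0 <= i + k < n]
        diag_mean[k] = sum(vals) / len(vals)
    # rebuild the matrix: entry (i, j) takes the mean of its diagonal
    return [[diag_mean[j - i] for j in range(n)] for i in range(n)]

# Noisy (non-Toeplitz) sample covariance of a hypothetical ULA channel
C_hat = [[1.0, 0.2, 0.0],
         [0.4, 1.2, 0.1],
         [0.1, 0.3, 0.8]]
T = toeplitz_projection(C_hat)
```

Alternating this projection with an eigenvalue-clipping step toward the positive semi-definite cone gives a simple (suboptimal) structured estimate; the cited ML methods instead iterate on the likelihood gradient conditions directly.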
1707.05674
2738538347
We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the minimum mean squared error (MMSE) channel estimator can be reduced to @math floating point operations, where @math is the channel dimension. While in the absence of structure the complexity is much higher, we obtain a similarly efficient (but suboptimal) estimator by using the MMSE estimator of the structured model as a blueprint for the architecture of a neural network. This network learns the MMSE estimator for the unstructured model, but only within the given class of estimators that contains the MMSE estimator for the structured model. Numerical simulations with typical spatial channel models demonstrate the generalization properties of the chosen class of estimators to realistic channel models.
The ML-based channel estimation techniques exploit the Toeplitz structure of the covariance matrix, which is a result of regular array geometries and the model with a continuous power density function @math . In the 3GPP models, this power density function usually has a very limited angular support, i.e., @math is approximately zero except for @math in the vicinity of the cluster centers @math . The resulting covariance matrices have a very low numerical rank @cite_20 . As a consequence, under such a model, any given realization of a channel vector admits a sparse approximation in a given dictionary @math , where all but @math entries of @math are zero. The vector @math can be found by solving the sparse approximation problem where @math denotes the number of nonzero entries of @math . This combinatorial optimization problem can be solved efficiently with methods from the area of compressive sensing, e.g., the orthogonal matching pursuit (OMP) algorithm @cite_8 or iterative hard thresholding (IHT) @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_20", "@cite_8" ], "mid": [ "2963322354", "2730373579", "2106448014" ], "abstract": [ "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper)", "", "The problem of optimal approximation of members of a vector space by a linear combination of members of a large overcomplete library of vectors is of importance in many areas including image and video coding, image analysis, control theory, and statistics. Finding the optimal solution in the general case is mathematically intractable. Matching pursuit, and its orthogonal version, provide greedy solutions to this problem. Orthogonal matching pursuit typically provides significantly better solution compared to the nonorthogonal version, but requires much more computation. This paper presents a fast algorithm for implementation of orthogonal matching pursuit which for many coding applications has a computational complexity very close to that of the nonorthogonal version." ] }
1707.05733
2566550653
Object detection is an essential task for autonomous robots operating in dynamic and changing environments. A robot should be able to detect objects in the presence of sensor noise that can be induced by changing lighting conditions for cameras and false depth readings for range sensors, especially RGB-D cameras. To tackle these challenges, we propose a novel adaptive fusion approach for object detection that learns weighting the predictions of different sensor modalities in an online manner. Our approach is based on a mixture of convolutional neural network (CNN) experts and incorporates multiple modalities including appearance, depth and motion. We test our method in extensive robot experiments, in which we detect people in a combined indoor and outdoor scenario from RGB-D data, and we demonstrate that our method can adapt to harsh lighting changes and severe camera motion blur. Furthermore, we present a new RGB-D dataset for people detection in mixed in- and outdoor environments, recorded with a mobile robot.
There has been a large body of research targeting pedestrian detection in the vision community @cite_14 @cite_3 . For a concise overview of pedestrian detection we refer to a recent discussion by Benenson et al. @cite_8 . They conclude that improved detection performance has been shown to be driven by the design of better features, but also complemented by additional data, such as image context and motion. To demonstrate the quality of features learned by a convolutional neural network, Hosang et al. @cite_10 reported improved results for pedestrian detection, using pre-trained off-the-shelf CNNs. More recently, Angelova et al. @cite_17 presented a convolutional network architecture that processes larger areas of the RGB input image and detects multiple pedestrians simultaneously, leading to a significant speedup at test time. However, none of these approaches make use of multiple modalities.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_3", "@cite_10", "@cite_17" ], "mid": [ "1903127635", "1650122911", "2156547346", "1976818984", "1526734559" ], "abstract": [ "Deep learning methods have achieved great successes in pedestrian detection, owing to its ability to learn discriminative features from raw pixels. However, they treat pedestrian detection as a single binary classification task, which may confuse positive with hard negative samples (Fig.1 (a)). To address this ambiguity, this work jointly optimize pedestrian detection with semantic tasks, including pedestrian attributes (e.g. ‘carrying backpack’) and scene attributes (e.g. ‘vehicle’, ‘tree’, and ‘horizontal’). Rather than expensively annotating scene attributes, we transfer attributes information from existing scene segmentation datasets to the pedestrian dataset, by proposing a novel deep model to learn high-level features from multiple tasks and multiple data sources. Since distinct tasks have distinct convergence rates and data from different datasets have different distributions, a multi-task deep model is carefully designed to coordinate tasks and reduce discrepancies among datasets. Extensive evaluations show that the proposed approach outperforms the state-of-the-art on the challenging Caltech [9] and ETH [10] datasets where it reduces the miss rates of previous deep models by 17 and 5.5 percent, respectively.", "Paper-by-paper results make it easy to miss the forest for the trees.We analyse the remarkable progress of the last decade by dis- cussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detec- tion quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. 
This new decision forest detector achieves the current best known performance on the challenging Caltech-USA dataset.", "Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9 reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.", "In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets historically underperform compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem into the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pretraining on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI dataset. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setup. Using additional data at training time our strongest convnet model is competitive even to detectors that use additional data (optical flow) at test time.", "Pedestrian detection is of crucial importance to autonomous driving applications. 
Methods based on deep learning have shown significant improvements in accuracy, which makes them particularly suitable for applications, such as pedestrian detection, where reducing the miss rate is very important. Although they are accurate, their runtime has been at best in seconds per image, which makes them not practical for onboard applications. We present a Large-Field-Of-View (LFOV) deep network for pedestrian detection, that can achieve high accuracy and is designed to make deep networks work faster for detection problems. The idea of the proposed Large-Field-of-View deep network is to learn to make classification decisions simultaneously and accurately at multiple locations. The LFOV network processes larger image areas at much faster speeds than typical deep networks have been able to, and can intrinsically reuse computations. Our pedestrian detection solution, which is a combination of a LFOV network and a standard deep network, works at 280 ms per image on GPU and achieves 35.85 average miss rate on the Caltech Pedestrian Detection Benchmark." ] }
1707.05712
2739150435
We provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different, but related, target distribution. Firstly, we propose an improvement of the previous approach we proposed in (2013), which relies on a novel distribution pseudodistance based on a disagreement averaging, allowing us to derive a new tighter domain adaptation bound for the target risk. While this bound stands in the spirit of common domain adaptation works, we derive a second bound (recently introduced in , 2016) that brings a new perspective on domain adaptation by deriving an upper bound on the target risk where the distributions' divergence—expressed as a ratio—controls the trade-off between a source error measure and the target voters' disagreement. We discuss and compare both results, from which we obtain PAC-Bayesian generalization bounds. Furthermore, from the PAC-Bayesian specialization to linear classifiers, we infer two learning algorithms, and we evaluate them on real data.
In this section, we review the two seminal works in domain adaptation that are based on a divergence measure between the domains @cite_3 @cite_14 @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_2" ], "mid": [ "2131953535", "2148440006", "2953369858" ], "abstract": [ "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.", "We consider the problem of learning accurate models from multiple sources of \"nearby\" data. Given distinct samples from multiple data sources and estimates of the dissimilarities between these sources, we provide a general theory of which samples should be used to learn models for each source. This theory is applicable in a broad decision-theoretic learning framework, and yields general results for classification and regression. A key component of our approach is the development of approximate triangle inequalities for expected loss, which may be of independent interest. We discuss the related problem of learning parameters of a distribution from multiple data sources. 
Finally, we illustrate our theory through a series of synthetic simulations.", "This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben- (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation." ] }
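Divergence measures of the kind reviewed here are often estimated empirically as a "proxy A-distance" 2(1 - 2·err), where err is the smallest error of a classifier trained to separate source from target samples. The following toy pure-Python sketch uses 1-D threshold classifiers on made-up data (the helper name and data are illustrative, not the estimator of the cited papers):

```python
def proxy_a_distance(source, target):
    """Estimate 2 * (1 - 2 * err), where err is the smallest error of a
    1-D threshold classifier separating source (label 0) from target (1)."""
    pts = sorted(source + target)
    n = len(pts)
    # candidate thresholds: below all points, between neighbors, above all
    cands = ([pts[0] - 1.0]
             + [(a + b) / 2.0 for a, b in zip(pts, pts[1:])]
             + [pts[-1] + 1.0])
    best = float("inf")
    for t in cands:
        # rule "x >= t -> target"; also consider the flipped rule
        err = (sum(x >= t for x in source) + sum(x < t for x in target)) / n
        best = min(best, err, 1.0 - err)
    return 2.0 * (1.0 - 2.0 * best)

# Well-separated domains give the maximal divergence of 2.0;
# identical domains give a divergence near 0.
d = proxy_a_distance([0.1, 0.2, 0.3], [0.7, 0.8, 0.9])
```

When no classifier in the family can tell the domains apart, err stays near 1/2 and the estimate collapses to 0, mirroring the "incapacity of hypotheses to detect differences between the domains" that the bounds below rely on.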
1707.05712
2739150435
We provide two main contributions in PAC-Bayesian theory for domain adaptation where the objective is to learn, from a source distribution, a well-performing majority vote on a different, but related, target distribution. Firstly, we propose an improvement of the previous approach we proposed in (2013), which relies on a novel distribution pseudodistance based on a disagreement averaging, allowing us to derive a new tighter domain adaptation bound for the target risk. While this bound stands in the spirit of common domain adaptation works, we derive a second bound (recently introduced in , 2016) that brings a new perspective on domain adaptation by deriving an upper bound on the target risk where the distributions' divergence—expressed as a ratio—controls the trade-off between a source error measure and the target voters' disagreement. We discuss and compare both results, from which we obtain PAC-Bayesian generalization bounds. Furthermore, from the PAC-Bayesian specialization to linear classifiers, we infer two learning algorithms, and we evaluate them on real data.
Equation can be tighter than Equation since it bounds the difference between the target error of a classifier and that of the optimal @math (though in some cases one can lead to an error term three times higher than the other; more details in @cite_2 ). Based on Theorem and a Rademacher complexity analysis, the authors provide a generalization bound on the target risk that expresses a trade-off between the disagreement (between @math and the best source hypothesis @math ), the complexity of @math , and---again---the ``incapacity'' of hypotheses to detect differences between the domains.
{ "cite_N": [ "@cite_2" ], "mid": [ "2953369858" ], "abstract": [ "This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben- (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation." ] }
1707.05647
2738831047
This paper presents a generic pre-processor for expediting conventional template matching techniques. Instead of locating the best matched patch in the reference image to a query template via exhaustive search, the proposed algorithm rules out regions with no possible matches with minimum computational efforts. While working on simple patch features, such as mean, variance and gradient, the fast pre-screening is highly discriminative. Its computational efficiency is gained by using a novel octagonal-star-shaped template and the inclusion-exclusion principle to extract and compare patch features. Moreover, it can handle arbitrary rotation and scaling of reference images effectively. Extensive experiments demonstrate that the proposed algorithm greatly reduces the search space while never missing the best match.
A great amount of research effort has been dedicated to designing efficient and effective template matching techniques. Starting from conventional full-search matching, Alkhansari proposed a technique that reduces the search space by pruning unmatched regions using a downsampled reference image @cite_26 . Pele and Werman developed a method to determine the optimal step size of sliding windows for full-search matching @cite_13 . On the topic of the latest full-search-equivalent techniques, Ouyang provided a comprehensive survey and compared the performance of several popular methods @cite_9 .
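The pre-screening idea can be illustrated with a toy sketch. The octagonal-star template and the inclusion-exclusion feature extraction of the paper are not reproduced here; a plain rectangular window mean, computed in O(1) per window with an integral image, stands in for the patch features, and the tolerance `tol` is illustrative:

```python
import numpy as np

def window_sums(img, h, w):
    """Sum of every h-by-w window via an integral image (O(1) per window)."""
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

def prescreened_match(ref, tmpl, tol=1.0):
    """Prune windows whose mean differs from the template mean by more
    than `tol`, then run exhaustive SSD only on the surviving windows."""
    h, w = tmpl.shape
    means = window_sums(ref, h, w) / (h * w)
    cand = np.argwhere(np.abs(means - tmpl.mean()) <= tol)
    best, best_ssd = None, np.inf
    for y, x in cand:
        ssd = np.sum((ref[y:y+h, x:x+w] - tmpl) ** 2)
        if ssd < best_ssd:
            best, best_ssd = (y, x), ssd
    return best, len(cand), means.size
```

For an exact-match query the pre-screen can never discard the best window (its mean difference is zero), which mirrors the "never missing the best match" property claimed above; for noisy queries `tol` must be chosen conservatively.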
{ "cite_N": [ "@cite_9", "@cite_26", "@cite_13" ], "mid": [ "", "2106237508", "1588903902" ], "abstract": [ "", "Template matching has many applications in signal processing, image processing, pattern recognition, and video compression. This paper proposes a fast coarse-to-fine template matching algorithm for finding the exact best match, i.e., the match that may be found by a full search. This is obtained by pruning the number of candidates in the full search using the results of a coarse search. Experimental results show that speed ups of a couple of orders of magnitude can easily be achieved using this method for typical low-noise cases of two-dimensional (2-D) template matching.", "This paper describes a method that accelerates pattern matching. The distance between a pattern and a window is usually close to the distance of the pattern to the adjacement windows due to image smoothness. We show how to exploit this fact to reduce the running time of pattern matching by adaptively sliding the window often by more than one pixel. The decision how much we can slide is based on a novel rank we define for each feature in the pattern. Implemented on a Pentium 4 3GHz processor, detection of a pattern with 7569 pixels in a 640 × 480 pixel image requires only 3.4ms." ] }
1707.05647
2738831047
This paper presents a generic pre-processor for expediting conventional template matching techniques. Instead of locating the best matched patch in the reference image to a query template via exhaustive search, the proposed algorithm rules out regions with no possible matches with minimum computational efforts. While working on simple patch features, such as mean, variance and gradient, the fast pre-screening is highly discriminative. Its computational efficiency is gained by using a novel octagonal-star-shaped template and the inclusion-exclusion principle to extract and compare patch features. Moreover, it can handle arbitrary rotation and scaling of reference images effectively. Extensive experiments demonstrate that the proposed algorithm greatly reduces the search space while never missing the best match.
Recently, feature-based image matching methods, such as the scale-invariant feature transform (SIFT) @cite_8 , have become more popular. After obtaining rotation- and scale-invariant features for both the template and the reference image, a data-fitting algorithm such as RANSAC @cite_17 is used to find matching patterns. ASIFT extends SIFT to be fully affine invariant @cite_24 . Dekel introduced a novel similarity measure termed best-buddies similarity (BBS) for comparing the features of two patches @cite_16 . BBS is robust against many types of geometric deformations and well suited for video tracking applications. These feature-based methods are generally time-consuming due to the heavy processing required to generate feature descriptors, and they may fail to work if the template is relatively small or lightly textured.
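To make the RANSAC step concrete, here is a minimal numpy sketch for the simplest possible motion model, a 2-D translation between putative correspondences. Real pipelines fit homographies or affine maps from SIFT matches; the iteration count and inlier threshold below are illustrative, not taken from @cite_17 :

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=1.0, seed=0):
    """Estimate a 2-D translation mapping src -> dst from putative
    correspondences that may contain gross outliers: repeatedly
    hypothesize a translation from one random match, keep the
    hypothesis with the most inliers, then refit on its consensus set."""
    rng = np.random.default_rng(seed)
    best_inl = np.zeros(len(src), bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # minimal sample: 1 match
        resid = np.linalg.norm(src + t - dst, axis=1)
        inl = resid < thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    # least-squares refit on the inlier consensus set
    t = (dst[best_inl] - src[best_inl]).mean(axis=0)
    return t, best_inl
```

The key property, visible even in this toy, is that a handful of grossly wrong matches does not bias the estimate: they simply never join the consensus set.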
{ "cite_N": [ "@cite_24", "@cite_16", "@cite_17", "@cite_8" ], "mid": [ "2052094314", "1913744585", "2085261163", "" ], "abstract": [ "If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure hightransitiontiltsillustration). The attainable transition tilt is measured for each affine image comparison method. 
The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.", "We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on counting the number of Best-Buddies Pairs (BBPs)—pairs of points in source and target sets, where each point is the nearest neighbor of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset.", "A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing", "" ] }
1707.05495
2736688973
In this paper, we propose the joint learning of attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on the use of either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that the prediction error would not propagate and thus affect the performance. Our proposed model uniquely integrates attention and Long Short Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without the prior knowledge of particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
Furthermore, state-of-the-art DNN-based multi-label algorithms have proposed different loss functions or architectures @cite_11 @cite_33 @cite_6 . For example, @cite_11 designs a rank-based loss that compensates the lowest-ranked positive labels, @cite_33 generates multi-label candidates on several grids and combines the results with max-pooling, and @cite_6 proposes a structured inference neural network that uses concept layers modeled with label graphs.
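As a hedged sketch of the kind of rank-based objective used for multi-label tagging, the following is a generic pairwise hinge ranking loss over label scores, not the exact formulation of @cite_11 (which uses an approximate top-k ranking objective):

```python
import numpy as np

def pairwise_rank_loss(scores, labels, margin=1.0):
    """Hinge-style pairwise ranking loss for multi-label prediction:
    every positive label should outscore every negative label by
    `margin`.  Averaged over all (positive, negative) pairs."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gaps = margin - pos[:, None] + neg[None, :]   # one entry per pair
    return np.maximum(gaps, 0.0).mean()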
{ "cite_N": [ "@cite_33", "@cite_6", "@cite_11" ], "mid": [ "", "2256558689", "1514027499" ], "abstract": [ "", "Images of scenes have various objects as well as abundant attributes, and diverse levels of visual categorization are possible. A natural image could be assigned with fine-grained labels that describe major components, coarse-grained labels that depict high level abstraction or a set of labels that reveal attributes. Such categorization at different concept layers can be modeled with label graphs encoding label information. In this paper, we exploit this rich information with a state-of-art deep learning framework, and propose a generic structured model that leverages diverse label relations to improve image classification performance. Our approach employs a novel stacked label prediction neural network, capturing both inter-level and intra-level label semantics. We evaluate our method on benchmark image datasets, and empirical results illustrate the efficacy of our model.", "Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications. While existing work usually use conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain could be obtained by combining convolutional architectures with approximate top- @math ranking objectives, as thye naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperforms the conventional visual features by about 10 , obtaining the best reported performance in the literature." ] }
1707.05471
2737935263
Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions are lacking for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
Most conventional techniques for dense semantic correspondence have employed handcrafted features such as SIFT @cite_9 or DAISY @cite_3 . To improve matching quality, they have focused on the optimization scheme. Liu @cite_33 pioneered the idea of dense correspondence across different scenes, proposing SIFT Flow, which is based on hierarchical dual-layer belief propagation. Inspired by this, Kim @cite_19 proposed the deformable spatial pyramid (DSP), which performs multi-scale regularization with a hierarchical graph. Among other methods are those that take an exemplar-LDA approach @cite_48 , employ joint image set alignment @cite_16 , or jointly solve for cosegmentation @cite_56 .
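The coarse-to-fine spirit shared by SIFT Flow and DSP can be caricatured in a few lines. This toy estimates a single global shift between two 1-D signals at a downsampled scale and then refines it at full resolution; it is not any of the cited algorithms (which regularize a per-pixel 2-D flow field), and the downsampling factor and search radius are illustrative:

```python
import numpy as np

def coarse_to_fine_shift(a, b, radius=2):
    """Toy hierarchical alignment: find the integer shift between two
    1-D signals at a 2x-downsampled scale, then refine it within
    +/- radius at full resolution (global shift only)."""
    def best_shift(x, y, candidates):
        errs = [np.sum((np.roll(x, s) - y) ** 2) for s in candidates]
        return candidates[int(np.argmin(errs))]
    coarse = best_shift(a[::2], b[::2], range(-len(a) // 4, len(a) // 4))
    return best_shift(a, b, range(2 * coarse - radius, 2 * coarse + radius + 1))
```

The point of the hierarchy is the cost: the coarse pass scans a wide candidate range cheaply, so the expensive full-resolution pass only needs a small refinement window.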
{ "cite_N": [ "@cite_33", "@cite_48", "@cite_9", "@cite_3", "@cite_56", "@cite_19", "@cite_16" ], "mid": [ "", "209424029", "2151103935", "2104853049", "2464606141", "2124861766", "1926639317" ], "abstract": [ "", "Determining dense semantic correspondences across objects and scenes is a difficult problem that underpins many higher-level computer vision algorithms. Unlike canonical dense correspondence problems which consider images that are spatially or temporally adjacent, semantic correspondence is characterized by images that share similar high-level structures whose exact appearance and geometry may differ. Motivated by object recognition literature and recent work on rapidly estimating linear classifiers, we treat semantic correspondence as a constrained detection problem, where an exemplar LDA classifier is learned for each pixel. LDA classifiers have two distinct benefits: (i) they exhibit higher average precision than similarity metrics typically used in correspondence problems, and (ii) unlike exemplar SVM, can output globally interpretable posterior probabilities without calibration, whilst also being significantly faster to train. We pose the correspondence problem as a graphical model, where the unary potentials are computed via convolution with the set of exemplar classifiers, and the joint potentials enforce smoothly varying correspondence assignment.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. 
This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these.", "We propose a new technique to jointly recover cosegmentation and dense per-pixel correspondence in two images. 
Our method parameterizes the correspondence field using piecewise similarity transformations and recovers a mapping between the estimated common \"foreground\" regions in the two images allowing them to be precisely aligned. Our formulation is based on a hierarchical Markov random field model with segmentation and transformation labels. The hierarchical structure uses nested image regions to constrain inference across multiple scales. Unlike prior hierarchical methods which assume that the structure is given, our proposed iterative technique dynamically recovers the structure along with the labeling. This joint inference is performed in an energy minimization framework using iterated graph cuts. We evaluate our method on a new dataset of 400 image pairs with manually obtained ground truth, where it outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation.", "We introduce a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences. Dense matching methods typically enforce both appearance agreement between matched pixels as well as geometric smoothness between neighboring pixels. Whereas the prevailing approaches operate at the pixel level, we propose a pyramid graph model that simultaneously regularizes match consistency at multiple spatial extents-ranging from an entire image, to coarse grid cells, to every single pixel. This novel regularization substantially improves pixel-level matching in the face of challenging image variations, while the \"deformable\" aspect of our model overcomes the strict rigidity of traditional spatial pyramids. 
Results on Label Me and Caltech show our approach outperforms state-of-the-art methods (SIFT Flow [15] and Patch-Match [2]), both in terms of accuracy and run time.", "Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully-connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally-consistent correspondences can be read off the graph. Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods." ] }
1707.05471
2737935263
Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions are lacking for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
Recently, CNN-based descriptors have been used to establish dense semantic correspondences. Zhou @cite_44 proposed a deep network that exploits cycle-consistency with a 3D CAD model @cite_45 as a supervisory signal. Choy @cite_32 proposed the universal correspondence network (UCN) based on fully convolutional feature learning. Most recently, Kim @cite_4 proposed the FCSS descriptor that formulates local self-similarity (LSS) @cite_55 within a fully convolutional network. Because of its LSS-based structure, FCSS is inherently insensitive to intra-class appearance variations while maintaining precise localization ability. However, none of these methods is able to handle non-rigid geometric variations.
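A rough sketch of the local self-similarity idea underlying FCSS: correlate a small central patch against its surrounding region, so the descriptor encodes the *layout* of self-similarities rather than raw appearance, which is what makes LSS-style descriptors tolerant to intra-class appearance changes. The real LSS descriptor @cite_55 bins this surface log-polarly and normalizes it; the patch and region sizes here are illustrative:

```python
import numpy as np

def local_self_similarity(img, y, x, patch=3, region=9):
    """Correlation surface of the central patch at (y, x) against every
    patch inside the surrounding region (high value = self-similar)."""
    p, r = patch // 2, region // 2
    center = img[y-p:y+p+1, x-p:x+p+1]
    n = region - patch + 1
    surface = np.empty((n, n))
    for i, cy in enumerate(range(y - r + p, y + r - p + 1)):
        for j, cx in enumerate(range(x - r + p, x + r - p + 1)):
            ssd = np.sum((img[cy-p:cy+p+1, cx-p:cx+p+1] - center) ** 2)
            surface[i, j] = np.exp(-ssd)
    return surface
```

Two object instances with very different textures can still produce similar surfaces if their parts repeat in the same spatial layout, which is the property FCSS carries over into a fully convolutional network.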
{ "cite_N": [ "@cite_4", "@cite_55", "@cite_32", "@cite_44", "@cite_45" ], "mid": [ "2593948489", "2144502914", "2435623039", "2474531669", "" ], "abstract": [ "We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks.", "We present an approach for measuring similarity between visual entities (images or videos) based on matching internal self-similarities. What is correlated across images (or across video sequences) is the internal layout of local self-similarities (up to some distortions), even though the patterns generating those local self-similarities are quite different in each of the images videos. These internal self-similarities are efficiently captured by a compact local \"self-similarity descriptor\"', measured densely throughout the image video, at multiple scales, while accounting for local and global geometric distortions. 
This gives rise to matching capabilities of complex visual data, including detection of objects in real cluttered images using only rough hand-sketches, handling textured objects with no clear boundaries, and detecting complex actions in cluttered video data with no prior learning. We compare our measure to commonly used image-based and video-based similarity measures, and demonstrate its applicability to object detection, retrieval, and action detection.", "A computer-implemented method for training a convolutional neural network (CNN) is presented. The method includes extracting coordinates of corresponding points in the first and second locations, identifying positive points in the first and second locations, identifying negative points in the first and second locations, training features that correspond to positive points of the first and second locations to move closer to each other, and training features that correspond to negative points in the first and second locations to move away from each other.", "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. 
We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and realto-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms stateof-the-art pairwise matching methods in correspondencerelated tasks.", "" ] }
1707.05471
2737935263
Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions are lacking for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
Several methods aim to alleviate geometric variations through extensions of SIFT Flow, including scale-less SIFT Flow (SLS) @cite_57 , scale-space SIFT Flow (SSF) @cite_8 , and generalized DSP (GDSP) @cite_18 . However, these techniques have a critical practical limitation: their computation increases linearly with the size of the search space. A generalized PatchMatch algorithm @cite_26 was proposed for efficient matching, leveraging a randomized search scheme. This was utilized by HaCohen @cite_24 in a non-rigid dense correspondence (NRDC) algorithm, but it employs weak matching evidence that cannot guarantee reliable performance. Geometric invariance to scale and rotation is provided by DFF @cite_20 , but its implicit smoothing model, which relies on randomized sampling and propagation of good estimates in the direct neighborhood, often induces mismatches. A segmentation-aware approach @cite_38 was proposed to provide geometric robustness for descriptors, e.g., SIFT @cite_9 , but it can have a negative effect on the discriminative power of the descriptor. Recently, Ham @cite_51 presented the Proposal Flow (PF) algorithm to estimate correspondences using object proposals. While these techniques provide some degree of geometric invariance, none of them can deal with affine transformations across images, which occur frequently in dense semantic correspondence.
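To see why the affine label space is so large: each label is a full 6-parameter map, a 2x2 matrix A plus a translation t. The sketch below applies a single such map to a whole image by inverse mapping with nearest-neighbor sampling; a dense affine field, as estimated by DCTM, would carry one (A, t) per pixel, so this global-warp toy is an illustration of the label, not of the method:

```python
import numpy as np

def affine_warp(img, A, t):
    """Warp an image by the affine map x -> A x + t (inverse mapping,
    nearest-neighbor sampling; out-of-range pixels are set to zero)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()])          # target coords
    src = np.linalg.inv(A) @ (coords - np.array(t)[:, None])
    src = np.rint(src).astype(int)
    valid = ((src >= 0) & (src < [[h], [w]])).all(axis=0)
    out = np.zeros_like(img)
    out.ravel()[valid] = img[src[0, valid], src[1, valid]]
    return out
```

Even a coarse discretization of ten values per parameter already yields a million labels per pixel, which is why exhaustive discrete enumeration is hopeless and DCTM instead interleaves discrete labeling with continuous refinement.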
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_26", "@cite_8", "@cite_9", "@cite_57", "@cite_24", "@cite_51", "@cite_20" ], "mid": [ "2158893304", "1960436198", "1763426478", "", "2151103935", "2016120301", "2106505277", "", "" ], "abstract": [ "In this work we exploit segmentation to construct appearance descriptors that can robustly deal with occlusion and background changes. For this, we downplay measurements coming from areas that are unlikely to belong to the same region as the descriptor's center, as suggested by soft segmentation masks. Our treatment is applicable to any image point, i.e. dense, and its computational overhead is in the order of a few seconds. We integrate this idea with Dense SIFT, and also with Dense Scale and Rotation Invariant Descriptors (SID), delivering descriptors that are densely computable, invariant to scaling and rotation, and robust to background changes. We apply our approach to standard benchmarks on large displacement motion estimation using SIFT-flow and wide-baseline stereo, systematically demonstrating that the introduction of segmentation yields clear improvements.", "We present a Generalized Deformable Spatial Pyramid (GDSP) matching algorithm for calculating the dense correspondence between a pair of images with large appearance variations. The main challenges of the problem generally originate in appearance dissimilarities and geometric variations between images. To address these challenges, we improve the existing Deformable Spatial Pyramid (DSP) [10] model by generalizing the search space and devising the spatial smoothness. The former is leveraged by rotations and scales, and the latter simultaneously considers dependencies between high-dimensional labels through the pyramid structure. 
Our spatial regularization in the high-dimensional space enables our model to effectively preserve the meaningful geometry of objects in the input images while allowing for a wide range of geometry variations such as perspective transform and non-rigid deformation. The experimental results on public datasets and challenging scenarios show that our method outperforms the state-of-the-art methods both qualitatively and quantitatively.", "PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. 
The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "Scale invariant feature detectors often find stable scales in only a few image pixels. Consequently, methods for feature matching typically choose one of two extreme options: matching a sparse set of scale invariant features, or dense matching using arbitrary scales. In this paper we turn our attention to the overwhelming majority of pixels, those where stable scales are not found by standard techniques. We ask, is scale-selection necessary for these pixels, when dense, scale-invariant matching is required and if so, how can it be achieved? We make the following contributions: (i) We show that features computed over different scales, even in low-contrast areas, can be different; selecting a single scale, arbitrarily or otherwise, may lead to poor matches when the images have different scales. (ii) We show that representing each pixel as a set of SIFTs, extracted at multiple scales, allows for far better matches than single-scale descriptors, but at a computational price. 
Finally, (iii) we demonstrate that each such set may be accurately represented by a low-dimensional, linear subspace. A subspace-to-point mapping may further be used to produce a novel descriptor representation, the Scale-Less SIFT (SLS), as an alternative to single-scale descriptors. These claims are verified by quantitative and qualitative tests, demonstrating significant improvements over existing methods.", "This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lenses, under non-rigid transformations, under different lighting, and over different backgrounds. We utilize a new coarse-to-fine scheme in which nearest-neighbor field computations using Generalized PatchMatch [ 2010] are interleaved with fitting a global non-linear parametric color model and aggregating consistent matching regions using locally adaptive constraints. Compared to previous correspondence approaches, our method combines the best of two worlds: It is dense, like optical flow and stereo reconstruction methods, and it is also robust to geometric and photometric variations, like sparse feature matching. We demonstrate the usefulness of our method using three applications for automatic example-based photograph enhancement: adjusting the tonal characteristics of a source image to match a reference, transferring a known mask to a new image, and kernel estimation for image deblurring.", "", "" ] }
1707.05471
2737935263
Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there lack practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
A possible approach for estimating dense affine transformation fields is to interpolate sparsely matched points using a method such as thin plate splines (TPS) @cite_47 , motion coherence @cite_30 , coherent point drift @cite_10 , or smoothly varying affine stitching @cite_41 . Moving least squares (MLS) is another scattered point interpolation technique, first introduced in @cite_28 to reconstruct a continuous function from a set of point samples by incorporating spatially-weighted least squares. MLS has been successfully used in applications such as image deformation @cite_59 , surface reconstruction @cite_39 , image super-resolution and denoising @cite_43 , and color transfer @cite_36 . Inspired by the MLS concept, our method utilizes it to regularize estimated affine transformation fields, but with a different weight function and an efficient computational scheme.
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_28", "@cite_36", "@cite_39", "@cite_43", "@cite_59", "@cite_47", "@cite_10" ], "mid": [ "1517084761", "", "1988744163", "2065632025", "2058524213", "", "2015475217", "", "" ], "abstract": [ "There are a number of important phenomena in motion perception involving coherence. Examples include motion capture and motion cooperativity. We propose a theoretical model, called the motion coherence theory, that gives a possible explanation for these effects (Yuille and Grzywacz, 1988a,b). In this framework, the aperture problem can also be thought of as a problem of coherence and given a similar explanation. We propose the concept of a velocity field defined everywhere in the image, even where there is no explicit motion information available. Through a cost function, the model imposes smoothness on the velocity field in a more general way than previous theories. In this paper, we provide a detailed theoretical analysis of the motion coherence theory. We discuss its relations with previous theories and show that some of them are approximations to it. A second paper (Grzywacz, Smith, and Yuille, 1988) provides extensions and detailed comparisons to psychophysical phenomena. The theory applies to both short-range and long-range motion. It places them in the same computational framework and provides a way to define interactions between the two processes.", "", "An analysis of moving least squares (m.l.s.) methods for smoothing and interpolating scattered data is presented. In particular, theorems are proved concerning the smoothness of interpolants and the description of m.l.s. processes as projection methods. Some properties of compositions of the m.l.s. projector, with projectors associated with finite-element schemes, are also considered.
The analysis is accompanied by examples of univariate and bivariate problems.", "This paper introduces a new color transfer method which is a process of transferring color of an image to match the color of another image of the same scene. The color of a scene may vary from image to image because the photographs are taken at different times, with different cameras, and under different camera settings. To solve for a full nonlinear and nonparametric color mapping in the 3D RGB color space, we propose a scattered point interpolation scheme using moving least squares and strengthen it with a probabilistic modeling of the color transfer in the 3D color space to deal with mis-alignments and noise. Experiments show the effectiveness of our method over previous color transfer methods both quantitatively and qualitatively. In addition, our framework can be applied for various instances of color transfer such as transferring color between different camera models, camera settings, and illumination conditions, as well as for video color transfers.", "We introduce a robust moving least-squares technique for reconstructing a piecewise smooth surface from a potentially noisy point cloud. We use techniques from robust statistics to guide the creation of the neighborhoods used by the moving least squares (MLS) computation. This leads to a conceptually simple approach that provides a unified framework for not only dealing with noise, but also for enabling the modeling of surfaces with sharp features.Our technique is based on a new robust statistics method for outlier detection: the forward-search paradigm. Using this powerful technique, we locally classify regions of a point-set to multiple outlier-free smooth regions. This classification allows us to project points on a locally smooth region rather than a surface that is smooth everywhere, thus defining a piecewise smooth surface and increasing the numerical stability of the projection operator. 
Furthermore, by treating the points across the discontinuities as outliers, we are able to define sharp features. One of the nice features of our approach is that it automatically disregards outliers during the surface-fitting phase.", "", "We provide an image deformation method based on Moving Least Squares using various classes of linear functions including affine, similarity and rigid transformations. These deformations are realistic and give the user the impression of manipulating real-world objects. We also allow the user to specify the deformations using either sets of points or line segments, the later useful for controlling curves and profiles present in the image. For each of these techniques, we provide simple closed-form solutions that yield fast deformations, which can be performed in real-time.", "", "" ] }
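The spatially-weighted least squares idea behind MLS, referenced in the related-work paragraph above, can be sketched in a 1-D toy form. The Gaussian weight and the `sigma` value below are illustrative assumptions, not the weight function used by the cited works (the paper explicitly uses a different one):

```python
import math

def mls_fit(samples, x, sigma=0.5):
    """Evaluate a moving-least-squares linear fit at query point x.

    samples: list of (xi, fi) point samples. Weights
    w_i = exp(-(x - xi)^2 / sigma^2) favour nearby samples, so the fitted
    line a + b*x varies smoothly with the query point -- the core idea
    behind MLS interpolation of scattered data.
    """
    # Accumulate the weighted normal equations for the model f(x) ~= a + b*x.
    s_w = s_wx = s_wxx = s_wf = s_wxf = 0.0
    for xi, fi in samples:
        w = math.exp(-((x - xi) ** 2) / sigma ** 2)
        s_w += w
        s_wx += w * xi
        s_wxx += w * xi * xi
        s_wf += w * fi
        s_wxf += w * xi * fi
    det = s_w * s_wxx - s_wx * s_wx
    a = (s_wf * s_wxx - s_wx * s_wxf) / det
    b = (s_w * s_wxf - s_wx * s_wf) / det
    return a + b * x

# Samples on f(x) = x: a weighted linear fit reproduces the line exactly.
print(mls_fit([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)], 0.5))
```

Because the basis is linear and the weights are positive, data lying exactly on a line is reproduced exactly at any query point, whatever `sigma` is.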
1707.05471
2737935263
Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there lack practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
More related to our work is the method of Lin @cite_31 , which jointly estimates correspondence and relative patch orientation for descriptors. However, it is formulated with pre-computed sparse correspondences and also requires considerable computation to solve a complex non-linear optimization. By contrast, our method adopts dense descriptors that can be evaluated efficiently for any affine transformation, and employs quadratic continuous optimization to rapidly infer dense affine transformation fields.
{ "cite_N": [ "@cite_31" ], "mid": [ "2091244319" ], "abstract": [ "Aligning image pairs with significant appearance change is a long standing computer vision challenge. Much of this problem stems from the local patch descriptors' instability to appearance variation. In this paper we suggest this instability is due less to descriptor corruption and more the difficulty in utilizing local information to canoni-cally define the orientation (scale and rotation) at which a patch's descriptor should be computed. We address this issue by jointly estimating correspondence and relative patch orientation, within a hierarchical algorithm that utilizes a smoothly varying parameterization of geometric transformations. By collectively estimating the correspondence and orientation of all the features, we can align and orient features that cannot be stably matched with only local information. At the price of smoothing over motion discontinuities (due to independent motion or parallax), this approach can align image pairs that display significant inter-image appearance variations." ] }
1707.05527
2950146989
In the Convex Body Chasing problem, we are given an initial point @math in @math and an online sequence of @math convex bodies @math . When we receive @math , we are required to move inside @math . Our goal is to minimize the total distance travelled. This fundamental online problem was first studied by Friedman and Linial (DCG 1993). They proved an @math lower bound on the competitive ratio, and conjectured that a competitive ratio depending only on d is possible. However, despite much interest in the problem, the conjecture remains wide open. We consider the setting in which the convex bodies are nested: @math . The nested setting is closely related to extending the online LP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints. Moreover, this setting retains much of the difficulty of the general setting and captures an essential obstacle in resolving Friedman and Linial's conjecture. In this work, we give the first @math -competitive algorithm for chasing nested convex bodies in @math .
The influential primal-dual framework of Buchbinder and Naor for online covering LPs @cite_3 can now be seen as a special case of nested convex body chasing with the @math -norm. In the former problem, the algorithm starts at the origin @math , and at each time @math , a linear constraint @math with non-negative @math and @math arrives. The goal is to maintain a feasible point @math that satisfies all previous requests while the coordinates of @math are only allowed to increase over time. The objective function @math (where @math is non-negative) can be assumed to be @math by scaling. Finally, note that in nested convex body chasing with covering constraints, it never helps to decrease any variable and hence online covering LP is indeed a special case of nested convex body chasing.
{ "cite_N": [ "@cite_3" ], "mid": [ "2122886291" ], "abstract": [ "We study a wide range of online covering and packing optimization problems. In an online covering problem, a linear cost function is known in advance, but the linear constraints that define the feasible solution space are given one by one, in rounds. In an online packing problem, the profit function as well as the packing constraints are not known in advance. In each round additional information (i.e., a new variable) about the profit function and the constraints is revealed. An online algorithm needs to maintain a feasible solution in each round; in addition, the solutions generated over the different rounds need to satisfy a monotonicity property. We provide general deterministic primal-dual algorithms for online fractional covering and packing problems. We also provide deterministic algorithms for several integral online covering and packing problems. Our algorithms are designed via a novel online primal-dual technique and are evaluated via competitive analysis." ] }
1707.05527
2950146989
In the Convex Body Chasing problem, we are given an initial point @math in @math and an online sequence of @math convex bodies @math . When we receive @math , we are required to move inside @math . Our goal is to minimize the total distance travelled. This fundamental online problem was first studied by Friedman and Linial (DCG 1993). They proved an @math lower bound on the competitive ratio, and conjectured that a competitive ratio depending only on d is possible. However, despite much interest in the problem, the conjecture remains wide open. We consider the setting in which the convex bodies are nested: @math . The nested setting is closely related to extending the online LP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints. Moreover, this setting retains much of the difficulty of the general setting and captures an essential obstacle in resolving Friedman and Linial's conjecture. In this work, we give the first @math -competitive algorithm for chasing nested convex bodies in @math .
While the online primal-dual framework @cite_3 has been applied successfully to many online problems, so far it is limited to LPs with covering and packing constraints, and minor tweaks thereof. An important goal is to extend the online LP framework more broadly beyond packing and covering LPs. For example, it is unclear how to do this even for seemingly simple formulations such as @math -server on depth- @math HSTs or Metrical Task Systems on a line. Since the nested convex body chasing problem corresponds to solving online LPs with arbitrary constraints (with both positive and negative entries) and a specific type of objective, understanding the nested convex body chasing problem is an essential step towards this goal. Indeed, this is one of our main motivations to consider this problem.
{ "cite_N": [ "@cite_3" ], "mid": [ "2122886291" ], "abstract": [ "We study a wide range of online covering and packing optimization problems. In an online covering problem, a linear cost function is known in advance, but the linear constraints that define the feasible solution space are given one by one, in rounds. In an online packing problem, the profit function as well as the packing constraints are not known in advance. In each round additional information (i.e., a new variable) about the profit function and the constraints is revealed. An online algorithm needs to maintain a feasible solution in each round; in addition, the solutions generated over the different rounds need to satisfy a monotonicity property. We provide general deterministic primal-dual algorithms for online fractional covering and packing problems. We also provide deterministic algorithms for several integral online covering and packing problems. Our algorithms are designed via a novel online primal-dual technique and are evaluated via competitive analysis." ] }
1707.05527
2950146989
In the Convex Body Chasing problem, we are given an initial point @math in @math and an online sequence of @math convex bodies @math . When we receive @math , we are required to move inside @math . Our goal is to minimize the total distance travelled. This fundamental online problem was first studied by Friedman and Linial (DCG 1993). They proved an @math lower bound on the competitive ratio, and conjectured that a competitive ratio depending only on d is possible. However, despite much interest in the problem, the conjecture remains wide open. We consider the setting in which the convex bodies are nested: @math . The nested setting is closely related to extending the online LP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints. Moreover, this setting retains much of the difficulty of the general setting and captures an essential obstacle in resolving Friedman and Linial's conjecture. In this work, we give the first @math -competitive algorithm for chasing nested convex bodies in @math .
Another motivation for studying the nested case is that it captures much of the inherent hardness of the general convex body chasing problem. For example, the @math lower bound @cite_9 for the general problem also holds in the nested setting. Moreover, several natural algorithms also fail for the nested case.
{ "cite_N": [ "@cite_9" ], "mid": [ "1966160207" ], "abstract": [ "A player moving in the plane is given a sequence of instructions of the following type: at step i a planar convex set Fi is specified, and the player has to move to a point in Fi. The player is charged for the distance traveled. We provide a strategy for the player which is competitive, i.e., for any sequence Fi the cost to the player is within a constant (multiplicative) factor of the \"off-line\" cost (i.e., the least possible cost when all Fi are known in advance). We conjecture that similar strategies can be developed for this game in any Euclidean space and perhaps even in all metric spaces. The analogous statement where convex sets are replaced by more general families of sets in a metric space includes many on-line off-line problems such as the k-server problem; we make some remarks on these more general problems." ] }
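The nested chasing cost model discussed above can be made concrete with a 1-D toy instance, taking nested intervals as the bodies. This greedy-projection sketch only illustrates the movement-cost objective; it is not one of the cited competitive algorithms, and greedy is known to fail already in higher dimensions:

```python
def clamp(x, lo, hi):
    """Nearest point of the interval [lo, hi] to x."""
    return min(max(x, lo), hi)

def chase_nested_intervals(x0, intervals):
    """Serve nested interval requests by moving to the nearest feasible
    point each time; return the total distance travelled. In 1-D this
    greedy strategy is optimal, but in d dimensions the lower bound
    discussed above rules out anything better than sqrt(d)-competitive."""
    x, cost = x0, 0.0
    for lo, hi in intervals:
        y = clamp(x, lo, hi)
        cost += abs(y - x)
        x = y
    return cost

# Start at 0; requests shrink: [1,5] -> [2,5] -> [3,4].
# Moves: 0->1, 1->2, 2->3, so the total distance is 3.0.
print(chase_nested_intervals(0.0, [(1.0, 5.0), (2.0, 5.0), (3.0, 4.0)]))
```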
1707.05527
2950146989
In the Convex Body Chasing problem, we are given an initial point @math in @math and an online sequence of @math convex bodies @math . When we receive @math , we are required to move inside @math . Our goal is to minimize the total distance travelled. This fundamental online problem was first studied by Friedman and Linial (DCG 1993). They proved an @math lower bound on the competitive ratio, and conjectured that a competitive ratio depending only on d is possible. However, despite much interest in the problem, the conjecture remains wide open. We consider the setting in which the convex bodies are nested: @math . The nested setting is closely related to extending the online LP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints. Moreover, this setting retains much of the difficulty of the general setting and captures an essential obstacle in resolving Friedman and Linial's conjecture. In this work, we give the first @math -competitive algorithm for chasing nested convex bodies in @math .
The only known algorithms for chasing convex bodies in @math with @math are for certain restricted families of convex bodies @math such as lines and affine subspaces. For chasing lines, Friedman and Linial @cite_9 gave an @math -competitive algorithm. For chasing lines and half-lines, Sitters @cite_7 showed that the generalized work function algorithm (WFA) is also @math -competitive; this is interesting as the WFA is a generic algorithm that attains nearly-optimal competitive ratios for many metrical service systems (MSS) and is a natural candidate to be @math -competitive for convex body chasing. Recently, @cite_0 gave an elegant and simple @math -competitive algorithm for chasing lines, and a @math -competitive algorithm for chasing affine subspaces. However, all these results crucially rely on the fact that the requests @math are lower-dimensional and do not seem to apply to our problem.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_7" ], "mid": [ "2345314460", "1966160207", "2061319653" ], "abstract": [ "We consider three related online problems: Online Convex Optimization, Convex Body Chasing, and Lazy Convex Body Chasing. In Online Convex Optimization the input is an online sequence of convex functions over some Euclidean space. In response to a function, the online algorithm can move to any destination point in the Euclidean space. The cost is the total distance moved plus the sum of the function costs at the destination points. Lazy Convex Body Chasing is a special case of Online Convex Optimization where the function is zero in some convex region, and grows linearly with the distance from this region. And Convex Body Chasing is a special case of Lazy Convex Body Chasing where the destination point has to be in the convex region. We show that these problems are equivalent in the sense that if any of these problems have an O(1)-competitive algorithm then all of the problems have an O(1)-competitive algorithm. By leveraging these results we then obtain the first O(1)-competitive algorithm for Online Convex Optimization in two dimensions, and give the first O(1)-competitive algorithm for chasing linear subspaces. We also give a simple algorithm and O(1)-competitiveness analysis for chasing lines.", "A player moving in the plane is given a sequence of instructions of the following type: at step i a planar convex set Fi is specified, and the player has to move to a point in Fi. The player is charged for the distance traveled. We provide a strategy for the player which is competitive, i.e., for any sequence Fi the cost to the player is within a constant (multiplicative) factor of the \"off-line\" cost (i.e., the least possible cost when all Fi are known in advance). We conjecture that similar strategies can be developed for this game in any Euclidean space and perhaps even in all metric spaces.
The analogous statement where convex sets are replaced by more general families of sets in a metric space includes many on-line off-line problems such as the k-server problem; we make some remarks on these more general problems.", "The generalized 2-server problem is an online optimization problem where a sequence of requests has to be served at minimal cost. Requests arrive one by one and need to be served instantly by at least one of two servers. We consider the general model where the cost function of the two servers may be different. Formally, each server moves in its own metric space and a request consists of one point in each metric space. It is served by moving one of the two servers to its request point. Requests have to be served without knowledge of future requests. The objective is to minimize the total traveled distance. The special case where both servers move on the real line is known as the CNN problem. We show that the generalized work function algorithm, @math , is constant competitive for the generalized 2-server problem. Further, we give an outline for a possible extension to @math servers and discuss the applicability of our techniques and of the work function algorithm in general. We co..." ] }
1707.05300
2737215407
Many relevant tasks require an agent to reach a certain state, or to manipulate objects into a desired configuration. For example, we might want a robot to align and assemble a gear onto an axle or insert and turn a key in a lock. These goal-oriented tasks present a considerable challenge for reinforcement learning, since their natural reward function is sparse and prohibitive amounts of exploration are required to reach the goal and receive some learning signal. Past approaches tackle these problems by exploiting expert demonstrations or by manually designing a task-specific reward shaping function to guide the learning agent. Instead, we propose a method to learn these tasks without requiring any prior knowledge other than obtaining a single state in which the task is achieved. The robot is trained in reverse, gradually learning to reach the goal from a set of start states increasingly far from the goal. Our method automatically generates a curriculum of start states that adapts to the agent's performance, leading to efficient training on goal-oriented tasks. We demonstrate our approach on difficult simulated navigation and fine-grained manipulation problems, not solvable by state-of-the-art reinforcement learning methods.
Curriculum-based approaches with manually designed schedules have been explored in supervised learning to split particularly complex tasks into smaller, easier-to-solve sub-problems. One particular type of curriculum learning explicitly enables the learner to reject examples which it currently considers too hard. This type of adaptive curriculum has mainly been applied to supervised tasks, and most practical curriculum approaches in RL rely on pre-specified task sequences. Some very general frameworks have been proposed to generate increasingly hard problems, although challenges remain in applying the idea to difficult robotics tasks. A similar line of work uses intrinsic motivation based on learning progress to obtain "developmental trajectories" that focus on increasingly difficult tasks @cite_0 . Nevertheless, their method requires iteratively partitioning the full task space, which strongly limits its application to fine-grained manipulation tasks like the ones presented in our work (see the detailed analysis on easier tasks in @cite_4 ).
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2004303440", "2616430965" ], "abstract": [ "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills policies that solve a corresponding distribution of parameterized tasks goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics in a highly-redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. 
We show that (1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; (3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.", "Reinforcement learning is a powerful technique to train an agent to perform a task. However, an agent that is trained using reinforcement learning is only capable of achieving the single task that is specified via its reward function. Such an approach does not scale well to settings in which an agent needs to perform a diverse set of tasks, such as navigating to varying positions in a room or moving objects to varying locations. Instead, we propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing. We use a generator network to propose tasks for the agent to try to achieve, specified as goal states. The generator network is optimized using adversarial training to produce tasks that are always at the appropriate level of difficulty for the agent. Our method thus automatically produces a curriculum of tasks for the agent to learn. We show that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment. Our method can also learn to achieve tasks with sparse rewards, which traditionally pose significant challenges." ] }
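The competence-progress signal described in the SAGG-RIAC abstract above (select goals where learning is advancing fastest) can be sketched minimally. The window length, the use of an absolute difference, and the per-region success histories are illustrative assumptions, not the cited architecture:

```python
def competence_progress(history, window=5):
    """Learning progress for one goal region: recent success rate minus an
    earlier success rate (a simple proxy for the competence-progress
    measure used in intrinsically motivated goal exploration)."""
    if len(history) < 2 * window:
        return 0.0
    recent = sum(history[-window:]) / window
    earlier = sum(history[-2 * window:-window]) / window
    return abs(recent - earlier)

def pick_region(histories):
    """Choose the goal region whose competence is changing fastest;
    regions already mastered or still hopeless score near zero."""
    return max(range(len(histories)), key=lambda i: competence_progress(histories[i]))

# Region 0: never succeeds; region 1: recently started succeeding;
# region 2: always succeeds. Progress singles out region 1.
print(pick_region([[0] * 10, [0, 0, 0, 0, 0, 1, 1, 1, 1, 1], [1] * 10]))
```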
1707.05300
2737215407
Many relevant tasks require an agent to reach a certain state, or to manipulate objects into a desired configuration. For example, we might want a robot to align and assemble a gear onto an axle or insert and turn a key in a lock. These goal-oriented tasks present a considerable challenge for reinforcement learning, since their natural reward function is sparse and prohibitive amounts of exploration are required to reach the goal and receive some learning signal. Past approaches tackle these problems by exploiting expert demonstrations or by manually designing a task-specific reward shaping function to guide the learning agent. Instead, we propose a method to learn these tasks without requiring any prior knowledge other than obtaining a single state in which the task is achieved. The robot is trained in reverse, gradually learning to reach the goal from a set of start states increasingly far from the goal. Our method automatically generates a curriculum of start states that adapts to the agent's performance, leading to efficient training on goal-oriented tasks. We demonstrate our approach on difficult simulated navigation and fine-grained manipulation problems, not solvable by state-of-the-art reinforcement learning methods.
The idea of directly influencing the start state distribution to accelerate learning in a Markov Decision Process (MDP) has drawn attention in the past. Prior work studied the idea of exploiting access to a 'generative model' that allows training the policy on a fixed 'restart distribution' different from the one originally specified by the MDP. If properly chosen, this is proven to improve the policy training and final performance on the original start state distribution. Nevertheless, no practical procedure is given to choose this new distribution (only suggesting to use a more uniform distribution over states, which is what our baseline does), and adapting the start state distribution during training, as we do, is not considered. Other researchers have proposed to use expert demonstrations to improve learning of model-free RL algorithms, either by modifying the start state distribution to be uniform among states visited by the provided trajectories @cite_2 , or biasing the exploration towards relevant regions @cite_13 . Our method works without any expert demonstrations, so we do not compare against these lines of research.
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "2415726935", "2132714442" ], "abstract": [ "Reinforcement Learning (RL) has been effectively used to solve complex problems given careful design of the problem and algorithm parameters. However standard RL approaches do not scale particularly well with the size of the problem and often require extensive engineering on the part of the designer to minimize the search space. To alleviate this problem, we present a model-free policy-based approach called Exploration from Demonstration (EfD) that uses human demonstrations to guide search space exploration. We use statistical measures of RL algorithms to provide feedback to the user about the agent's uncertainty and use this to solicit targeted demonstrations useful from the agent's perspective. The demonstrations are used to learn an exploration policy that actively guides the agent towards important aspects of the problem. We instantiate our approach in a gridworld and a popular arcade game and validate its performance under different experimental conditions. We show how EfD scales to large problems and provides convergence speed-ups over traditional exploration and interactive learning methods.", "In this thesis we present a technique for the composition of robot control laws in dynamical environments. We propose a challenging robotic task, called Dynamical Pick and Place, in which a robot equipped with merely a soft paddle must capture and contain a ball, safely negotiate it past obstacles, and bring it to rest at a desired location. We develop a composition technique for local controllers that provides a formal guarantee of the stability of the switching behavior required in this task, and provide descriptive statistics of a working implementation. Our robotic system displays unusually dexterous behavior in the face of significant system noise, and recovers gracefully from large unexpected perturbations caused by the experimenters. 
Our approach to controller composition makes use of the funnel as a metaphor for asymptotic stability, is motivated by the pre-image backchaining techniques developed by Lozano-Perez, Mason and Taylor, and extends their ideas from quasi-static environments to systems with full dynamics. We introduce the concepts of \"dynamical obstacle avoidance\" and \"dynamical safety\" for systems with only intermittent control of their environment, and show that it is important not only that the system avoid obstacles directly, but also that the system will never reach an obstacle before getting another chance to effect control. The Dynamical Pick and Place problem addressed by this thesis is a difficult control problem, but an easy planning problem. The system we develop provides a way to engage more powerful AI planning tools without sacrificing access to the stability arguments of dynamical systems theory." ] }
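The adaptive start-state curriculum summarized in the record above can be sketched with a small toy. This is a hedged illustration only: the 1-D environment, the "skill radius" stand-in for a trained policy, and all constants below are invented for this sketch, not taken from the paper.

```python
import random

random.seed(0)
GOAL = 0.0

def success_prob(start, skill):
    # Stand-in for the unknown policy: starts near the goal are easy,
    # far ones hard; `skill` is the policy's current competence radius.
    return max(0.0, 1.0 - abs(start - GOAL) / skill)

def estimate_success(start, skill, n_rollouts=20):
    # Monte Carlo estimate of the success rate from noisy rollouts.
    return sum(random.random() < success_prob(start, skill)
               for _ in range(n_rollouts)) / n_rollouts

def reverse_curriculum(n_iters=15, r_min=0.1, r_max=0.9, step=0.3):
    skill = 0.5          # grows as the agent "trains" on good starts
    starts = [0.1]       # seed the curriculum right next to the goal
    frontier_history = []
    for _ in range(n_iters):
        # Brownian expansion: perturb known starts to propose harder ones.
        candidates = starts + [s + random.uniform(-step, step)
                               for s in random.choices(starts, k=30)]
        rated = [(s, estimate_success(s, skill)) for s in candidates]
        # Keep starts of intermediate difficulty, the learning frontier:
        # not trivially easy, not hopeless.
        frontier = [s for s, r in rated if r_min <= r <= r_max]
        starts = frontier or starts
        skill += 0.2     # training on the frontier improves the policy
        frontier_history.append(max(abs(s - GOAL) for s in starts))
    return skill, frontier_history

skill, history = reverse_curriculum()
```

With these toy dynamics the farthest retained start drifts steadily away from the goal as competence grows, mirroring the qualitative claim that the start distribution adapts to the agent's performance.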
1707.05301
2737686350
To date, self-driving experimental wheelchair technologies have been either inexpensive or robust, but not both. Yet, in order to achieve real-world acceptance, both qualities are fundamentally essential. We present a unique approach to achieve inexpensive and robust autonomous and semi-autonomous assistive navigation for existing fielded wheelchairs, of which there are approximately 5 million units in Canada and the United States alone. Our prototype wheelchair platform is capable of localization and mapping, as well as robust obstacle avoidance, using only a commodity RGB-D sensor and wheel odometry. As a specific example of the navigation capabilities, we focus on the single most common navigation problem: the traversal of narrow doorways in arbitrary environments. The software we have developed is generalizable to corridor following, desk docking, and other navigation tasks that are either extremely difficult or impossible for people with upper-body mobility impairments.
A wide variety of assistive wheelchair navigation systems have been developed over the past 30 years, and we provide only a brief survey of related literature. Low-cost solutions are generally either limited and semi-autonomous @cite_27 @cite_7 , require an external localization system @cite_8 @cite_10 , are able to operate over short distances only without global localization @cite_29 , or lack global planning capabilities @cite_28 . Fully autonomous navigation approaches typically rely on expensive laser scanner (lidar) sensors @cite_13 @cite_20 @cite_30 @cite_19 @cite_24 (which do have the advantage of being able to operate outdoors @cite_0 @cite_9 @cite_22 ). The goal of existing systems has usually been to provide a working solution in the target environment, without attempting to minimize cost.
{ "cite_N": [ "@cite_13", "@cite_30", "@cite_22", "@cite_7", "@cite_8", "@cite_28", "@cite_29", "@cite_9", "@cite_24", "@cite_19", "@cite_27", "@cite_0", "@cite_10", "@cite_20" ], "mid": [ "1766610279", "2009630762", "2288072177", "2963050952", "15570012", "1988268155", "", "", "2026176629", "2004337793", "2081883647", "2206106412", "1554290452", "2040003582" ], "abstract": [ "A device for conveying and separating articles which are moved, while in engagement with each other, by an input conveyor and are transferred by a periodically operating transfer member onto an output conveyor which operates in synchronism with the transfer member and which engages the articles during their conveying movement and conveys them with a speed which is greater than the speed of the input conveyor. The transfer member is composed of a first pusher which moves transverse to the conveying direction for transferring each item from the output end of the input conveyor onto the output conveyor, and a second pusher which is driven independently of the first pusher for holding back the item immediately following the item being transferred until the first pusher returns to its starting position.", "This paper outlines the development of a navigation system to give semi-autonomous operation of wheelchairs. Our goal is to develop a navigation system that is commercially viable and capable of being retro-fitted to powered electric wheelchairs in the near future. We introduce our proposed system, and discuss the three crucial components of any autonomous/semi-autonomous system: SLAM, obstacle avoidance, and path planning. Our system makes extensive use of freely available open-source software. Using The Player Project as the base framework, we use GMapping for mapping, Vector Field Histograms for obstacle avoidance, and Wavefront for path planning. 
Qualitative results are provided and discussed.", "In this paper, we present a system level approach to smart wheelchair system (SWS) navigation in urban environments. The proposed SWS ecosystem has two primary components: a mapping service which generates large-scale landmark maps, and the SWS vehicle itself, which is a client of the mapping service. The SWS prototype integrates 3D LIDAR imaging systems which provide robust perception in unstructured, outdoor environments. It also leverages these same sensors for map-based localization. In demonstrating the efficacy of the approach, the SWS navigated autonomously over a distance of more than 12 km in a representative urban environment without once losing localization, and without the use of GPS.", "This paper presents the architecture and implementation of a tele-presence wheelchair system based on tele-presence robot, intelligent wheelchair, and touch screen technologies. The tele-presence wheelchair system consists of a commercial electric wheelchair, an add-on tele-presence interaction module, and a touchable live video image based user interface (called TIUI). The tele-presence interaction module is used to provide video-chatting for an elderly or disabled person with the family members or caregivers, and also captures the live video of an environment for tele-operation and semi-autonomous navigation. The user interface developed in our lab allows an operator to access the system anywhere and directly touch the live video image of the wheelchair to push it as if he or she did it in the presence. This paper also discusses the evaluation of the user experience.", "The works on Intelligent Wheelchairs (IW) have been an active research area in Assistive Technologies since the first prototypes in the 1980’s. But more than 30 years after that promising start almost none (but a few) of those prototypes reached their final users. 
This work will try to analyze the problems that maintain those vehicles in the labs, far from the real people. Among the possible reasons for that, two main issues are the availability of suitable sensors (in terms of cost and reliability) and above all the lack of an appropriate standard platform in the market to build up an IW. In this paper we will focus our analysis on the platform problem. Under our point of view, it is mandatory to reduce the cost of any IW proposal by using a common framework from the more basic to the most advanced systems. To reach that goal, our proposal for an Intelligent Wheelchair framework takes advantage of using standard Electronics systems and Communication devices (USB, CAN, WiFi) in order to guarantee its compatibility with other electronic subsystems and services around the environment (i.e. location based services) and to be able to follow the continuous upgrade of such subsystems.", "In this paper, we present an approach to visual SLAM and human tracking for a wheelchair robot equipped with a Microsoft Kinect sensor, which is a novel sensing system that captures RGB and depth (RGB-D) images simultaneously. The speeded-up robust feature (SURF) algorithm is employed to provide the robust description of feature for environments and the target person from RGB images. Based on the environmental SURF features, we present the natural landmark based simultaneous localization and mapping with the extended Kalman filter using RGB-D data. Meanwhile, a depth clustering based human detection is proposed to extract human candidates. Accordingly, the target person tracking is achieved with an online learned RGB-D appearance model by integrating histogram orientation of gradient descriptor, color, depth, and position information from the body of the identified caregiver. Moreover, a fuzzy based controller provides dynamical human following for the wheelchair robot with a desired interval. 
Consequently, the experimental results demonstrated the effectiveness and feasibility in real world environments.", "", "", "Commercial electric-power wheelchairs have become much cheaper in recent years; likewise the availability of reduced size sensors at an affordable price has made their integration easier in this kind of vehicle. This paper presents the development of a smart navigation system applied to an electric wheelchair. The developed work falls within the Ambient Assisted Living field (AAL), which includes all the technologies whose aim is to improve the quality of life of people in the home environment, especially for the elderly and physically impaired. In particular, the present work is focused on the development of technological support aimed at improving the daily life of that population segment who has motor difficulties, and is forced to use personal mobility support systems such as a wheelchair. This system is able to localize the wheelchair while it is moving in an indoor environment. The system exploits low cost hardware and integrated open source software, which permit a cheap integration with already available electric wheelchairs.", "Reliable autonomous navigation is an active research topic that has drawn attention for decades; however, human factors such as navigational comfort have not received the same level of attention. This work proposes the concept of “comfortable map” and presents a navigation approach for autonomous passenger vehicles which on top of being safe and reliable is comfortable. In our approach we first extract information from users' preferences related to comfort while sitting on a robotic wheelchair under different conditions in an indoor corridor environment. Human-comfort factors are integrated into a geometric map generated by the SLAM framework. Then a global planner computes a safe and comfortable path which is followed by the robotic wheelchair. 
Finally, an evaluation with 29 participants using a fully autonomous robotic wheelchair showed that more than 90% of them found the proposed approach more comfortable than a shortest-path state of the art approach.", "Many people who are mobility impaired are, for a variety of reasons, incapable of using an ordinary wheelchair. In some instances, a power wheelchair also cannot be used, usually because of the difficulty the person has in controlling it (often due to additional disabilities). This paper describes two low-cost robotic wheelchair prototypes that assist the operator of the chair in avoiding obstacles, going to pre-designated places, and maneuvering through doorways and other narrow or crowded areas. These systems can be interfaced to a variety of input devices, and can give the operator as much or as little moment by moment control of the chair as they wish. This paper describes both systems, the evolution from one system to another, and the lessons learned.", "In this paper, we explore the use of synthesized landmark maps for absolute localization of a smart wheelchair system outdoors. In this paradigm, three-dimensional map data are acquired by an automobile equipped with high precision inertial GPS systems, in conjunction with light detection and ranging (LIDAR) systems, whose range measurements are subsequently registered to a global coordinate frame. The resulting map data are then synthesized a priori to identify robust, salient features for use as landmarks in localization. 
By leveraging such maps with landmark meta-data, robots possessing far lower cost sensor suites gain many of the benefits obtained from the higher fidelity sensors, but without the cost. We show that by using such a map-based localization approach, a smart wheelchair system outfitted only with a 2-D LIDAR and encoders was able to maintain accurate, global pose estimates outdoors over almost 1 km paths.", "In this paper, a robotic wheelchair using only a pan-tilt-zoom camera is proposed for developing a robotic system which enables a user to navigate in complex environment at low cost. For realizing the proposed system, a high accuracy visual marker, LentiBar, which is capable of achieving a high level of precision in positioning and orienting a robotic wheelchair, is utilized and its application to door crossing for a robotic wheelchair is presented. Since the visual marker is attached on the top of the target door through which the wheelchair is going to pass, the typical problems for achieving geometric features of the environment can be easily addressed. Door crossing navigation methods are also explained, which enable automatic and hybrid navigation of the robotic wheelchair. Finally, experimental results show the feasibility of the proposed navigation method and the robotic wheelchair system.", "In the present work, a robotic wheelchair navigation system which is specially designed for confined spaces is proposed. In confined spaces, the movements of wheelchairs are restricted by the environment more than other unicycle type vehicles. For example, if the wheelchair is too close to a wall, it cannot rotate freely because the front or back may collide with the wall. The navigation system is composed of a path planning module and a control module; both use the environment and robot information provided by a SLAM algorithm to attain their objectives. 
The planning strategy uses the Monte Carlo technique to find a minimum path within the confined environment and takes into account the variance propagation in the predicted path for ensuring the safe driving of the robot. The objective of the navigation system is to drive the robotic wheelchair within the confined environment in order to reach a desired orientation or posture." ] }
1707.05301
2737686350
To date, self-driving experimental wheelchair technologies have been either inexpensive or robust, but not both. Yet, in order to achieve real-world acceptance, both qualities are fundamentally essential. We present a unique approach to achieve inexpensive and robust autonomous and semi-autonomous assistive navigation for existing fielded wheelchairs, of which there are approximately 5 million units in Canada and the United States alone. Our prototype wheelchair platform is capable of localization and mapping, as well as robust obstacle avoidance, using only a commodity RGB-D sensor and wheel odometry. As a specific example of the navigation capabilities, we focus on the single most common navigation problem: the traversal of narrow doorways in arbitrary environments. The software we have developed is generalizable to corridor following, desk docking, and other navigation tasks that are either extremely difficult or impossible for people with upper-body mobility impairments.
The specific problem of doorway traversal, in particular, presents many challenges---no truly robust and cost-effective door detection and navigation system currently exists @cite_21 . Recent approaches utilize visual properties extracted from images, but such methods suffer from difficulties with lighting conditions as well as susceptibility to a variety of artifacts, leading to computationally demanding and unreliable solutions @cite_15 . For example, the three-camera visual approach in @cite_2 relies on edge detection and vanishing point identification to extract trapezoidal (door-like) structures in the environment; similarly, the method in @cite_16 uses two cameras to identify occupied and free space. Neither approach is robust to poor lighting or visual clutter. Further, the planning mechanisms implemented in both cases may be insufficient for tight doors because the chair may not begin in a pose that is well aligned for traversal. In addition, the approach in @cite_16 lacks a definitive mechanism for navigating the common case of a corridor on the other side of the door.
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_16", "@cite_2" ], "mid": [ "2125691069", "2108758759", "2472930703", "2013250853" ], "abstract": [ "We describe a probabilistic framework for detection and modeling of doors from sensor data acquired in corridor environments with mobile robots. The framework captures shape, color, and motion properties of door and wall objects. The probabilistic model is optimized with a version of the expectation maximization algorithm, which segments the environment into door and wall objects and learns their properties. The framework allows the robot to generalize the properties of detected object instances to new object instances. We demonstrate the algorithm on real-world data acquired by a Pioneer robot equipped with a laser range finder and an omni-directional camera. Our results show that our algorithm reliably segments the environment into walls and doors, finding both doors that move and doors that do not move. We show that our approach achieves better results than models that only capture behavior, or only capture appearance.", "Abstract— Several studies have shown that both children and adults benefit substantially from access to a means of independent mobility. While the needs of many individuals with disabilities can be satisfied with traditional manual or powered wheelchairs, a segment of the disabled community finds it difficult or impossible to use wheelchairs independently. To accommodate this population, researchers have used technologies originally developed for mobile robots to create “smart wheelchairs.” Smart wheelchairs have been the subject of research since the early 1980s and have been developed on four continents. This article presents a summary of the current state of the art and directions for future research. Key words: artificial intelligence, independent mobility, infrared range finder, laser range finder, machine vision, power wheelchairs, robotics, sonar, subsumption, voice control. 
INTRODUCTION Several studies have shown that both children and adults benefit substantially from access to a means of independent mobility, including power wheelchairs, manual wheelchairs, scooters, and walkers [1–2]. Independent mobility increases vocational and educational opportunities, reduces dependence on caregivers and family members, and promotes feelings of self-reliance. For young children, independent mobility serves as the foundation for much early learning [1]. Nonambulatory children lack access to the wealth of stimuli afforded self-ambulating children. This lack of exploration and control often produces a cycle of deprivation and reduced motivation that leads to learned helplessness [3]. For adults, independent mobility is an important aspect of self-esteem and plays a pivotal role in “aging in place.” For example, if older people find it increasingly difficult to walk or wheel themselves to the commode, they may do so less often or they may drink less fluid to reduce the frequency of urination. If they become unable to walk or wheel themselves to the commode and help is not routinely available in the home when needed, a move to a more enabling environment (e.g., assisted living) may be necessary. Mobility limitations are the leading cause of functional limitations among adults, with an estimated prevalence of 40 per 1,000 persons age 18 to 44 and 188 per 1,000 at age 85 and older [4]. Mobility difficulties are also strong predictors of activities of daily living (ADL) and instrumental ADL disabilities because of the need to", "In this paper we introduce CoPilot, an active driving aid that enables semi-autonomous, cooperative navigation of an electric powered wheelchair (EPW) for automated doorway detection and traversal. The system has been cleanly integrated into a commercially available EPW, and demonstrated with both joystick and head array interfaces. 
Leveraging the latest in 3D perception systems, we developed both feature and histogram-based approaches to the doorway detection problem. When coupled with a sample-based planner, success rates for automated doorway traversal approaching 100% were achieved.", "Navigating within an unknown indoor environment using an electric wheelchair is a challenging task, especially if the user suffers from severe disabilities. In order to reduce fatigability and increase autonomy, control architectures have to be designed that would assist users in wheelchair navigation. We present a framework for vision-based autonomous indoor navigation in an electric wheelchair capable of following corridors, and passing through open doorways using a single doorpost. Visual features extracted from cameras on board the wheelchair are used as inputs for image based controllers built in the wheelchair. It has to be noted that no a-priori information is utilized except for the assumption that the wheelchair moves in a typical indoor environment while the system is coarsely calibrated. The designed control schemes have been implemented onto a robotized wheelchair and experimental results show the robust behaviour of the designed system. We devise a visual servoing approach for indoor wheelchair navigation. The fundamental tasks of corridor following and doorway passing are tackled. Results in simulation and on a robotic platform show the convergence and validity of the control system. This work is proposed as a first step in developing the concept of semi-autonomous assistive systems." ] }
1707.05301
2737686350
To date, self-driving experimental wheelchair technologies have been either inexpensive or robust, but not both. Yet, in order to achieve real-world acceptance, both qualities are fundamentally essential. We present a unique approach to achieve inexpensive and robust autonomous and semi-autonomous assistive navigation for existing fielded wheelchairs, of which there are approximately 5 million units in Canada and the United States alone. Our prototype wheelchair platform is capable of localization and mapping, as well as robust obstacle avoidance, using only a commodity RGB-D sensor and wheel odometry. As a specific example of the navigation capabilities, we focus on the single most common navigation problem: the traversal of narrow doorways in arbitrary environments. The software we have developed is generalizable to corridor following, desk docking, and other navigation tasks that are either extremely difficult or impossible for people with upper-body mobility impairments.
One possible alternative to visual sensing is to employ active scanning devices. In @cite_23 , multiple 2D laser scanners are used to match input data to three potential doorway configurations. While successful in benign environments, 2D lasers are unable to detect objects above and below the plane of the laser, resulting in false positives. For example, desks and chairs may be detected as doors. Planar laser-based methods also are unable to verify that a detected doorway is traversable along the vertical axis.
{ "cite_N": [ "@cite_23" ], "mid": [ "2104058278" ], "abstract": [ "This paper proposes a solution to the cross-a-door problem in unknown environments for a robotic wheelchair commanded through a Human-Machine Interface (HMI). The problem is solved by a dynamic path planning algorithm implementation based on successive frontier points determination. An adaptive trajectory tracking control based on the dynamic model of the robotic wheelchair is implemented on the vehicle to direct the wheelchair motion along the path in a smooth movement. An EKF feature-based SLAM is also implemented on the vehicle which gives an estimate of the wheelchair pose inside the environment. The SLAM allows the map reconstruction of the environment for safe navigation purposes. The whole system steers satisfactorily the wheelchair with smooth movements through common doorways which are narrow considering the size of the vehicle. Implementation results validating the proposal are also shown in this work." ] }
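As a concrete illustration of the planar-scan door detection this record critiques, here is a deliberately naive gap-finding heuristic. All parameters (chair width, wall distance, the synthetic scan) are invented for this sketch, and it is not any cited paper's algorithm; real systems must additionally handle the above-plane and below-plane failure modes noted above.

```python
import math

def find_door_gaps(ranges, angle_step_deg=1.0, wall_dist=2.0, min_width=0.7):
    # Naive 2-D scan heuristic: a "door" is a contiguous run of readings
    # noticeably deeper than the wall; its metric width is approximated by
    # the chord length of that angular run at the wall distance.
    gaps, start = [], None
    for i, r in enumerate(ranges + [0.0]):       # sentinel closes a trailing run
        if r > wall_dist and start is None:
            start = i
        elif r <= wall_dist and start is not None:
            span_deg = (i - start) * angle_step_deg
            width = 2 * wall_dist * math.sin(math.radians(span_deg) / 2)
            if width >= min_width:               # wide enough for the chair
                gaps.append((start, i - 1, round(width, 2)))
            start = None
    return gaps

# Synthetic scan: a wall at 2 m with a deep opening at indices 40-65.
scan = [2.0] * 40 + [5.0] * 26 + [2.0] * 34
print(find_door_gaps(scan))   # one traversable gap
```

Note what this toy cannot see: a desk edge above the scan plane produces the same "deep run" signature as a real doorway, which is exactly the false-positive problem described in the record.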
1707.05487
2739244792
Wireless mobile backhaul networks have been proposed as a substitute in cases in which wired alternatives are not available due to economic or geographical reasons. In this work, we study the location problem of base stations in a given region where mobile terminals are distributed according to a certain probability density function and the base stations communicate through microwave backhaul links. Using results of optimal transport theory, we provide the optimal asymptotic distribution of base stations in the considered setting by minimizing the total power over the whole network.
Location games were introduced by Hotelling @cite_0 , who modeled the spatial competition along a street between two firms seeking to attract the largest number of customers, who are uniformly distributed. Problems similar to location games, such as the maximum capture problem, have been analyzed by @cite_4 @cite_5 and references therein.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "2036454494", "", "2038839332" ], "abstract": [ "After the work of the late Professor F. Y. Edgeworth one may doubt that anything further can be said on the theory of competition among a small number of entrepreneurs. However, one important feature of actual business seems until recently to have escaped scrutiny. This is the fact that of all the purchasers of a commodity, some buy from one seller, some from another, in spite of moderate differences of price. If the purveyor of an article gradually increases his price while his rivals keep theirs fixed, the diminution in volume of his sales will in general take place continuously rather than in the abrupt way which has tacitly been assumed.", "", "Abstract We give an overview of the research, models and literature about optimisation approaches to the problem of optimally locating one or more new facilities in an environment where competing facilities are already established." ] }
1707.05487
2739244792
Wireless mobile backhaul networks have been proposed as a substitute in cases in which wired alternatives are not available due to economic or geographical reasons. In this work, we study the location problem of base stations in a given region where mobile terminals are distributed according to a certain probability density function and the base stations communicate through microwave backhaul links. Using results of optimal transport theory, we provide the optimal asymptotic distribution of base stations in the considered setting by minimizing the total power over the whole network.
Within the communication networks community, @cite_9 @cite_11 studied the duopoly situation in the uplink scenario of a cellular network where users are placed on a line segment. Considering the particular cost structure that arises in the cellular context, the authors observe that complex cell shapes are obtained at equilibrium. @cite_2 @cite_6 @cite_1 analyzed the problem of associating mobile terminals to base stations using optimal transport theory, taking into account the data traffic congestion produced by this association.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_11" ], "mid": [ "2104005308", "2137960196", "2152949163", "2068957801", "2067723929" ], "abstract": [ "We study in this paper the question of determining locations of base stations (BSs) that may belong to the same or to competing service providers, taking into account the impact of these decisions on the behavior of intelligent mobile terminals who can connect to the base station that offers the best utility. We first study the SINR association-game: we determine the cells corresponding to each base station, i.e., the locations at which mobile terminals prefer to connect to a given base station rather than to others. The Signal to Interference and Noise Ratio (SINR) is used as the quantity that determines the association. We make some surprising observations: (i) displacing a base station a little in one direction may result in a displacement of the boundary of the corresponding cell to the opposite direction; (ii) A cell corresponding to a BS may be the union of disconnected sub-cells. We then study the Stackelberg equilibrium in the combined BS location and mobile association problem: we determine where to locate the BSs so as to maximize the revenues obtained at the induced SINR mobile association game. We consider the cases of single frequency band and two frequency bands of operation. Finally, we also consider Stackelberg equilibria in two frequency systems with successive interference cancellation.", "We consider the uplink mobile association game with a large number of mobile terminals. Traditional techniques consider the discrete modelization but these models lead to high combinatorial complexities.", "We study the mobile association problem: we determine the cells corresponding to each base station, i.e., the locations at which intelligent mobile terminals prefer to connect to a given base station rather than to others. 
This paper proposes a new approach based on optimal transport theory to characterize the solution based on previous works on fluid approximations. We characterize the optimal solution from both the global network and the individual user points of view, for the downlink setting.", "The classic optimal transportation problem consists in finding the most cost-effective way of moving masses from one set of locations to another, minimizing its transportation cost. The formulation of this problem and its solution have been useful to understand various mathematical, economical, and control theory phenomena, such as, e.g., Witsenhausen's counterexample in stochastic control theory, the principal-agent problem in microeconomic theory, location and planning problems, etc. In this work, we incorporate the effect of network congestion to the optimal transportation problem and we are able to find a closed form expression for its solution. As an application of our work, we focus on the mobile association problem in cellular networks (the determination of the cells corresponding to each base station). In the continuum setting, this problem corresponds to the determination of the locations at which mobile terminals prefer to connect (by also considering the congestion they generate) to a given base station rather than to other base stations. Two types of problems have been addressed: a global optimization problem for minimizing the total power needed by the mobile terminals over the whole network (global optimum), and a user optimization problem, in which each mobile terminal chooses to which base station to connect in order to minimize its own cost (user equilibrium). This work combines optimal transportation with strategic decision making to characterize both solutions.", "We study the question of determining locations of base stations (BSs) that may belong to the same or to competing service providers. 
We take into account the impact of these decisions on the behavior of intelligent mobile terminals that can connect to the base station that offers the best utility. The signal-to-interference-plus-noise ratio (SINR) is used as the quantity that determines the association. We first study the SINR association-game: We determine the cells corresponding to each base stations, i.e., the locations at which mobile terminals prefer to connect to a given base station than to others. We make some surprising observations: 1) displacing a base station a little in one direction may result in a displacement of the boundary of the corresponding cell to the opposite direction; 2) a cell corresponding to a BS may be the union of disconnected subcells. We then study the hierarchical equilibrium in the combined BS location and mobile association problem: We determine where to locate the BSs so as to maximize the revenues obtained at the induced SINR mobile association game. We consider the cases of single frequency band and two frequency bands of operation. Finally, we also consider hierarchical equilibria in two frequency systems with successive interference cancellation." ] }
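The max-SINR association rule described in these abstracts can be sketched numerically for users on a line segment. The base-station positions, transmit powers, path-loss exponent, and noise level below are invented for illustration and are not taken from the cited papers.

```python
import numpy as np

def sinr_association(users, bs_pos, bs_power, alpha=3.0, noise=1e-9):
    """For each user position, compute the SINR toward every base station
    (simple path-loss model) and associate with the BS offering the best
    SINR, as in the association game described above."""
    d = np.abs(users[:, None] - bs_pos[None, :]) + 1e-6  # avoid d = 0
    rx = bs_power[None, :] * d ** (-alpha)               # received powers
    total = rx.sum(axis=1, keepdims=True)
    sinr = rx / (total - rx + noise)  # interference = all other BSs
    return np.argmax(sinr, axis=1)

users = np.linspace(0.0, 1.0, 101)       # users on a unit line segment
bs_pos = np.array([0.3, 0.7])            # two competing base stations
bs_power = np.array([1.0, 4.0])          # asymmetric transmit powers
cells = sinr_association(users, bs_pos, bs_power)
```

With asymmetric powers the boundary between the two cells shifts toward the weaker base station, and with more stations the induced cells need not even be connected, which is the surprising behavior the cited papers analyze.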
1707.05254
2736608324
Explainable recommendation is an important task. Many methods have been proposed which generate explanations from the content and reviews written for items. When review text is unavailable, generating explanations is still a hard problem. In this paper, we illustrate how explanations can be generated in such a scenario by leveraging external knowledge in the form of knowledge graphs. Our method jointly ranks items and knowledge graph entities using a Personalized PageRank procedure to produce recommendations together with their explanations.
More recent research has focused on providing explanations that are extracted from user-written reviews of the items. @cite_16 extracted the phrases and sentiments expressed in reviews and used them to generate explanations. @cite_11 uses topics learned from the reviews as aspects of the item, and uses the topic distribution of the reviews to find useful or representative reviews.
{ "cite_N": [ "@cite_16", "@cite_11" ], "mid": [ "2152184085", "2061873838" ], "abstract": [ "Collaborative Filtering(CF)-based recommendation algorithms, such as Latent Factor Models (LFM), work well in terms of prediction accuracy. However, the latent features make it difficulty to explain the recommendation results to the users. Fortunately, with the continuous growth of online user reviews, the information available for training a recommender system is no longer limited to just numerical star ratings or user item features. By extracting explicit user opinions about various aspects of a product from the reviews, it is possible to learn more details about what aspects a user cares, which further sheds light on the possibility to make explainable recommendations. In this work, we propose the Explicit Factor Model (EFM) to generate explainable recommendations, meanwhile keep a high prediction accuracy. We first extract explicit product features (i.e. aspects) and user opinions by phrase-level sentiment analysis on user reviews, then generate both recommendations and disrecommendations according to the specific product features to the user's interests and the hidden features learned. Besides, intuitional feature-level explanations about why an item is or is not recommended are generated from the model. Offline experimental results on several real-world datasets demonstrate the advantages of our framework over competitive baseline algorithms on both rating prediction and top-K recommendation tasks. Online experiments show that the detailed explanations make the recommendations and disrecommendations more influential on user's purchasing behavior.", "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. 
For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews." ] }
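The idea of using per-review topic distributions to surface representative reviews can be sketched as follows. The toy topic distributions and the KL-divergence selection criterion are illustrative assumptions, not the cited paper's exact procedure.

```python
import numpy as np

def representative_review(review_topics):
    """Pick the review whose topic distribution is closest (in KL
    divergence) to the item's aggregate topic distribution, i.e. the
    review most representative of what the item is about."""
    review_topics = np.asarray(review_topics, dtype=float)
    item_dist = review_topics.mean(axis=0)   # item-level topic mixture
    eps = 1e-12                              # numerical guard for log
    kl = np.sum(item_dist * np.log((item_dist + eps) /
                                   (review_topics + eps)), axis=1)
    return int(np.argmin(kl))

# Three reviews over two topics; the middle one matches the item mixture.
best = representative_review([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
```

The returned index can then be shown to the user as an explanation alongside the recommendation.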
1707.05116
2949618385
Does normalization help Part-of-Speech (POS) tagging accuracy on noisy, non-canonical data? To the best of our knowledge, little is known on the actual impact of normalization in a real-world scenario, where gold error detection is not available. We investigate the effect of automatic normalization on POS tagging of tweets. We also compare normalization to strategies that leverage large amounts of unlabeled data kept in its raw form. Our results show that normalization helps, but does not add consistently beyond just word embedding layer initialization. The latter approach yields a tagging model that is competitive with a Twitter state-of-the-art tagger.
Finally, normalization for POS tagging is certainly not limited to non-canonical data stemming from social media. Indeed, another stream of related work focuses on historical data, usually originating from the 15th to the 18th century. The motivation is that, in order to apply current language processing tools, the texts first need to be normalized, since spelling has changed over time. POS tagging of previously normalized historical data has been investigated for English @cite_3 , German @cite_4 , and Dutch . In this latter work, different methods of 'translating' historical Dutch texts to modern Dutch are explored, and a vocabulary lookup-based approach appears to work best. Interestingly, this work also resulted in a shared task on normalization of historical Dutch, in which the secondary evaluation metric was POS tagging accuracy: https://ifarm.nl/clin2017st . In this paper we focus on normalization and POS tagging for Twitter data only.
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2250368612", "2295943005" ], "abstract": [ "This paper presents a method for part-ofspeech tagging of historical data and evaluates it on texts from different corpora of historical German (15th–18th century). Spelling normalization is used to preprocess the texts before applying a POS tagger trained on modern German corpora. Using only 250 manually normalized tokens as training data, the tagging accuracy of a manuscript from the 15th century can be raised from 28.65 to 74.89 .", "As more historical texts are digitized, there is interest in applying natural language processing tools to these archives. However, the performance of these tools is often unsatisfactory, due to language change and genre differences. Spelling normalization heuristics are the dominant solution for dealing with historical texts, but this approach fails to account for changes in usage and vocabulary. In this empirical paper, we assess the capability of domain adaptation techniques to cope with historical texts, focusing on the classic benchmark task of part-of-speech tagging. We evaluate several domain adaptation methods on the task of tagging Early Modern English and Modern British English texts in the Penn Corpora of Historical English. We demonstrate that the Feature Embedding method for unsupervised domain adaptation outperforms word embeddings and Brown clusters, showing the importance of embedding the entire feature space, rather than just individual words. Feature Embeddings also give better performance than spelling normalization, but the combination of the two methods is better still, yielding a 5 raw improvement in tagging accuracy on Early Modern English texts." ] }
1707.04757
2736867616
We present a new framework for formalizing mathematics in untyped set theory using auto2. Using this framework, we formalize in Isabelle FOL the entire chain of development from the axioms of set theory to the definition of the fundamental group for an arbitrary topological space. The auto2 prover is used as the sole automation tool, and enables succinct proof scripts throughout the project.
In Isabelle, the main library for formalized mathematics using FOL is Isabelle ZF. The basics of Isabelle ZF are described in @cite_7 @cite_16 . We also point to @cite_7 for a review of older work on set theory from the automated deduction and artificial intelligence communities. Outside the official library, IsarMathLib @cite_0 is a more recent project based on Isabelle ZF. It formalizes more results in abstract algebra and point-set topology, and also constructs the real numbers. The initial part of our development closely parallels that in Isabelle ZF, but we go further in several directions, including constructing the number system. The primary difference between our work and IsarMathLib is that we use auto2 for proofs and develop our own system for handling structures, so that we do not make use of Isabelle tactics, Isar, or locales.
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_7" ], "mid": [ "", "1990347565", "1968724907" ], "abstract": [ "", "A theory of recursive definitions has been mechanized in Isabelle's Zermelo-Fraenkel (ZF) set theory. The objective is to support the formalization of particular recursive definitions for use in verification, semantics proofs, and other computational reasoning.", "A logic for specification and verification is derived from the axioms of Zermelo-Fraenkel set theory. The proofs are performed using the proof assistant Isabelle. Isabelle is generic, supporting several different logics. Isabelle has the flexibility to adapt to variants of set theory. Its higher-order syntax supports the definition of new binding operators. Unknowns in subgoals can be instantiated incrementally. The paper describes the derivation of rules for descriptions, relations, and functions and discusses interactive proofs of Cantor's Theorem, the Composition of Homomorphisms challenge [9], and Ramsey's Theorem [5]. A generic proof assistant can stand up against provers dedicated to particular logics." ] }
1707.04757
2736867616
We present a new framework for formalizing mathematics in untyped set theory using auto2. Using this framework, we formalize in Isabelle FOL the entire chain of development from the axioms of set theory to the definition of the fundamental group for an arbitrary topological space. The auto2 prover is used as the sole automation tool, and enables succinct proof scripts throughout the project.
Outside Isabelle, the major formalization projects using set theory include Metamath @cite_15 and Mizar @cite_12 , both of which have extensive mathematical libraries. There are some recent efforts to reproduce the Mizar environment in HOL-type systems @cite_4 @cite_5 . While there are some similarities between our framework and Mizar's, we do not aim for an exact reproduction. In particular, we maintain the typical style of stating definitions and theorems in Isabelle. More comparisons between our approach and Mizar are discussed in Section .
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_4", "@cite_12" ], "mid": [ "2188893596", "1567130674", "2293613288", "1789208025" ], "abstract": [ "The Mizar system is a system for formalization of mathematics. It contains a relatively sophisticated and rich type system, which makes formalization of mathematics in Mizar more intuitive than in other systems. On the other hand, the Mizar type system is very complex and together with obscure implementation of the Mizar verifier there is concern about correctness of the Mizar system. One of the possible solutions is to translate the Mizar Math Library (MML) to other systems for formalization of mathematics and reverify MML in them. The HOL Light system has been chosen and the necessary first step of proposed translation is to reconstruct the Mizar type system in the HOL Light system, which is the aim of the presented work. The reconstruction is not easy because of complexity of the Mizar type system and dierent types of logic used in both systems. The basic idea is to represent Mizar types as predicates and express the dynamic part of the Mizar type system as proved theorems in HOL Light. The reconstruction was partialy implemented. The paper gives necessary introduction and summarizes main principles and obstacles encountered during reconstruction.", "", "In this paper we explore the possibility of emulating the Mizar environment as close as possible inside the Isabelle logical framework. We introduce adaptations to the Isabelle FOL object logic that correspond to the logic of Mizar, as well as Isar inner syntax notations that correspond to these of the Mizar language. We show how Isabelle types can be used to differentiate between the syntactic categories of the Mizar language, such as sets and Mizar types including modes and attributes, and show how they interact with the basic constructs of the Tarski-Grothendieck set theory. 
We discuss Mizar definitions and provide simple abbreviations that allow the introduction of Mizar predicates, functions, attributes and modes using the Isabelle Pure language elements for introducing definitions and theorems. We finally consider the definite and indefinite description operators in Mizar and their use to introduce definitions by “means” and “equals”. We demonstrate the usability of the environment on a sample Mizar-style formalization, with cluster inferences and “by” steps performed manually.", "This paper is intended to be a practical reference manual for basic Mizar terminology which may be helpful to get started using the system. The paper describes most important aspects of the Mizar language as well as some features of the verification software." ] }
1707.04757
2736867616
We present a new framework for formalizing mathematics in untyped set theory using auto2. Using this framework, we formalize in Isabelle FOL the entire chain of development from the axioms of set theory to the definition of the fundamental group for an arbitrary topological space. The auto2 prover is used as the sole automation tool, and enables succinct proof scripts throughout the project.
Mizar formalized not just the definition of the fundamental group @cite_8 , but several of its properties, including the computation of the fundamental group of the circle. There is also a formalization of path homotopy in HOL Light which is then ported to Isabelle HOL. This is used for the proof of the Brouwer fixed-point theorem and the Cauchy integral theorem, although the fundamental group itself does not appear to be constructed.
{ "cite_N": [ "@cite_8" ], "mid": [ "2337947764" ], "abstract": [ "We adopt the following convention: p, q, x, y are real numbers and n is a natural number. Next we state a number of propositions: (1) Let G, H be groups and h be a homomorphism from G to H. If h ·h−1 = idH and h−1 · h = idG, then h is an isomorphism. (2) For every subset X of I and for every point a of I such that X = ]a, 1] holds Xc = [0, a]. The paper was written during author’s post-doctoral fellowship granted by Shinshu University, Japan. This work has been partially supported by the CALCULEMUS grant HPRN-CT-200000102 and KBN grant 4 T11C 039 24." ] }
1707.04757
2736867616
We present a new framework for formalizing mathematics in untyped set theory using auto2. Using this framework, we formalize in Isabelle FOL the entire chain of development from the axioms of set theory to the definition of the fundamental group for an arbitrary topological space. The auto2 prover is used as the sole automation tool, and enables succinct proof scripts throughout the project.
In homotopy type theory, one can work with fundamental groups (and higher homotopy groups) using synthetic definitions. This has led to formalizations of results about homotopy groups that are well beyond what can be achieved today using standard definitions (see @cite_9 for a recent example). We emphasize that our definition of the fundamental group, like Mizar's, follows the standard one in set theory.
{ "cite_N": [ "@cite_9" ], "mid": [ "2472013890" ], "abstract": [ "The goal of this thesis is to prove that π4(S3) ≃ Z 2Z in homotopy type theory. In particular it is a constructive and purely homotopy-theoretic proof. We first recall the basic concepts of homotopy type theory, and we prove some well-known results about the homotopy groups of spheres: the computation of the homotopy groups of the circle, the triviality of those of the form πk(Sn) with k < n, and the construction of the Hopf fibration. We then move to more advanced tools. In particular, we define the James construction which allows us to prove the Freudenthal suspension theorem and the fact that there exists a natural number n such that π4(S3) ≃ Z nZ. Then we study the smash product of spheres, we construct the cohomology ring of a space, and we introduce the Hopf invariant, allowing us to narrow down the n to either 1 or 2. The Hopf invariant also allows us to prove that all the groups of the form π4n−1(S2n) are infinite. Finally we construct the Gysin exact sequence, allowing us to compute the cohomology of CP2 and to prove that π4(S3) ≃ Z 2Z and that more generally πn+1(Sn) ≃ Z 2Z for every n ≥ 3" ] }
1707.04796
2951595849
Deep neural network (DNN) architectures have been shown to outperform traditional pipelines for object segmentation and pose estimation using RGBD data, but the performance of these DNN pipelines is directly tied to how representative the training data is of the true data. Hence a key requirement for employing these methods in practice is to have a large set of labeled data for your specific robotic manipulation task, a requirement that is not generally satisfied by existing datasets. In this paper we develop a pipeline to rapidly generate high quality RGBD data with pixelwise labels and object poses. We use an RGBD camera to collect video of a scene from multiple viewpoints and leverage existing reconstruction techniques to produce a 3D dense reconstruction. We label the 3D reconstruction using a human assisted ICP-fitting of object meshes. By reprojecting the results of labeling the 3D scene we can produce labels for each RGBD image of the scene. This pipeline enabled us to collect over 1,000,000 labeled object instances in just a few days. We use this dataset to answer questions related to how much training data is required, and of what quality the data must be, to achieve high performance from a DNN architecture.
Although the approach is not capable of generating the 6 DOF poses of objects, a relevant method for per-pixel labeling is described in @cite_20 . They employ an automated data collection pipeline whose key idea is background subtraction. Two images are taken with the camera at the exact same location: in the first, no object is present, while in the second it is. Background subtraction then automatically yields a pixelwise segmentation of the object. Using this approach they generate 130,000 labeled images for their 39 objects. As a pixelwise labeling method, this approach has a few drawbacks. The first is that, in order to apply background subtraction, only a single object can be present in each scene. In particular, there are no training images with occlusions. They could in theory extend the method to support multi-object scenes by adding objects one by one, but this presents practical challenges. Secondly, the approach requires an accurately calibrated robot arm to move the camera in a repeatable way. A benefit of the method, however, is that it enables pixelwise labeling of even deformable objects.
{ "cite_N": [ "@cite_20" ], "mid": [ "2524923337" ], "abstract": [ "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC). A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multi-view RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th- place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at this http URL" ] }
1707.04796
2951595849
Deep neural network (DNN) architectures have been shown to outperform traditional pipelines for object segmentation and pose estimation using RGBD data, but the performance of these DNN pipelines is directly tied to how representative the training data is of the true data. Hence a key requirement for employing these methods in practice is to have a large set of labeled data for your specific robotic manipulation task, a requirement that is not generally satisfied by existing datasets. In this paper we develop a pipeline to rapidly generate high quality RGBD data with pixelwise labels and object poses. We use an RGBD camera to collect video of a scene from multiple viewpoints and leverage existing reconstruction techniques to produce a 3D dense reconstruction. We label the 3D reconstruction using a human assisted ICP-fitting of object meshes. By reprojecting the results of labeling the 3D scene we can produce labels for each RGBD image of the scene. This pipeline enabled us to collect over 1,000,000 labeled object instances in just a few days. We use this dataset to answer questions related to how much training data is required, and of what quality the data must be, to achieve high performance from a DNN architecture.
The SceneNN @cite_2 and ScanNet @cite_15 data generation pipelines share some features with our method. Both use an RGBD sensor to produce a dense 3D reconstruction and then perform annotations in 3D. However, since SceneNN and ScanNet are focused on producing datasets for RGBD scene understanding tasks, the type of annotation needed is quite different. In particular, their methods provide pixelwise segmentation into generic object classes (floor, wall, couch, etc.). Neither SceneNN nor ScanNet has geometric models for the specific objects in a scene, and thus they cannot provide 6DOF object poses. Whereas ScanNet and SceneNN focus on producing datasets for benchmarking scene understanding algorithms, we provide a pipeline that enables rapid generation of labeled data for your particular application and object set.
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "2950493473", "2563685048" ], "abstract": [ "A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available -- current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval. The dataset is freely available at this http URL.", "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http: www.scenenn.net." ] }
1707.04796
2951595849
Deep neural network (DNN) architectures have been shown to outperform traditional pipelines for object segmentation and pose estimation using RGBD data, but the performance of these DNN pipelines is directly tied to how representative the training data is of the true data. Hence a key requirement for employing these methods in practice is to have a large set of labeled data for your specific robotic manipulation task, a requirement that is not generally satisfied by existing datasets. In this paper we develop a pipeline to rapidly generate high quality RGBD data with pixelwise labels and object poses. We use an RGBD camera to collect video of a scene from multiple viewpoints and leverage existing reconstruction techniques to produce a 3D dense reconstruction. We label the 3D reconstruction using a human assisted ICP-fitting of object meshes. By reprojecting the results of labeling the 3D scene we can produce labels for each RGBD image of the scene. This pipeline enabled us to collect over 1,000,000 labeled object instances in just a few days. We use this dataset to answer questions related to how much training data is required, and of what quality the data must be, to achieve high performance from a DNN architecture.
There have been a wide variety of methods to estimate object poses for manipulation. A challenge is object specificity. @cite_26 and @cite_20 are both state-of-the-art pipelines for estimating object poses from RGBD images in clutter: both use RGB pixelwise segmentation neural networks (trained on the datasets described in the previous section) to crop point clouds, which are then fed into ICP-based algorithms that estimate object poses by registering against known prior meshes. Another approach is to learn pose estimation directly @cite_0 . The upcoming SIXD Challenge 2017 @cite_27 will provide a comparison of state-of-the-art methods for 6DOF pose estimation on a common dataset. The challenge dataset contains RGBD images annotated with ground-truth 6DOF object poses. This is exactly the type of data produced by our pipeline, and we aim to submit our dataset to the 2018 challenge. There is also a trend in manipulation research to bypass object pose estimation and work directly with the raw sensor data @cite_23 @cite_14 @cite_13 . Making these methods object-specific in clutter could be aided by using the pipeline presented here to train segmentation networks.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_0", "@cite_27", "@cite_23", "@cite_13", "@cite_20" ], "mid": [ "", "2290564286", "2059625476", "", "2964161785", "2953249127", "2524923337" ], "abstract": [ "", "This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93 in dense clutter. This is a 20 improvement compared to our prior work.", "Pose estimation of object is one of the key problems for the automatic-grasping task of robotics. In this paper, we present a new vision-based robotic grasping system, which can not only recognize different objects but also estimate their poses by using a deep learning model, finally grasp them and move to a predefined destination. The deep learning model demonstrates strong power in learning hierarchical features which greatly facilitates the recognition mission. We apply the Max-pooling Convolutional Neural Network (MPCNN), one of the most popular deep learning models, in this system, and assign different poses of objects as different classes in MPCNN. Besides, a new object detection method is also presented to overcome the disadvantage of the deep learning model. 
We have built a database comprised of 5 objects with different poses and illuminations for experimental performance evaluation. The experimental results demonstrate that our system can achieve high accuracy on object recognition as well as pose estimation. And the vision-based robotic system can grasp objects successfully regardless of different poses and illuminations.", "", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. 
We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93 on eight known objects with adversarial geometry and is 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp planner also has the highest success rate on a dataset of 10 novel rigid objects and achieves 99 precision (one false positive out of 69 grasps classified as robust) on a dataset of 40 novel household objects, some of which are articulated or deformable. Code, datasets, videos, and supplementary material are available at this http URL .", "Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC). A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multi-view RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th- place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. 
Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at this http URL" ] }
1707.04796
2951595849
Deep neural network (DNN) architectures have been shown to outperform traditional pipelines for object segmentation and pose estimation using RGBD data, but the performance of these DNN pipelines is directly tied to how representative the training data is of the true data. Hence a key requirement for employing these methods in practice is to have a large set of labeled data for your specific robotic manipulation task, a requirement that is not generally satisfied by existing datasets. In this paper we develop a pipeline to rapidly generate high quality RGBD data with pixelwise labels and object poses. We use an RGBD camera to collect video of a scene from multiple viewpoints and leverage existing reconstruction techniques to produce a 3D dense reconstruction. We label the 3D reconstruction using a human assisted ICP-fitting of object meshes. By reprojecting the results of labeling the 3D scene we can produce labels for each RGBD image of the scene. This pipeline enabled us to collect over 1,000,000 labeled object instances in just a few days. We use this dataset to answer questions related to how much training data is required, and of what quality the data must be, to achieve high performance from a DNN architecture.
While the research community is more familiar with the scale and variety of data needed for images in the style of ImageNet @cite_17 , the type of visual data that robots have available is much different from ImageNet-style images. Additionally, higher object specificity may be desired. In robotics contexts, there has been recent work on identifying the data requirements for achieving practical performance with deep visual models trained on simulation data @cite_24 @cite_22 @cite_28 @cite_8 , and specifically on augmenting small datasets of real data with large datasets of simulation data @cite_24 @cite_22 @cite_28 @cite_8 . We do not know of prior studies that have performed generalization experiments with the scale of real data used here.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_28", "@cite_24", "@cite_17" ], "mid": [ "2528963632", "", "2431874326", "2952787450", "2117539524" ], "abstract": [ "Deep learning has rapidly transformed the state of the art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human annotated training data. This time consuming process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. We demonstrate that a state of the art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. The source code and data to train and validate the networks described in this paper are made available for researchers.", "", "Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images, thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. 
In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.", "Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. 
Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1 3 of the CamVid training set outperform models trained on the complete CamVid training set.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
1707.05124
2738538981
We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were @math rounds in expectation for Erdős-Rényi random graphs by Calkin and Frieze [Random Struc. & Alg. '90] and @math rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of @math on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
The Maximal Independent Set (MIS) problem plays a central role in parallel and distributed computing @cite_11 @cite_13 , and has---due to its many applications in symmetry breaking @cite_20 ---been extensively studied for more than three decades @cite_14 @cite_20 @cite_5 @cite_9 @cite_19 @cite_21 @cite_10 @cite_3 @cite_0 @cite_15 @cite_7 @cite_12 @cite_8 . We refer to @cite_22 for a thorough review of the state of the art.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_19", "@cite_5", "@cite_15", "@cite_13", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2026143151", "2467514673", "2151690061", "1666479227", "1997802297", "2112009244", "2071759604", "2110963759", "", "2169102947", "", "2054910423", "1974040526", "", "2100061495", "" ], "abstract": [ "A parallel algorithm is presented that accepts as input a graph G and produces a maximal independent set of vertices in G . On a P-RAM without the concurrent write or concurrent read features, the algorithm executes in O ((log n ) 4 ) time and uses O (( n (log n )) 3 ) processors, where n is the number of vertices in G . The algorithm has several novel features that may find other applications. These include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a “dynamic pigeonhole principle” that generalizes the conventional pigeonhole principle.", "Symmetry-breaking problems are among the most well studied in the field of distributed computing and yet the most fundamental questions about their complexity remain open. In this article we work in the LOCAL model (where the input graph and underlying distributed network are identical) and study the randomized complexity of four fundamental symmetry-breaking problems on graphs: computing MISs (maximal independent sets), maximal matchings, vertex colorings, and ruling sets. A small sample of our results includes the following: —An MIS algorithm running in O(log2D Δ L 2√log n, and comes close to the Ω(flog Δ log log Δ lower bound of Kuhn, Moscibroda, and Wattenhofer. —A maximal matching algorithm running in O(log Δ + log 4log n) time. This is the first significant improvement to the 1986 algorithm of Israeli and Itai. Moreover, its dependence on Δ is nearly optimal. 
—A (Δ + 1)-coloring algorithm requiring O(log Δ + 2o(√log log n) time, improving on an O(log Δ + √log n)-time algorithm of Schneider and Wattenhofer. —A method for reducing symmetry-breaking problems in low arboricity degeneracy graphs to low-degree graphs. (Roughly speaking, the arboricity or degeneracy of a graph bounds the density of any subgraph.) Corollaries of this reduction include an O(√log n)-time maximal matching algorithm for graphs with arboricity up to 2√log n and an O(log 2 3n)-time MIS algorithm for graphs with arboricity up to 2(log n)1 3. Each of our algorithms is based on a simple but powerful technique for reducing a randomized symmetry-breaking task to a corresponding deterministic one on a poly(log n)-size graph.", "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. 
The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism.", "The Maximal Independent Set (MIS) problem is one of the basics in the study of locality in distributed graph algorithms. This paper presents a very simple randomized algorithm for this problem providing a near-optimal local complexity, which incidentally, when combined with some known techniques, also leads to a near-optimal global complexity. Classical MIS algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability1, all nodes terminate after O(log n) rounds. In contrast, our initial focus is on the local complexity, and our main contribution is to provide a very simple algorithm guaranteeing that each particular node v terminates after O(log deg(v) + log 1 e) rounds, with probability at least 1 -- e. The degree-dependency in this bound is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12; arXiv: 1202.1983v3], we2 get an MIS algorithm with a high probability global complexity of O(log Δ) + 2O([EQUATION]), where Δ denotes the maximum degree. This improves over the O(log2 Δ) + 2O([EQUATION]) result of , and gets close to the Ω(min log Δ, [EQUATION] ) lower bound of Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity, or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lovasz Local Lemma.", "The problem of constructing in parallel a maximal independent set of a given graph is considered. 
A new deterministic NC-algorithm implemented in the EREW PRAM model is presented. On graphs with n vertices and m edges, it uses @math processors and runs in @math time. This reduces by a factor of @math both the running time and the processor count of the previously fastest deterministic algorithm that solves the problem using a linear number of processors.", "This paper deals with distributed graph algorithms. Processors reside in the vertices of a graph G and communicate only with their neighbors. The system is synchronous and reliable, there is no limit on message lengths and local computation is instantaneous. The results: A maximal independent set in an n-cycle cannot be found faster than Ω(log* n) and this is optimal by [CV]. The d-regular tree of radius r cannot be colored with fewer than √d colors in time 2r 3. If Δ is the largest degree in G which has order n, then in time O(log*n) it can be colored with O(Δ2) colors.", "The following three problems concerning random graphs can be solved in (log n)O(1) expected time using linearly many processors: (1) finding the lexicographically first maximal independent set, (2) coloring the vertices using a number of colors that is almost surely within twice the chromatic number, and (3) finding a Hamiltonian circuit.", "A new parallel algorithm for the maximal independent set problem is constructed. It runs in @math time when implemented on a linear number of EREW-processors. This is the first deterministic algorithm for the maximal independent set problem (MIS) whose running time is polylogarithmic and whose processor-time product is optimal up to a polylogarithmic factor.", "", "This paper describes efficient deterministic techniques for breaking symmetry in parallel. These techniques work well on rooted trees and graphs of constant degree or genus. The primary technique allows us to 3-color a rooted tree in @math time on an EREW PRAM using a linear number of processors. 
These techniques are used to construct fast linear processor algorithms for several problems, including the problem of @math -coloring constant-degree graphs and 5-coloring planar graphs. Lower bounds for 2-coloring directed lists and for finding maximal independent sets in arbitrary graphs are also proved.", "", "This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. • In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math .", "An historical overview of computational complexity is presented. Emphasis is on the fundamental issues of defining the intrinsic computational complexity of a problem and proving upper and lower bounds on the complexity of problems. Probabilistic and parallel computation are discussed.", "", "Two basic design strategies are used to develop a very simple and fast parallel algorithms for the maximal independent set (MIS) problem. The first strategy consists of assigning identical copies of a simple algorithm to small local portions of the problem input. The algorithm is designed so that when the copies are executed in parallel the correct problem output is produced very quickly. 
A very simple Monte Carlo algorithm for the MIS problem is presented which is based upon this strategy. The second strategy is a general and powerful technique for removing randomization from algorithms. This strategy is used to convert the Monte Carlo algorithm for this MIS problem into a simple deterministic algorithm with the same parallel running time.", "" ] }
1707.05124
2738538981
We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were @math rounds in expectation for Erdős-Rényi random graphs by Calkin and Frieze [Random Struc. & Alg. '90] and @math rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of @math on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
This algorithm is particularly easy to implement and requires only a small amount of communication. Indeed, a vertex only needs to inform its neighbors about its position in the random order and then, in the round of its removal from the graph, about its decision (whether to join the MIS). For practical implementation-related details, we refer to @cite_7 .
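Because the algorithm is so simple, its round-counting behavior is easy to simulate. Below is a minimal, sequential Python sketch (not code from the cited works; the function name and graph representation are our own choices for illustration): a random order is fixed once, and in each round every remaining vertex that precedes all of its remaining neighbors ("local minima") joins the MIS and is removed together with its neighbors.

```python
import random

def parallel_greedy_mis_rounds(adj, seed=0):
    """Simulate parallel randomized greedy MIS.

    adj: dict mapping each vertex to a set of its neighbors.
    Returns (mis, rounds): the computed maximal independent set and
    the number of rounds until the graph is empty.
    """
    rng = random.Random(seed)
    vertices = list(adj)
    # One uniformly random order, fixed for all rounds.
    order = rng.sample(vertices, len(vertices))
    rank = {v: i for i, v in enumerate(order)}
    remaining = set(vertices)
    live = {v: set(adj[v]) for v in vertices}  # neighbors still in the graph
    mis, rounds = set(), 0
    while remaining:
        rounds += 1
        # All "local minima" join the MIS simultaneously in this round.
        winners = {v for v in remaining
                   if all(rank[v] < rank[u] for u in live[v])}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= live[v]
        remaining -= removed
        for v in remaining:
            live[v] -= removed
    return mis, rounds
```

Since the global minimum of the order among the remaining vertices is always a winner, each round makes progress and the loop terminates; the returned set coincides with the output of the sequential greedy algorithm on the same order.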
{ "cite_N": [ "@cite_7" ], "mid": [ "2151690061" ], "abstract": [ "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism." ] }
1707.05124
2738538981
We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were @math rounds in expectation for Erdős-Rényi random graphs by Calkin and Frieze [Random Struc. & Alg. '90] and @math rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of @math on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
For general graphs, Blelloch, Fineman, and Shun @cite_7 proved 5 years ago that w.h.p. @math rounds are enough. (As standard, "with high probability", abbreviated as w.h.p., indicates a probability at least @math , for any desirably large constant @math .) The authors thus stated as one of their main open questions whether the analysis of parallel greedy MIS's round complexity can be improved to @math .
{ "cite_N": [ "@cite_7" ], "mid": [ "2151690061" ], "abstract": [ "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism." ] }
1707.05124
2738538981
We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were @math rounds in expectation for Erdős-Rényi random graphs by Calkin and Frieze [Random Struc. & Alg. '90] and @math rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of @math on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
The result of Calkin, Frieze, and Kučera @cite_6 proves that this is asymptotically best possible and we also provide an alternative short proof of the lower bound in appendix:lower . In appendix:implications , we present implications of thm:MIS for maximal matching and @math -vertex-coloring as well as the correlation clustering problem.
{ "cite_N": [ "@cite_6" ], "mid": [ "2006057641" ], "abstract": [ "We consider the parallel greedy algorithm of Coppersmith, Raghavan, and Tompa (Proc. of 28th Annual IEEE Symp. on Foundations of Computer Science, pp. 260–269, 1987) for finding the lexicographically first maximal independent set of a graph. We prove an Ω(log n) bound on the expected number of iterations for most edge densities. This complements the O(log n) bound proved in Calkin and Frieze (Random Structures and Algorithms, Vol. 1, pp. 39–50, 1990). © 1992 Wiley Periodicals, Inc." ] }
1707.05124
2738538981
We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were @math rounds in expectation for Erdős-Rényi random graphs by Calkin and Frieze [Random Struc. & Alg. '90] and @math rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of @math on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
It is a well-known fact that the removal of all local minima along with their neighbors from a graph for a random order in expectation leads to a constant factor decrease in the total number of edges (see, e.g., @cite_17 for a simple proof). When---as is the case in Luby's algorithm---the random order is regenerated in every iteration, repeated application of this argument directly yields an upper bound of @math on the round complexity. However, if the order is kept fixed between rounds, then the order of the remaining vertices is no longer uniformly distributed.
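The fixed-order round structure described above can be sketched in a few lines of Python (the adjacency-dict representation and the function name are illustrative, not taken from the cited works):

```python
import random

def greedy_mis_rounds(adj):
    """One fixed random order; each round adds all 'local minima'
    (vertices preceding every surviving neighbor in the order) to the
    MIS and deletes them together with their neighbors.
    adj: dict mapping each vertex to its set of neighbors."""
    order = {v: r for r, v in enumerate(random.sample(list(adj), len(adj)))}
    alive = set(adj)
    mis, rounds = set(), 0
    while alive:
        rounds += 1
        minima = {v for v in alive
                  if all(order[v] < order[u] for u in adj[v] if u in alive)}
        mis |= minima
        dead = set(minima)
        for v in minima:
            dead |= adj[v] & alive
        alive -= dead
    return mis, rounds
```

Note that, unlike in Luby's algorithm, `order` is never resampled between rounds, which is exactly what makes the distribution of the remaining order nontrivial to analyze.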
{ "cite_N": [ "@cite_17" ], "mid": [ "2737338020" ], "abstract": [ "We present a randomised distributed maximal independent set (MIS) algorithm for arbitrary graphs of size @math that halts in time @math with probability @math , each message containing @math bit: thus its bit complexity per channel is @math (the bit complexity is the number of bits we need to solve a distributed task, it measures the communication complexity). We assume that the graph is anonymous: unique identities are not available to distinguish the processes; we only assume that each vertex distinguishes between its neighbours by locally known channel names. Furthermore we do not assume that the size (or an upper bound on the size) of the graph is known. This algorithm is optimal (modulo a multiplicative constant) for the bit complexity and improves the best previous randomised distributed MIS algorithms (deduced from the randomised PRAM algorithm due to Luby) for general graphs which is @math per channel (it halts in time @math and the size of each message is @math ). This result is based on a powerful and general technique for converting unrealistic exchanges of messages containing real numbers drawn at random on each vertex of a network into exchanges of bits. Then we consider a natural question: what is the impact of a vertex inclusion in the MIS on distant vertices? We prove that this impact vanishes rapidly as the distance grows for bounded-degree vertices. We provide a counter-example that shows this result does not hold in general. We prove also that these results remain valid for Luby's algorithm presented by Lynch and by Wattenhofer. This question remains open for the variant given by Peleg." ] }
1707.05124
2738538981
We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were @math rounds in expectation for Erdős-Rényi random graphs by Calkin and Frieze [Random Struc. & Alg. '90] and @math rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of @math on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
To overcome this problem of dependencies among different iterations, Blelloch, Fineman, and Shun @cite_7 ---inspired by an approach of @cite_21 and @cite_0 ---divide the algorithm into several phases. In each phase they only expose a prefix of the remaining order and run the parallel algorithm on these vertices only (whilst still deleting a vertex in the suffix if it is adjacent to a vertex added to the MIS). This way, in each phase the order among the unprocessed (but possibly already deleted) vertices in the suffix remains random, leading to a sequence of "independent" problems.
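A sequential sketch of this phase structure follows, assuming a fixed illustrative phase length (the actual prefix sizes chosen by Blelloch, Fineman, and Shun differ). Within a phase the parallel sub-rounds are collapsed into one in-order scan, which produces the same MIS because the algorithm computes the lexicographically first MIS of the exposed prefix:

```python
import random

def phased_greedy_mis(adj, phase_len=4):
    """Expose a random prefix of the not-yet-exposed vertices each phase;
    still-alive vertices of the prefix join the MIS in order, and their
    neighbors (including vertices in the unexposed suffix) are deleted."""
    order = random.sample(list(adj), len(adj))  # full random order, exposed in chunks
    alive = set(adj)
    mis = set()
    for start in range(0, len(order), phase_len):
        for v in order[start:start + phase_len]:  # newly exposed prefix
            if v in alive:
                mis.add(v)
                alive.discard(v)
                alive -= adj[v]  # deletions may reach into the unexposed suffix
    return mis
```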
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_7" ], "mid": [ "", "2071759604", "2151690061" ], "abstract": [ "", "The following three problems concerning random graphs can be solved in (log n)O(1) expected time using linearly many processors: (1) finding the lexicographically first maximal independent set, (2) coloring the vertices using a number of colors that is almost surely within twice the chromatic number, and (3) finding a Hamiltonian circuit.", "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism." ] }
1707.05101
2736404018
We study revenue optimization learning algorithms for repeated posted-price auctions where a seller interacts with a single strategic buyer that holds a fixed private valuation for a good and seeks to maximize his cumulative discounted surplus. For this setting, first, we propose a novel algorithm that never decreases offered prices and has a tight strategic regret bound in @math under some mild assumptions on the buyer surplus discounting. This result closes the open research question on the existence of a no-regret horizon-independent weakly consistent pricing. The proposed algorithm is inspired by our observation that a double decrease of offered prices in a weakly consistent algorithm is enough to cause a linear regret. This motivates us to construct a novel transformation that maps a right-consistent algorithm to a weakly consistent one that never decreases offered prices. Second, we outperform the previously known strategic regret upper bound of the algorithm PRRFES, where the improvement is achieved by means of a finer constant factor @math of the principal term @math in this upper bound. Finally, we generalize results on strategic regret previously known for geometric discounting of the buyer's surplus to discounting of other types, namely: the optimality of the pricing PRRFES to the case of geometrically concave decreasing discounting; and linear lower bound on the strategic regret of a wide range of horizon-independent weakly consistent algorithms to the case of arbitrary discounts.
Most studies on online advertising auctions lie in the field of game theory @cite_13 @cite_22 : a large part of them focused on characterizing different aspects of equilibria, and recent ones were devoted (but not limited) to: position auctions @cite_18 @cite_12 @cite_37 @cite_27 , different generalizations of second-price auctions @cite_44 @cite_25 , efficiency @cite_17 , mechanism expressiveness @cite_15 , competition across auction platforms @cite_3 , buyer budget @cite_40 , experimental analysis @cite_31 @cite_23 @cite_53 , etc.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_31", "@cite_22", "@cite_53", "@cite_3", "@cite_44", "@cite_27", "@cite_40", "@cite_23", "@cite_15", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2080122405", "", "2030876146", "", "1965611011", "1501018584", "2033101724", "", "1965002555", "2082943428", "2168796947", "", "2127176678", "2019762987", "" ], "abstract": [ "We describe two auction forms for search engine advertising and present two simple theoretical results concerning i) the estimation of click-through rates and ii) how to adjust the auctions for broad match search. We also describe some of the practical issues involved in implementing a VCG auction.", "", "We present the results of a large field experiment on setting reserve prices in auctions for online advertisements, guided by the theory of optimal auction design suitably adapted to the sponsored search setting. Consistent with the theory, following the introduction of new reserve prices revenues in these auctions have increased substantially.", "", "We performed controlled experiments of human participants in a continuous sequence of ad auctions, similar to those used by Internet companies. The goal of the research was to understand users' strategies in making bids. We studied the behavior under two auction types: (1) the Generalized Second-Price (GSP) auction and (2) the Vickrey--Clarke--Groves (VCG) payment rule, and manipulated also the participants' knowledge conditions: (1) explicitly given valuations and (2) payoff information from which valuations could be deduced. We found several interesting behaviors, among them are: No convergence to equilibrium was detected; moreover the frequency with which participants modified their bids increased with time. We can detect explicit \"better-response\" behavior rather than just mixed bidding. 
While bidders in GSP auctions do strategically shade their bids, they tend to bid higher than theoretically predicted by the standard VCG-like equilibrium of GSP. Bidders who are not explicitly given their valuations but can only deduce them from their gains behave a little less \"precisely\" than those with such explicit knowledge, but mostly during an initial learning phase. VCG and GSP yield approximately the same (high) social welfare, but GSP tends to give higher revenue.", "We present a two-stage model of competing ad auctions. Search engines attract users via Cournot-style competition. Meanwhile, each advertiser must pay a participation cost to use each ad platform, and advertiser entry strategies are derived using symmetric Bayes-Nash equilibrium that lead to the VCG outcome of the ad auctions. Consistent with our model of participation costs, we find empirical evidence that multi-homing advertisers are larger than single-homing advertisers. We then link our model to search engine market conditions: We derive comparative statics on consumer choice parameters, presenting relationships between market share, quality, and user welfare. We also analyze the prospect of joining auctions to mitigate participation costs, and we characterize when such joins do and do not increase welfare.", "In sponsored search, a number of advertising slots is available on a search results page, and have to be allocated among a set of advertisers competing to display an ad on the page. This gives rise to a bipartite matching market that is typically cleared by the way of an automated auction. Several auction mechanisms have been proposed, with variants of the Generalized Second Price (GSP) being widely used in practice. There is a rich body of work on bipartite matching markets that builds upon the stable marriage model of Gale and Shapley and the assignment model of Shapley and Shubik. 
This line of research offers deep insights into the structure of stable outcomes in such markets and their incentive properties. In this paper, we model advertising auctions in terms of an assignment model with linear utilities, extended with bidder and item specific maximum and minimum prices. Auction mechanisms like the commonly used GSP or the well-known Vickrey-Clarke-Groves (VCG) can be interpreted as simply computing a bidder-optimal stable matching in this model, for a suitably defined set of bidder preferences, but our model includes much richer bidders and preferences. We prove that in our model the existence of a stable matching is guaranteed, and under a non-degeneracy assumption a bidder-optimal stable matching exists as well. We give an algorithm to find such matching in polynomial time, and use it to design truthful mechanism that generalizes GSP, is truthful for profit-maximizing bidders, correctly implements features like bidder-specific minimum prices and position-specific bids, and works for rich mixtures of bidders and preferences. Our main technical contributions are the existence of bidder-optimal matchings and strategyproofness of the resulting mechanism, and are proved by induction on the progress of the matching algorithm.", "", "Targeted online advertising is a prime source of revenue for many Internet companies. It is a common industry practice to use a generalized second price auction mechanism to rank advertisements at every opportunity of an impression. This greedy algorithm is suboptimal for both advertisers and publishers when advertisers have a finite budget. In a greedy mechanism high performing advertisers tend to drop out of the auction marketplace fast and that adversely affects both the advertiser experience and the publisher revenue. We describe a method for improving such ad serving systems by including a budget pacing component that serves ads by being aware of global supply patterns. 
Such a system is beneficial for both advertisers and publishers. We demonstrate the benefits of this component using experiments we conducted on advertising at LinkedIn.", "We consider the optimization of revenue in advertising auctions based on the generalized second-price (GSP) paradigm, which has become a de facto standard. We examine several different GSP variants (including squashing and different types of reserve prices), and consider how to set their parameters optimally. One intriguing finding is that charging each advertiser the same per-click reserve price (\"unweighted reserve prices\") yields dramatically more revenue than the quality-weighted reserve prices that have become common practice. This result is robust, arising both from theoretical analysis and from two different kinds of computational experiments. We also identify a new GSP variant that is revenue optimal in restricted settings. Finally, we study how squashing and reserve prices interact, and how equilibrium selection affects the revenue of GSP when features such as reserves or squashing are applied.", "Auctions are widely used on the Web. Applications range from internet advertising to platforms such as eBay. In most of these applications the auctions in use are single multi-item auctions with unit demand. The main drawback of standard mechanisms for this type of auctions, such as VCG and GSP, is the limited expressiveness that they offer to the bidders. The General Auction Mechanism (GAM) of [1] is taking a first step towards addressing the problem of limited expressiveness by computing a bidder optimal, envy free outcome for linear utility functions with identical slopes and a single discontinuity per bidder-item pair. 
We show that in many practical situations this does not suffice to adequately model the preferences of the bidders, and we overcome this problem by presenting the first mechanism for piece-wise linear utility functions with non-identical slopes and multiple discontinuities. Our mechanism runs in polynomial time. Like GAM it is incentive compatible for inputs that fulfill a certain non-degeneracy requirement, but our requirement is more general than the requirement of GAM. For discontinuous utility functions that are non-degenerate as well as for continuous utility functions the outcome of our mechanism is a competitive equilibrium. We also show how our mechanism can be used to compute approximately bidder optimal, envy free outcomes for a general class of continuous utility functions via piece-wise linear approximation. Finally, we prove hardness results for even more expressive settings.", "", "We present a simple auction mechanism which extends the second-price auction with reserve and is truthful in expectation. This mechanism is particularly effective in private value environments where the distribution of valuations are irregular. Bidders can \"buy-it-now\", or alternatively \"take-a-chance\" where the top d bidders are equally likely to win. The randomized take-a-chance allocation incentivizes high valuation bidders to buy-it-now. We show that for a large class of valuations, this mechanism achieves similar allocations and revenues as Myerson's optimal mechanism, and outperforms the second-price auction with reserve. In addition, we present an evaluation of bid data from Microsoft's AdECN platform. We find the valuations are irregular, and counterfactual experiments suggest our BIN-TAC mechanism would improve revenue by 11 relative to an optimal second-price mechanism with reserve.", "I describe how search engines sell ad space using an auction. 
I analyze advertiser behavior in this context using elementary price theory and derive a simple way to estimate the producer surplus generated by online search advertising. It appears that the estimated value of online advertising tends to be between 2 and 2.3 times advertising expenditures. JEL: D44 (Auctions), D21 (Firm Behavior)", "" ] }
1707.05101
2736404018
We study revenue optimization learning algorithms for repeated posted-price auctions where a seller interacts with a single strategic buyer that holds a fixed private valuation for a good and seeks to maximize his cumulative discounted surplus. For this setting, first, we propose a novel algorithm that never decreases offered prices and has a tight strategic regret bound in @math under some mild assumptions on the buyer surplus discounting. This result closes the open research question on the existence of a no-regret horizon-independent weakly consistent pricing. The proposed algorithm is inspired by our observation that a double decrease of offered prices in a weakly consistent algorithm is enough to cause a linear regret. This motivates us to construct a novel transformation that maps a right-consistent algorithm to a weakly consistent one that never decreases offered prices. Second, we outperform the previously known strategic regret upper bound of the algorithm PRRFES, where the improvement is achieved by means of a finer constant factor @math of the principal term @math in this upper bound. Finally, we generalize results on strategic regret previously known for geometric discounting of the buyer's surplus to discounting of other types, namely: the optimality of the pricing PRRFES to the case of geometrically concave decreasing discounting; and linear lower bound on the strategic regret of a wide range of horizon-independent weakly consistent algorithms to the case of arbitrary discounts.
Studies on revenue maximization were devoted both to the seller revenue alone @cite_14 @cite_33 and to different sorts of trade-offs, either between several auction stakeholders @cite_36 @cite_24 @cite_46 or between auction characteristics (like revenue monotonicity @cite_24 , expressivity, and simplicity @cite_8 ). The optimization problem was generally reduced to a selection of proper quality scores for advertisements (for auctions with several advertisers @cite_14 @cite_33 ) or reserve prices for buyers (e.g., in VCG @cite_9 , GSP @cite_32 , and others @cite_36 @cite_1 ). The reserve prices, in such setups, usually depend on distributions of buyer bids or valuations and were in turn estimated by machine learning techniques @cite_33 @cite_21 @cite_1 , while alternative approaches learned reserve prices directly @cite_55 @cite_6 @cite_43 . In contrast to these works, we use an online deterministic learning approach for repeated auctions.
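As a toy illustration of the "direct" reserve-price learning mentioned above, one can pick the reserve maximizing empirical revenue of a second-price auction with reserve over logged bid profiles (restricting candidates to the observed bids is a standard reduction; the function name and data layout here are illustrative, not taken from the cited papers):

```python
def empirical_best_reserve(auctions):
    """auctions: list of bid lists, one per logged auction. Returns the
    reserve price (among observed bids) maximizing total revenue of a
    second-price auction with reserve."""
    def revenue(r, bids):
        top = sorted(bids, reverse=True)[:2] + [0.0]
        b1, b2 = top[0], top[1]
        return 0.0 if b1 < r else max(b2, r)  # no sale below the reserve
    candidates = sorted({b for bids in auctions for b in bids})
    return max(candidates,
               key=lambda r: sum(revenue(r, bids) for bids in auctions))
```

With a single truthful buyer per auction this recovers the classic monopoly-pricing trade-off between sale probability and price.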
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_8", "@cite_36", "@cite_9", "@cite_21", "@cite_1", "@cite_32", "@cite_55", "@cite_6", "@cite_24", "@cite_43", "@cite_46" ], "mid": [ "2008831863", "1680626568", "2189241046", "2088733959", "2029050771", "2087873422", "", "2058972815", "2950646558", "633128216", "2043770577", "1926689565", "" ], "abstract": [ "Displaying sponsored ads alongside the search results is a key monetization strategy for search engine companies. Since users are more likely to click ads that are relevant to their query, it is crucial for search engine to deliver the right ads for the query and the order in which they are displayed. There are several works investigating on how to learn a ranking function to maximize the number of ad clicks. In this paper, we address a new revenue optimization problem and aim to answer the question: how to construct a ranking model that can deliver high quality ads to the user as well as maximize search engine revenue? We introduce two novel methods from different machine learning perspectives, and both of them take the revenue component into careful considerations. The algorithms are built upon the click-through log data with real ad clicks and impressions. The extensively experimental results verify the proposed algorithm that can produce more revenue than other methods as well as avoid losing relevance accuracy. To provide deep insight into the importance of each feature to search engine revenue, we extract twelve basic features from four categories. The experimental study provides a feature ranking list according to the revenue benefit of each feature.", "Sponsored search is an important monetization channel for search engines, in which an auction mechanism is used to select the ads shown to users and determine the prices charged from advertisers. There have been several pieces of work in the literature that investigate how to design an auction mechanism in order to optimize the revenue of the search engine.
However, due to some unrealistic assumptions used, the practical values of these studies are not very clear. In this paper, we propose a novel game-theoretic machine learning approach, which naturally combines machine learning and game theory, and learns the auction mechanism using a bilevel optimization framework. In particular, we first learn a Markov model from historical data to describe how advertisers change their bids in response to an auction mechanism, and then for any given auction mechanism, we use the learnt model to predict its corresponding future bid sequences. Next we learn the auction mechanism through empirical revenue maximization on the predicted bid sequences. We show that the empirical revenue will converge when the prediction period approaches infinity, and a Genetic Programming algorithm can effectively optimize this empirical revenue. Our experiments indicate that the proposed approach is able to produce a much more effective auction mechanism than several baselines.", "This paper develops a general approach, rooted in statistical learning theory, to learning an approximately revenue-maximizing auction from data. We introduce t-level auctions to interpolate between simple auctions, such as welfare maximization with reserve prices, and optimal auctions, thereby balancing the competing demands of expressivity and simplicity. We prove that such auctions have small representation error, in the sense that for every product distribution F over bidders’ valuations, there exists a t-level auction with small t and expected revenue close to optimal. We show that the set of t-level auctions has modest pseudo-dimension (for polynomial t) and therefore leads to small learning error. 
One consequence of our results is that, in arbitrary single-parameter settings, one can learn a mechanism with expected revenue arbitrarily close to optimal from a polynomial number of samples.", "E-commerce web-sites such as Ebay as well as advertising exchanges (AdX) such as DoubleClick's, RightMedia, or AdECN work as intermediaries who sell items (e.g. page-views) on behalf of a seller (e.g. a publisher) to buyers on the opposite side of the market (e.g., advertisers). These platforms often use fixed-percentage sharing schemes, according to which (i) the platform runs an auction amongst buyers, and (ii) gives the seller a constant-fraction (e.g., 80%) of the auction proceeds. In these settings, the platform faces asymmetric information regarding both the valuations of buyers for the item (as in a standard auction environment) as well as about the seller's opportunity cost of selling the item. Moreover, platforms often face intense competition from similar market places, and such competition is likely to favor auction rules that secure high payoffs to sellers. In such an environment, what selling mechanism should platforms employ? Our goal in this paper is to study optimal mechanism design in settings plagued by competition and two-sided asymmetric information, and identify conditions under which the current practice of employing constant cuts is indeed optimal. In particular, we first show that for a large class of competition games, platforms behave in equilibrium as if they maximize a convex combination of seller's payoffs and platform's revenue, with weight α on the seller's payoffs (which is a proxy for the intensity of competition in the market). We generalize the analysis of Myerson and Satterthwaite (1983), and derive the optimal direct-revelation mechanism for each α. As expected, the optimal mechanism applies a reserve price which is decreasing in α. Next, we present an indirect implementation based on \"sharing schemes\".
We show that constant cuts are optimal if and only if the opportunity cost of the seller has a power-form distribution, and derive a simple formula for computing the optimal constant cut as a function of the sellers' distribution of opportunity costs, and the market competition proxy α. Finally, for completeness, we study the case of a seller's optimal auction with a fixed profit for the platform, and derive the optimal direct and indirect implementations in this setting.", "This paper considers the problem faced by a seller who has a single object to sell to one of several possible buyers, when the seller has imperfect information about how much the buyers might be willing to pay for the object. The seller's problem is to design an auction game which has a Nash equilibrium giving him the highest possible expected utility. Optimal auctions are derived in this paper for a wide class of auction design problems.", "Abstract Most search engines use the weighted Generalized Second Price (wGSP) auction to sell keyword-based text ads, generating billions of dollars of advertising revenue every year. Designing and implementing near-optimal reserve prices for these wGSP auctions are naturally important problems for both academia and industry. In this paper, we show how to calculate and implement the near-optimal reserve price of the wGSP mechanism in realistic settings. Unlike reserve prices in standard single-item auctions, optimal reserve prices in wGSP auctions are discriminatory, different even for advertisers bidding on the same keyword. The optimal reserve price results can be extended to support CPA CPC CPM 1 hybrid auctions. Our simulations indicate that setting a proper reserve price will transfer some bidder utility (payoff) to auctioneer utility, resulting in higher revenue for the search engine. 
We describe a practical methodology to implement optimal reserve prices in production systems.", "", "The Generalized Second Price (GSP) auction is the primary auction used for selling sponsored search advertisements. In this paper we consider the revenue of this auction at equilibrium. We prove that if agent values are drawn from identical regular distributions, then the GSP auction paired with an appropriate reserve price generates a constant fraction (1/6th) of the optimal revenue. In the full-information game, we show that any Nash equilibrium of the GSP auction obtains at least half of the revenue of the VCG mechanism excluding the payment of a single participant. This bound holds also with any reserve price, and is tight. Finally, we consider the tradeoff between maximizing revenue and social welfare. We introduce a natural convexity assumption on the click-through rates and show that it implies that the revenue-maximizing equilibrium of GSP in the full information model will necessarily be envy-free. In particular, it is always possible to maximize revenue and social welfare simultaneously when click-through rates are convex. Without this convexity assumption, however, we demonstrate that revenue may be maximized at a non-envy-free equilibrium that generates a socially inefficient allocation.", "Second-price auctions with reserve play a critical role for modern search engine and popular online sites since the revenue of these companies often directly depends on the outcome of such auctions. The choice of the reserve price is the main mechanism through which the auction revenue can be influenced in these electronic markets. We cast the problem of selecting the reserve price to optimize revenue as a learning problem and present a full theoretical analysis dealing with the complex properties of the corresponding loss function.
We further give novel algorithms for solving this problem and report the results of several experiments in both synthetic and real data demonstrating their effectiveness.", "We present an extensive analysis of the key problem of learning optimal reserve prices for generalized second price auctions. We describe two algorithms for this task: one based on density estimation, and a novel algorithm benefiting from solid theoretical guarantees and with a very favorable running-time complexity of @math , where @math is the sample size and @math the number of slots. Our theoretical guarantees are more favorable than those previously presented in the literature. Additionally, we show that even if bidders do not play at an equilibrium, our second algorithm is still well defined and minimizes a quantity of interest. To our knowledge, this is the first attempt to apply learning algorithms to the problem of reserve price optimization in GSP auctions. Finally, we present the first convergence analysis of empirical equilibrium bidding functions to the unique symmetric Bayesian-Nash equilibrium of a GSP.", "Online advertising is an essential part of the Internet and the main source of revenue for many web-centric firms such as search engines, social networks, and online publishers. A key component of online advertising is the auction mechanism which selects and prices the set of winning ads. This work is inspired by one of the biggest practical drawbacks of the widely popular Vickrey-Clarke-Groves (VCG) mechanism, which is the unique incentive-compatible mechanism that maximizes social welfare. It is known that VCG lacks a desired property of revenue monotonicity - a natural notion which states that the revenue of a mechanism shouldn't go down as the number of bidders increase or if the bidders increase their bids. 
Most firms which depend on online advertising revenue have a large sales team to attract more bidders on their inventory as the general belief is that more bidders will increase competition, and hence revenue. However, the lack of revenue monotonicity of VCG conflicts with this general belief and can be strategically confusing for the firm's business. In this work, we seek incentive-compatible mechanisms that are revenue-monotone. This natural property comes at the expense of social welfare - one can show that it is not possible to get incentive-compatibility, revenue-monotonicity, and optimal social welfare simultaneously. In light of this, we introduce the notion of Price of Revenue Monotonicity (PoRM) to capture the loss in social welfare of a revenue-monotone mechanism. We further study revenue-monotonicity for two important online advertising scenarios. First one is the text vs image ad auction where in an ad slot, one can either show a single image ad or a few text ads. Second one is the video-pod auction where we have a video advertising slot of k seconds which can be filled with multiple video ads. For the image-text auction, we give a mechanism that satisfies both RM and IC and achieves PoRM of ∑_{i=1}^{k} 1/i ≈ ln k. We also show that the PoRM of our mechanism is the best possible by proving a matching lower bound of ∑_{i=1}^{k} 1/i on the PoRM of any deterministic mechanism under some mild assumptions. For the video-pod auction, we give a mechanism that achieves a PoRM of (⌊log k⌋ + 1) ⋅ (2 + ln k).", "Many online companies sell advertisement space in second-price auctions with reserve. In this paper, we develop a probabilistic method to learn a profitable strategy to set the reserve price. We use historical auction data with features to fit a predictor of the best reserve price. This problem is delicate - the structure of the auction is such that a reserve price set too high is much worse than a reserve price set too low.
To address this we develop objective variables, a new framework for combining probabilistic modeling with optimal decision-making. Objective variables are \"hallucinated observations\" that transform the revenue maximization task into a regularized maximum likelihood estimation problem, which we solve with an EM algorithm. This framework enables a variety of prediction mechanisms to set the reserve price. As examples, we study objective variable methods with regression, kernelized regression, and neural networks on simulated and real data. Our methods outperform previous approaches both in terms of scalability and profit.", "" ] }
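The reserve-price abstracts above all rest on one simple mechanic: in a second-price auction with reserve r, the seller earns max(second bid, r) when the top bid clears r, and nothing otherwise, so revenue is non-monotone in r. A minimal sketch (not any of the papers' algorithms) of empirical reserve optimization by grid search over sampled bid profiles; the uniform bid distribution and sample sizes below are made up for illustration:

```python
import random

# Revenue of a single-item second-price auction with reserve r:
# if the top bid clears r, price is max(second bid, r); otherwise no sale.
def revenue(bids, r):
    top, second = sorted(bids, reverse=True)[:2]
    return max(second, r) if top >= r else 0.0

# Grid-search the reserve on a sample of hypothetical bid profiles
# (3 bidders, bids uniform on [0, 1] -- an illustrative assumption).
random.seed(0)
samples = [[random.uniform(0, 1) for _ in range(3)] for _ in range(2000)]

best_r = max((r / 100 for r in range(101)),
             key=lambda r: sum(revenue(b, r) for b in samples))
print(round(best_r, 2))  # should land near the monopoly reserve of 0.5
```

For uniform bidders the theoretical optimum is 0.5; the learning approaches above replace this brute-force grid with density estimation or direct loss minimization over historical auctions.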
1707.05000
2734940843
Both bottom-up and top-down strategies have been used for neural transition-based constituent parsing. The parsing strategies differ in terms of the order in which they recognize productions in the derivation tree, where bottom-up strategies and top-down strategies take post-order and pre-order traversal over trees, respectively. Bottom-up parsers benefit from rich features from readily built partial parses, but lack lookahead guidance in the parsing process; top-down parsers benefit from non-local guidance for local decisions, but rely on a strong encoder over the input to predict a constituent hierarchy before its construction. To mitigate both issues, we propose a novel parsing system based on in-order traversal over syntactic trees, designing a set of transition actions to find a compromise between bottom-up constituent information and top-down lookahead information. Based on stack-LSTM, our psycholinguistically motivated constituent parsing system achieves 91.8 F1 on the WSJ benchmark. Furthermore, the system achieves 93.6 F1 with supervised reranking and 94.2 F1 with semi-supervised reranking, which are the best results on the WSJ benchmark.
Our work is related to left-corner parsing. formalize this in automata theory, which has appeared frequently in the compiler literature. apply the strategy to parsing. Typical works investigate the transformation of syntactic trees based on left-corner rules @cite_6 @cite_11 @cite_5 . In contrast, we propose a novel general transition-based in-order constituent parsing system.
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_11" ], "mid": [ "2161626415", "1530801890", "2130300813" ], "abstract": [ "The frequency of words and syntactic constructions has been observed to have a substantial effect on language processing. This begs the question of what causes certain constructions to be more or less frequent. A theory of grounding (Phillips, 2010) would suggest that cognitive limitations might cause languages to develop frequent constructions in such a way as to avoid processing costs. This paper studies how current theories of working memory fit into theories of language processing and what influence memory limitations may have over reading times. Measures of such limitations are evaluated on eye-tracking data and the results are compared with predictions made by different theories of processing.", "The apparatus includes a cluster of knife blades which is moved up and down to slice the fruit; a free floating cylinder having axial slots and freely resting on the cluster, the cluster being disposed within the free floating member with its blades being positioned within the slots; an apertured cup holding a cored fruit positioned below the cluster, the core passage of the fruit being aligned with the aperture in the cup; and a celling knife disposed below the cup which is raised into the core passage in the fruit to remove the seed cell material thereupon. The sequence of operation is commenced by positioning the fruit in the cup below the cluster and above the celling knife. The free floating member is brought to rest on the fruit while the celling knife is raised into the core passage in the fruit, gradually expanded to remove the seed cell material therefrom in small particles, collapsed and retracted from the core passage. 
The cluster of knife blades is advanced against the fruit to slice same and retracted while the free floating member, being restrained by the flexible stop member, extracts wedged slices of the fruit from between the knife blades.", "Human syntactic processing shows many signs of taking place within a general-purpose short-term memory. But this kind of memory is known to have a severely constrained storage capacity---possibly constrained to as few as three or four distinct elements. This article describes a model of syntactic processing that operates successfully within these severe constraints, by recognizing constituents in a right-corner transformed representation (a variant of left-corner parsing) and mapping this representation to random variables in a Hierarchic Hidden Markov Model, a factored time-series model which probabilistically models the contents of a bounded memory store over time. Evaluations of the coverage of this model on a large syntactically annotated corpus of English sentences, and the accuracy of a bounded-memory parsing strategy based on this model, suggest this model may be cognitively plausible." ] }
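The in-order system described in the record above emits each nonterminal after its first child, sitting between the pre-order traversal of top-down parsers and the post-order traversal of bottom-up parsers. A minimal sketch of the three traversal orders over a toy constituency tree (not the paper's stack-LSTM transition system; the tree and function names are illustrative only):

```python
# Tree nodes are (label, children) tuples; leaves are word strings.

def pre_order(node):       # top-down: a nonterminal precedes all its children
    if isinstance(node, str):
        return [node]
    label, children = node
    out = [label]
    for c in children:
        out += pre_order(c)
    return out

def post_order(node):      # bottom-up: a nonterminal follows all its children
    if isinstance(node, str):
        return [node]
    label, children = node
    out = []
    for c in children:
        out += post_order(c)
    return out + [label]

def in_order(node):        # in-order: a nonterminal follows its first child
    if isinstance(node, str):
        return [node]
    label, children = node
    out = in_order(children[0]) + [label]
    for c in children[1:]:
        out += in_order(c)
    return out

tree = ("S", [("NP", ["She"]), ("VP", ["likes", ("NP", ["parsing"])])])
print(pre_order(tree))   # ['S', 'NP', 'She', 'VP', 'likes', 'NP', 'parsing']
print(post_order(tree))  # ['She', 'NP', 'likes', 'parsing', 'NP', 'VP', 'S']
print(in_order(tree))    # ['She', 'NP', 'S', 'likes', 'VP', 'parsing', 'NP']
```

In the in-order sequence each label is predicted after some evidence for the constituent exists (its first child) but before the rest is built, which is the compromise the abstract describes.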
1707.05000
2734940843
Both bottom-up and top-down strategies have been used for neural transition-based constituent parsing. The parsing strategies differ in terms of the order in which they recognize productions in the derivation tree, where bottom-up strategies and top-down strategies take post-order and pre-order traversal over trees, respectively. Bottom-up parsers benefit from rich features from readily built partial parses, but lack lookahead guidance in the parsing process; top-down parsers benefit from non-local guidance for local decisions, but rely on a strong encoder over the input to predict a constituent hierarchy before its construction. To mitigate both issues, we propose a novel parsing system based on in-order traversal over syntactic trees, designing a set of transition actions to find a compromise between bottom-up constituent information and top-down lookahead information. Based on stack-LSTM, our psycholinguistically motivated constituent parsing system achieves 91.8 F1 on the WSJ benchmark. Furthermore, the system achieves 93.6 F1 with supervised reranking and 94.2 F1 with semi-supervised reranking, which are the best results on the WSJ benchmark.
Neural networks have achieved state-of-the-art results for parsing under various grammar formalisms, including dependency @cite_18 , constituent @cite_26 @cite_9 , and CCG parsing @cite_8 @cite_25 . Seminal work employs transition-based methods @cite_1 . This method has been extended by investigating more complex representations of configurations for constituent parsing @cite_20 @cite_26 . employ stack-LSTM in a top-down system, which is the same as our top-down parser. employ tree-LSTM to model the complex representation in the stack in a bottom-up system. We are the first to investigate in-order traversal by designing a novel transition-based system under the same neural structure model framework.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_9", "@cite_1", "@cite_25", "@cite_20" ], "mid": [ "2518668950", "2289899728", "2563900099", "2950191064", "2250861254", "", "2251923740" ], "abstract": [ "We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions. The parsing procedure for each direction is formulated as sequentially querying the memory component that stores continuous headword embeddings. The proposed parser makes use of soft headword embeddings, allowing the model to implicitly capture high-order parsing history without dramatically increasing the computational complexity. We conduct experiments on English, Chinese, and 12 other languages from the CoNLL 2006 shared task, showing that the proposed model achieves state-of-the-art unlabeled attachment scores on 6 languages.", "We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.", "", "Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. 
Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model's latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.", "Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2 improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2 unlabeled attachment score on the English Penn Treebank.", "", "We propose three improvements to address the drawbacks of state-of-the-art transition-based constituent parsers. First, to resolve the error propagation problem of the traditional pipeline approach, we incorporate POS tagging into the syntactic parsing process. Second, to alleviate the negative influence of size differences among competing action sequences, we align parser states during beam-search decoding. Third, to enhance the power of parsing models, we enlarge the feature set with non-local features and semisupervised word cluster features. Experimental results show that these modifications improve parsing performance significantly. Evaluated on the Chinese TreeBank (CTB), our final performance reaches 86.3 (F1) when trained on CTB 5.1, and 87.1 when trained on CTB 6.0, and these results outperform all state-of-the-art parsers." ] }
1707.05246
2736422900
Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are -- to some degree -- transferable across models, domains, and even tasks.
Most prior work on data selection for transfer learning focuses on phrase-based machine translation. Typically, language models are leveraged via perplexity or cross-entropy scoring to select target data @cite_26 @cite_40 @cite_13 @cite_31 . A recent study investigates data selection for neural machine translation @cite_11 . Perplexity was also used to select training data for dependency parsing @cite_36 , but has been found to be less suitable for tasks such as sentiment analysis @cite_16 . In general, there are fewer studies on data selection for other tasks, e.g., constituent parsing @cite_28 , dependency parsing @cite_48 @cite_36 and sentiment analysis @cite_3 . Work on predicting task accuracy is related, but can be seen as complementary @cite_0 @cite_14 .
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_36", "@cite_28", "@cite_48", "@cite_3", "@cite_0", "@cite_40", "@cite_31", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2117278770", "", "", "", "2142746600", "1977088242", "2026976290", "", "2773549337", "2586992290", "", "2741787148" ], "abstract": [ "We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specifc language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than both random data selection and two other previously proposed methods.", "", "", "", "It is well known that parsing accuracy suffers when a model is applied to out-of-domain data. It is also known that the most beneficial data to parse a given domain is data that matches the domain (Sekine, 1997; Gildea, 2001). Hence, an important task is to select appropriate domains. However, most previous work on domain adaptation relied on the implicit assumption that domains are somehow given. As more and more data becomes available, automatic ways to select data that is beneficial for a new (unknown) target domain are becoming attractive. This paper evaluates various ways to automatically acquire related training data for a given test set. The results show that an unsupervised technique based on topic models is effective -- it outperforms random data selection on both languages examined, English and Dutch. Moreover, the technique works better than manually assigned labels gathered from meta-data that is available for English.", "We propose an approach to domain adaptation that selects instances from a source domain training set, which are most similar to a target domain. 
The factor by which the original source domain training set size is reduced is determined automatically by measuring domain similarity between source and target domain as well as their domain complexity variance. Domain similarity is measured as divergence between term unigram distributions. Domain complexity is measured as homogeneity, i.e. self-similarity. We evaluate our approach in a semi-supervised cross-domain document-level polarity classification experiment. Thereby we show that it yields small but statistically significant improvements over several natural baselines and achieves results competitive to other state-of-the-art domain adaptation schemes.", "Statistical parsers have become increasingly accurate, to the point where they are useful in many natural language applications. However, estimating parsing accuracy on a wide variety of domains and genres is still a challenge in the absence of gold-standard parse trees. In this paper, we propose a technique that automatically takes into account certain characteristics of the domains of interest, and accurately predicts parser performance on data from these new domains. As a result, we have a cheap (no annotation involved) and effective recipe for measuring the performance of a statistical parser on any given domain.", "", "Data selection is a common technique for adapting statistical translation models for a specific domain, which has been shown to both improve translation quality and to reduce model size. Selection relies on some in-domain data, of the same domain of the texts expected to be translated. Selecting the sentence-pairs that are most similar to the in-domain data from a pool of parallel texts has been shown to be effective; yet, this approach holds the risk of resulting in a limited coverage, when necessary n-grams that do appear in the pool are less similar to in-domain data that is available in advance.
Some methods select additional data based on the actual text that needs to be translated. While useful, this is not always a practical scenario. In this work we describe an extensive exploration of data selection techniques over Arabic to French datasets, and propose methods to address both similarity and coverage considerations while maintaining a limited model size.", "Domain adaptation is important in sentiment analysis as sentiment-indicating words vary between domains. Recently, multi-domain adaptation has become more pervasive, but existing approaches train on all available source domains including dissimilar ones. However, the selection of appropriate training data is as important as the choice of algorithm. We undertake -- to our knowledge for the first time -- an extensive study of domain similarity metrics in the context of sentiment analysis and propose novel representations, metrics, and a new scope for data selection. We evaluate the proposed methods on two large-scale multi-domain adaptation settings on tweets and reviews and demonstrate that they consistently outperform strong random and balanced baselines, while our proposed selection strategy outperforms instance-level selection and yields the best score on a large reviews corpus.", "", "Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (, 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce dynamic data selection for NMT, a method in which we vary the selected subset of training data between different training epochs. 
Our experiments show that the best results are achieved when applying a technique we call gradual fine-tuning, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline." ] }
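The cross-entropy scoring that recurs in the data-selection abstracts above is commonly instantiated as cross-entropy difference: rank each candidate sentence by H_in(s) − H_gen(s) under an in-domain and a general-domain language model, and keep the lowest-scoring sentences. A toy sketch of that idea, with add-one-smoothed unigram LMs standing in for real language models; all corpora and sentences below are invented:

```python
import math
from collections import Counter

def unigram_lm(corpus):
    """Add-one-smoothed unigram LM over a list of sentences (a toy stand-in)."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(lm, sentence):
    """Per-word cross-entropy of a sentence under the LM, in bits."""
    words = sentence.split()
    return -sum(math.log2(lm(w)) for w in words) / len(words)

# Made-up in-domain (reviews) and general-domain corpora.
in_domain = ["the movie was great", "a great film overall"]
general = ["stocks fell sharply today", "the weather was mild"]
lm_in, lm_gen = unigram_lm(in_domain), unigram_lm(general)

# Lower H_in(s) - H_gen(s) means "more in-domain-like"; select those first.
candidates = ["the film was great", "stocks fell again today"]
scored = sorted(candidates,
                key=lambda s: cross_entropy(lm_in, s) - cross_entropy(lm_gen, s))
print(scored[0])  # the film was great
```

The learned measures in the paper above generalize this fixed similarity score by tuning the selection criterion itself (and adding diversity terms) with Bayesian Optimization.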
1707.05246
2736422900
Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are -- to some degree -- transferable across models, domains, and even tasks.
There is a long history of research in adaptive data selection, with early approaches grounded in information theory using a Bayesian learning framework @cite_38 . It has also been studied extensively as active learning @cite_33 . Curriculum learning is related to active learning @cite_24 , but the two take different views: active learning aims at finding the most difficult instances to label, typically examples close to the decision boundary. Confidence-based measures are prominent there, but they are less widely applicable than our model-agnostic approach.
{ "cite_N": [ "@cite_24", "@cite_38", "@cite_33" ], "mid": [ "2903158431", "2115305054", "190546068" ], "abstract": [ "", "Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed that measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness.", "This chapter introduces the notions of rational expectations and optimal learning extensively used in economic theory.* It has become well known from recent literature that in active learning situations (where the actions of the statistician, or the person learning about some parameters, influences the draws from the distribution about which he she is learning), full learning may not take place. This challenges the use of the rational expectations hypothesis which is justified on the basis that agents operating in an economy eventually all learn the true structure of the economy and optimize accordingly. In El-Gamal and Sundaram (1989, 1990) we presented a framework where a Bayesian economist imposes priors on agent-priors and we then study the evolution of those economist beliefs. We showed that generically, the economist limit beliefs generically do not have point mass at any particular agent-belief, let alone the true rational expectations belief. We show, however, that in most cases where there is sufficient variability in the law of motion that the agents are trying to learn, in sequential models that are extensively used in the economic literature, the rational expectations hypothesis may indeed be justified on the basis of optimizing and optimally updating agents." ] }
1707.04991
2739381051
We formulate tracking as an online decision-making process, where a tracking agent must follow an object despite ambiguous image frames and a limited computational budget. Crucially, the agent must decide where to look in the upcoming frames, when to reinitialize because it believes the target has been lost, and when to update its appearance model for the tracked object. Such decisions are typically made heuristically. Instead, we propose to learn an optimal decision-making policy by formulating tracking as a partially observable decision-making process (POMDP). We learn policies with deep reinforcement learning algorithms that need supervision (a reward signal) only when the track has gone awry. We demonstrate that sparse rewards allow us to quickly train on massive datasets, several orders of magnitude more than past work. Interestingly, by treating the data source of Internet videos as unlimited streams, we both learn and evaluate our trackers in a single, unified computational stream.
Several established benchmarks exist for evaluating trackers @cite_42 @cite_29 . Interestingly, there is evidence to suggest that many methods tend to overfit due to aggressive tuning @cite_28 . Withholding test data annotation and providing an evaluation server addresses this to some extent @cite_27 @cite_10 . Alternatively, we propose to evaluate on an open-world stream of Internet videos, making overfitting impossible by design. It is well-known that algorithms trained on \"closed-world\" datasets (say, with centered objects against clean backgrounds @cite_4 @cite_57 ) generalize poorly to \"in-the-wild\" footage @cite_34 . We invite the reader to compare our videos in the supplementary material to contemporary video benchmarks for tracking.
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_29", "@cite_42", "@cite_57", "@cite_27", "@cite_34", "@cite_10" ], "mid": [ "74585943", "2118897962", "2916780012", "2089961441", "2148675068", "2066513826", "", "2117539524" ], "abstract": [ "", "Evaluating visual tracking algorithms, or trackers for short, is of great importance in computer vision. However, it is hard to fairly compare trackers due to many parameters need to be tuned in the experimental configurations. On the other hand, when introducing a new tracker, a recent trend is to validate it by comparing it with several existing ones. Such an evaluation may have subjective biases towards the new tracker which typically performs the best. This is mainly due to the difficulty to optimally tune all its competitors and sometimes the selected testing sequences. By contrast, little subjective bias exists towards the second best ones in the contest. This observation inspires us with a novel perspective towards inhibiting subjective bias in evaluating trackers by analyzing the results between the second bests. In particular, we first collect all tracking papers published in major computer vision venues in recent years. From these papers, after filtering out potential biases in various aspects, we create a dataset containing many records of comparison results between various visual trackers. Using these records, we derive performance rankings of the involved trackers by four different methods. The first two methods model the dataset as a graph and then derive the rankings over the graph, one by a rank aggregation algorithm and the other by a PageRank-like solution. The other two methods take the records as generated from sports contests and adopt widely used Elo's and Glicko's rating systems to derive the rankings. 
The experimental results are presented and may serve as a reference for related research.", "The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http: votchallenge.net).", "Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.", "We demonstrate that is it possible to automatically find representative example images of a specified object category. 
These canonical examples are perhaps the kind of images that one would show a child to teach them what, for example a horse is - images with a large object clearly separated from the background. Given a large collection of images returned by a web search for an object category, our approach proceeds without any user supplied training data for the category. First images are ranked according to a category independent composition model that predicts whether they contain a large clearly depicted object, and outputs an estimated location of that object. Then local features calculated on the proposed object regions are used to eliminate images not distinctive to the category and to cluster images by similarity of object appearance. We present results and a user evaluation on a variety of object categories, demonstrating the effectiveness of the approach.", "Numerous approaches on object tracking have been proposed during the past decade with demonstrated success. However, most tracking algorithms are evaluated on limited video sequences and annotations. For thorough performance evaluation, we propose a large-scale database which contains 365 challenging image sequences of pedestrians and rigid objects. The database covers 12 kinds of objects, and most of the sequences are captured from moving cameras. Each sequence is annotated with target location and occlusion level for evaluation. A thorough experimental evaluation of 20 state-of-the-art tracking algorithms is presented with detailed analysis using different metrics. The database is publicly available and evaluation can be carried out online for fair assessments of visual tracking algorithms.", "", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. 
This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
1707.04991
2739381051
We formulate tracking as an online decision-making process, where a tracking agent must follow an object despite ambiguous image frames and a limited computational budget. Crucially, the agent must decide where to look in the upcoming frames, when to reinitialize because it believes the target has been lost, and when to update its appearance model for the tracked object. Such decisions are typically made heuristically. Instead, we propose to learn an optimal decision-making policy by formulating tracking as a partially observable decision-making process (POMDP). We learn policies with deep reinforcement learning algorithms that need supervision (a reward signal) only when the track has gone awry. We demonstrate that sparse rewards allow us to quickly train on massive datasets, several orders of magnitude more than past work. Interestingly, by treating the data source of Internet videos as unlimited streams, we both learn and evaluate our trackers in a single, unified computational stream.
An interesting (but perhaps unsurprising) phenomenon is that better trackers tend to be slower @cite_29 . Indeed, on the VOT benchmark, most recent trackers do not run in real time. Generally, trackers that search locally @cite_52 @cite_9 run faster than those that search globally @cite_17 @cite_22 @cite_24 . To optimize visual recognition efficiency, one can learn a policy to guide selective search or attention. Inspired by recent work which finds a policy for selective search using RL @cite_33 @cite_16 @cite_5 @cite_37 @cite_11 @cite_6 @cite_40 , we also learn a policy that decides whether to track (i.e., search positions near the previous estimate) or reinitialize (i.e., search globally over the entire image). But in contrast to this prior work, we additionally learn a policy to decide when to update a tracker's appearance model. To ensure that our tracker operates with a fixed computational budget, we implement reinitialization by searching over a random subset of positions (equal in number to those examined by track).
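The fixed-budget scheme described above can be sketched as follows: both actions score the same number B of candidate positions, drawn locally around the previous estimate for "track" and uniformly over the image for "reinitialize", so neither action is cheaper than the other. A minimal sketch of that budget-matched candidate generation only (the scoring network and the learned policy are deliberately left out; all names and the budget are illustrative, not the paper's implementation):

```python
import random

def candidates(action, prev_xy, image_wh, budget=16, radius=20, rng=random):
    """Generate exactly `budget` candidate positions for the chosen action."""
    w, h = image_wh
    if action == "track":
        # Local search: positions near the previous estimate.
        x0, y0 = prev_xy
        return [(x0 + rng.randint(-radius, radius),
                 y0 + rng.randint(-radius, radius)) for _ in range(budget)]
    else:
        # "reinitialize": a random subset of global positions, same budget.
        return [(rng.randint(0, w - 1), rng.randint(0, h - 1))
                for _ in range(budget)]

local = candidates("track", prev_xy=(100, 80), image_wh=(640, 480))
global_ = candidates("reinitialize", prev_xy=(100, 80), image_wh=(640, 480))
assert len(local) == len(global_)  # both actions cost the same per frame
```

A learned policy (e.g., from the POMDP formulation above) would pick the action per frame; the point of the sketch is only that reinitialization stays within the same per-frame budget as tracking.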
{ "cite_N": [ "@cite_37", "@cite_11", "@cite_22", "@cite_33", "@cite_9", "@cite_29", "@cite_52", "@cite_6", "@cite_24", "@cite_40", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "2015058673", "", "", "2963796526", "", "2916780012", "", "", "2194002514", "2469312016", "2138068405", "", "" ], "abstract": [ "Survival depends on successfully foraging for food, for which evolution has selected diverse behaviors in different species. Humans forage not only for food, but also for information. We decide where to look over 170,000 times per day, approximately three times per wakeful second. The frequency of these saccadic eye movements belies the complexity underlying each individual choice. Experience factors into the choice of where to look and can be invoked to rapidly redirect gaze in a context and task-appropriate manner. However, remarkably little is known about how individuals learn to direct their gaze given the current context and task. We designed a task in which participants search a novel scene for a target whose location was drawn stochastically on each trial from a fixed prior distribution. The target was invisible on a blank screen, and the participants were rewarded when they fixated the hidden target location. In just a few trials, participants rapidly found the hidden targets by looking near previously rewarded locations and avoiding previously unrewarded locations. Learning trajectories were well characterized by a simple reinforcement-learning (RL) model that maintained and continually updated a reward map of locations. The RL model made further predictions concerning sensitivity to recent experience that were confirmed by the data. The asymptotic performance of both the participants and the RL model approached optimal performance characterized by an ideal-observer theory. 
These two complementary levels of explanation show how experience in a novel environment drives visual search in humans and may extend to other forms of search such as animal foraging.", "", "", "We present an attention-based modular neural framework for computer vision. The framework uses a soft attention mechanism allowing models to be trained with gradient descent. It consists of three modules: a recurrent attention module controlling where to look in an image or video frame, a feature-extraction module providing a representation of what is seen, and an objective module formalizing why the model learns its attentive behavior. The attention module allows the model to focus computation on task-related information in the input. We apply the framework to several object tracking tasks and explore various design choices. We experiment with three data sets, bouncing ball, moving digits and the real-world KTH data set. The proposed RATM performs well on all three tasks and can generalize to related but previously unseen sequences from a challenging tracking data set.", "", "The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. 
The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).", "", "", "Tracking-by-detection approaches are some of the most successful object trackers in recent years. Their success is largely determined by the detector model they learn initially and then update over time. However, under challenging conditions where an object can undergo transformations, e.g., severe rotation, these methods are found to be lacking. In this paper, we address this problem by formulating it as a proposal selection task and making two contributions. The first one is introducing novel proposals estimated from the geometric transformations undergone by the object, and building a rich candidate set for predicting the object location. The second one is devising a novel selection strategy using multiple cues, i.e., detection score and edgeness score computed from state-of-the-art object edges and motion boundaries. We extensively evaluate our approach on the visual object tracking 2014 challenge and online tracking benchmark datasets, and show the best performance.", "One of the most widely used strategies for visual object detection is based on exhaustive spatial hypothesis search. While methods like sliding windows have been successful and effective for many years, they are still brute-force, independent of the image content and the visual category being searched. In this paper we present principled sequential models that accumulate evidence collected at a small set of image locations in order to detect visual objects effectively. By formulating sequential search as reinforcement learning of the search policy (including the stopping condition), our fully trainable model can explicitly balance for each class, specifically, the conflicting goals of exploration – sampling more image regions for better accuracy – and exploitation – stopping the search efficiently when sufficiently confident about the target's location.
The methodology is general and applicable to any detector response function. We report encouraging results in the PASCAL VOC 2012 object detection test set showing that the proposed methodology achieves almost two orders of magnitude speed-up over sliding window methods.", "This work provides a framework for learning sequential attention in real-world visual object recognition, using an architecture of three processing stages. The first stage rejects irrelevant local descriptors based on an information theoretic saliency measure, providing candidates for foci of interest (FOI). The second stage investigates the information in the FOI using a codebook matcher and providing weak object hypotheses. The third stage integrates local information via shifts of attention, resulting in chains of descriptor-action pairs that characterize object discrimination. A Q-learner adapts then from explorative search and evaluative feedback from entropy decreases on the attention sequences, eventually prioritizing shifts that lead to a geometry of descriptor-action scanpaths that is highly discriminative with respect to object recognition. The methodology is successfully evaluated on indoors (COIL-20 database) and outdoors (TSG-20 database) imagery, demonstrating significant impact by learning, outperforming standard local descriptor based methods both in recognition accuracy and processing time.", "", "" ] }
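The fixed-budget reinitialization idea described in this record (a learned policy chooses between local search and a global search over a random subset of positions of equal size) can be sketched as follows. The function name, the uniform-sampling scheme, and all defaults are illustrative assumptions, not the paper's actual implementation.

```python
import random

def candidate_positions(track, prev_pos, frame_size, budget, radius=20, seed=None):
    """Build the set of positions a fixed-budget tracker evaluates this frame.

    track=True  -> search locally, near the previous estimate;
    track=False -> reinitialize, sampling the SAME number of positions
                   uniformly over the whole frame, so both decisions of the
                   learned policy cost the same amount of compute.
    (Names, sampling scheme, and defaults are illustrative only.)
    """
    rng = random.Random(seed)
    width, height = frame_size
    positions = []
    for _ in range(budget):
        if track:
            # local window around the previous estimate, clamped to the frame
            x = min(max(prev_pos[0] + rng.randint(-radius, radius), 0), width - 1)
            y = min(max(prev_pos[1] + rng.randint(-radius, radius), 0), height - 1)
        else:
            # global reinitialization: uniform over the entire frame
            x, y = rng.randrange(width), rng.randrange(height)
        positions.append((x, y))
    return positions
```

Both branches return exactly `budget` candidates, which is what makes the per-frame cost independent of the policy's decision.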
1707.05141
2949065585
We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
Batched GPU routines for LU, Cholesky and QR factorizations have been developed in @cite_5 @cite_15 @cite_1 using a block recursive approach which increases data reuse and leads to very good performance for relatively large matrix sizes. GPU routines optimized for computing the QR decomposition of very tall and skinny matrices are presented in @cite_20 , where the authors develop an efficient transpose matrix-vector computation that is employed, with some minor changes, in this work. GPU-CPU hybrid algorithms for batched SVD using Jacobi and bidiagonalization methods are introduced in @cite_6 , where pair generation for the Jacobi method and the solver phase of the bidiagonalization are handled on the CPU. The work in @cite_16 employs the power method to construct a rank-1 approximation for 2D filters in convolutional neural networks. Routines to handle the SVD of many matrices on GPUs are presented in @cite_10 , where each thread within a warp computes the SVD of a single matrix.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "", "1480169421", "1028753842", "", "2407624399", "2293640999", "2111221242" ], "abstract": [ "", "One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) Σ_k X(k) X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining V, Σ, and U such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^H A, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them.
Thus, it is important to have an SVD algorithm that is suitable for these processors.", "As modern hardware keeps evolving, an increasingly effective approach to developing energy efficient and high-performance solvers is to design them to work on many small size and independent problems. Many applications already need this functionality, especially for GPUs, which are currently known to be about four to five times more energy efficient than multicore CPUs. We describe the development of one-sided factorizations that work for a set of small dense matrices in parallel, and we illustrate our techniques on the QR factorization based on Householder transformations. We refer to this mode of operation as a batched factorization. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-only execution. This is in contrast to the hybrid CPU-GPU algorithms that rely heavily on using the multicore CPU for specific parts of the workload. But for a system to benefit fully from the GPU’s significantly higher energy efficiency, avoiding the use of the multicore CPU must be a primary design goal, so the system can rely more heavily on the more efficient GPU. Additionally, this will result in the removal of the costly CPU-to-GPU communication. Furthermore, we do not use a single symmetric multiprocessor (on the GPU) to factorize a single problem at a time. We illustrate how our performance analysis, and the use of profiling and tracing tools, guided the development and optimization of our batched factorization to achieve up to a 2-fold speedup and a 3-fold energy efficiency improvement compared to our highly optimized batched CPU implementations based on the MKL library (when using two sockets of Intel Sandy Bridge CPUs). 
Compared to a batched QR factorization featured in the CUBLAS library for GPUs, we achieved up to 5x speedup on the K40 GPU.", "", "Convolutional neural networks (CNNs) are one of the most successful deep architectures in machine learning. While they achieve superior recognition rate, the intensive computation of CNNs limits their applicability. In this paper, we propose a method based on separable filters to reduce the computational cost. By using Singular Value Decompositions (SVDs), a 2D filter in the CNNs can be approximated by the product of two 1D filters, and the 2D convolution can be computed via two consecutive 1D convolutions. We implemented a batched SVD routine on GPUs that can compute the SVD of multiple small matrices simultaneously, and three convolution methods using different memory spaces according to the filter size. Compared to state-of-the-art GPU implementations of CNNs, experimental results show that our methods can achieve up to 2.66 times speedup in the forward pass and up to 2.35 times speedup in the backward pass.", "In this paper we present a homography algorithm to produce image mosaics using parallelism to solve a multiple Singular Value Decomposition (SVD) system. We analyse four state-of-the-art SVD methods and choose the one which better suits the expected size of the matrices derived from the datasets of interest. Then we use CUDA to accelerate the solution of the transformation homogeneous matrices.", "We describe an implementation of the Communication-Avoiding QR (CAQR) factorization that runs entirely on a single graphics processor (GPU). We show that the reduction in memory traffic provided by CAQR allows us to outperform existing parallel GPU implementations of QR for a large class of tall-skinny matrices. Other GPU implementations of QR handle panel factorizations by either sending the work to a general-purpose processor or using entirely bandwidth-bound operations, incurring data transfer overheads.
In contrast, our QR is done entirely on the GPU using compute-bound kernels, meaning performance is good regardless of the width of the matrix. As a result, we outperform CULA, a parallel linear algebra library for GPUs by up to 17x for tall-skinny matrices and Intel's Math Kernel Library (MKL) by up to 12x. We also discuss stationary video background subtraction as a motivating application. We apply a recent statistical approach, which requires many iterations of computing the singular value decomposition of a tall-skinny matrix. Using CAQR as a first step to getting the singular value decomposition, we are able to get the answer 3x faster than if we use a traditional bandwidth-bound GPU QR factorization tuned specifically for that matrix size, and 30x faster than if we use Intel's Math Kernel Library (MKL) singular value decomposition routine on a multicore CPU." ] }
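To make the one-sided Jacobi building block concrete, here is a minimal serial NumPy sketch of the Hestenes one-sided Jacobi SVD. The batched GPU kernels discussed above parallelize exactly this kind of column-pair sweep across many small matrices; everything below (function name, tolerance, sweep limit) is an illustrative assumption, not the paper's kernel.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Hestenes one-sided Jacobi SVD: rotate column pairs of A until all
    pairs are mutually orthogonal; the column norms are then the singular
    values. Returns (U, s, Vt) with A = U @ diag(s) @ Vt."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue  # this pair is already orthogonal enough
                converged = False
                # Givens rotation chosen to zero the (p, q) inner product
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                if zeta < 0.0:
                    t = -t
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                Up, Uq = U[:, p].copy(), U[:, q].copy()
                U[:, p], U[:, q] = c * Up - s * Uq, s * Up + c * Uq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if converged:
            break
    sing = np.linalg.norm(U, axis=0)
    nonzero = sing > tol
    U[:, nonzero] /= sing[nonzero]  # normalize columns into left singular vectors
    return U, sing, V.T
```

The inherent parallelism mentioned above comes from the fact that disjoint column pairs within a sweep can be rotated concurrently, which is what makes the method attractive for batches of small matrices on a GPU.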
1707.04818
2736482575
Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame's representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.
In the setting of offline action detection, the whole video is given and the task is to detect whether given actions occur in the video and, if so, when they occur. S-CNN @cite_3 presented a two-stage action localization framework: first using a proposal network to generate temporal proposals and then scoring the proposals with a localization network. TURN @cite_0 proposed to use temporal coordinate regression to refine action boundaries for temporal proposal generation, which proved effective and generalizes to different action domains. TALL @cite_5 used natural language as the query to localize actions in long videos and designed a cross-modal regression model to solve this task.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_3" ], "mid": [ "2953229046", "2611788449", "2394849137" ], "abstract": [ "Temporal Action Proposal (TAP) generation is an important problem, as fast and accurate extraction of semantically important (e.g. human actions) segments from untrimmed videos is an important step for large-scale video analysis. We propose a novel Temporal Unit Regression Network (TURN) model. There are two salient aspects of TURN: (1) TURN jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression; (2) Fast computation is enabled by unit feature reuse: a long untrimmed video is decomposed into video units, which are reused as basic building blocks of temporal proposals. TURN outperforms the state-of-the-art methods under average recall (AR) by a large margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal generation stage for existing temporal action localization pipelines, it outperforms state-of-the-art performance on THUMOS-14 and ActivityNet.", "This paper focuses on temporal localization of actions in untrimmed videos. Existing methods typically train classifiers for a pre-defined list of actions and apply them in a sliding window fashion. However, activities in the wild consist of a wide combination of actors, actions and objects; it is difficult to design a proper activity list that meets users' needs. We propose to localize activities by natural language queries. Temporal Activity Localization via Language (TALL) is challenging as it requires: (1) suitable design of text and video representations to allow cross-modal matching of actions and language queries; (2) ability to locate actions accurately given features from sliding windows of limited granularity. 
We propose a novel Cross-modal Temporal Regression Localizer (CTRL) to jointly model text query and video clips, output alignment scores and action boundary regression results for candidate clips. For evaluation, we adopt TaCoS dataset, and build a new dataset for this task on top of Charades by adding sentence temporal annotations, called Charades-STA. We also build complex sentence queries in Charades-STA for test. Experimental results show that CTRL outperforms previous methods significantly on both datasets.", "We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes on the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5." ] }
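As a small illustration of temporal coordinate regression for proposals, the sketch below computes temporal IoU and simple non-parameterized start/end offsets that map a proposal onto its matched ground-truth segment. The helper names and this particular offset form are our illustrative choices, not necessarily the exact parameterization used by TURN or CTRL.

```python
def tiou(seg, gt):
    """Temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(seg[1], gt[1]) - max(seg[0], gt[0]))
    union = (seg[1] - seg[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def regression_targets(proposal, gt):
    """Non-parameterized coordinate offsets: the shifts a regressor should
    predict so that the proposal's boundaries land on the ground truth."""
    return (gt[0] - proposal[0], gt[1] - proposal[1])

def refine(proposal, offsets):
    """Apply predicted start/end offsets to a proposal."""
    return (proposal[0] + offsets[0], proposal[1] + offsets[1])
```

At training time the offsets are the regression labels for positively matched proposals; at test time `refine` applies the predicted offsets to sharpen the boundaries.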
1707.04818
2736482575
Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame's representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.
There have been some promising works on anticipating future action categories. Lan @cite_8 designed a hierarchical representation, which describes human movements at multiple levels of granularity, to predict future actions in the wild. Pei @cite_12 proposed an event parsing algorithm based on Stochastic Context Sensitive Grammar (SCSG) for inferring the goals of agents and predicting their intended actions. Xie @cite_14 proposed to infer people's intentions when performing actions, which provides a useful cue for predicting future actions. Vondrick @cite_17 proposed to anticipate visual representations by training a CNN on large-scale unlabeled video data.
{ "cite_N": [ "@cite_14", "@cite_17", "@cite_12", "@cite_8" ], "mid": [ "", "2951242004", "2045792079", "2185953016" ], "abstract": [ "", "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.", "In this paper, we present an event parsing algorithm based on Stochastic Context Sensitive Grammar (SCSG) for understanding events, inferring the goal of agents, and predicting their plausible intended actions. The SCSG represents the hierarchical compositions of events and the temporal relations between the sub-events. The alphabets of the SCSG are atomic actions which are defined by the poses of agents and their interactions with objects in the scene. The temporal relations are used to distinguish events with similar structures, interpolate missing portions of events, and are learned from the training data. In comparison with existing methods, our paper makes the following contributions. 
i) We define atomic actions by a set of relations based on the fluents of agents and their interactions with objects in the scene. ii) Our algorithm handles events insertion and multi-agent events, keeps all possible interpretations of the video to preserve the ambiguities, and achieves the globally optimal parsing solution in a Bayesian framework; iii) The algorithm infers the goal of the agents and predicts their intents by a top-down process; iv) The algorithm improves the detection of atomic actions by event contexts. We show satisfactory results of event recognition and atomic action detection on the data set we captured which contains 12 event categories in both indoor and outdoor videos.", "We consider inferring the future actions of people from a still image or a short video clip. Predicting future actions before they are actually executed is a critical ingredient for enabling us to effectively interact with other humans on a daily basis. However, challenges are two fold: First, we need to capture the subtle details inherent in human movements that may imply a future action; second, predictions usually should be carried out as quickly as possible in the social world, when limited prior observations are available." ] }
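The anticipate-then-classify pattern that the RED abstract builds on (predict a future representation, then run a recognizer on it) can be sketched in a few lines. The linear "regressor" and "classifier" below are stand-ins for learned deep networks and are purely illustrative.

```python
import numpy as np

def anticipate_then_classify(history, W_reg, W_cls):
    """Predict the next frame's feature from the last observed one with a
    stand-in linear regressor, then score action classes on the anticipated
    feature rather than on any observed frame."""
    future_feat = history[-1] @ W_reg   # anticipated future representation
    scores = future_feat @ W_cls        # classify the anticipated feature
    return int(np.argmax(scores))
```

Note that this baseline uses only the last observed feature, which is exactly the single-frame limitation the RED paper criticizes: it ignores the history trend that a sequence-to-sequence anticipator can exploit.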
1707.04818
2736482575
Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame's representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.
We draw inspiration from recent approaches that use REINFORCE @cite_26 to learn task-specific policies. Yeung @cite_4 proposed to learn policies that predict the next observation location for the action detection task using LSTM networks. Mnih @cite_10 proposed to adaptively select a sequence of regions in an image and process only the selected regions at high resolution for the image classification task. Ranzato @cite_13 proposed a sequence-level training algorithm for image captioning that directly optimizes the metric used at test time via policy gradient methods.
{ "cite_N": [ "@cite_13", "@cite_10", "@cite_26", "@cite_4" ], "mid": [ "", "2951527505", "2119717200", "2179401333" ], "abstract": [ "", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. 
We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames." ] }
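The common ingredient of these works is the REINFORCE score-function gradient. A minimal sketch for a linear-softmax policy is below; this is the generic Williams update (episode return times the sum of log-probability gradients, no baseline), not any one paper's model.

```python
import numpy as np

def reinforce_update(theta, states, actions, rewards, lr=0.01):
    """One REINFORCE update for a policy pi(a|s) = softmax(s @ theta).

    theta: (d, n_actions); states: (T, d); actions: (T,) ints; rewards: (T,).
    The whole-episode return G weights the summed score-function gradients."""
    G = float(np.sum(rewards))          # episode return
    grad = np.zeros_like(theta)
    for s, a in zip(states, actions):
        logits = s @ theta
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        onehot = np.zeros_like(probs)
        onehot[a] = 1.0
        # d/dtheta log pi(a|s) = outer(s, onehot(a) - probs) for a softmax policy
        grad += np.outer(s, onehot - probs)
    return theta + lr * G * grad        # gradient ascent on expected return
```

A positive return pushes probability mass toward the actions actually taken, which is how RED's reward shaping ("correct as early as possible") translates into parameter updates.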
1707.04677
2737272473
This paper aims at task-oriented action prediction, i.e., predicting a sequence of actions towards accomplishing a specific task under a certain scene, which is a new problem in computer vision research. The main challenges lie in how to model task-specific knowledge and integrate it in the learning procedure. In this work, we propose to train a recurrent long-short term memory (LSTM) network for handling this problem, i.e., taking a scene image (including pre-located objects) and the specified task as input and recurrently predicting action sequences. However, training such a network usually requires large amounts of annotated samples for covering the semantic space (e.g., diverse action decomposition and ordering). To alleviate this issue, we introduce a temporal And-Or graph (AOG) for task description, which hierarchically represents a task into atomic actions. With this AOG representation, we can produce many valid samples (i.e., action sequences according with common sense) by training another auxiliary LSTM network with a small set of annotated samples. And these generated samples (i.e., task-oriented action sequences) effectively facilitate training the model for task-oriented action prediction. In the experiments, we create a new dataset containing diverse daily tasks and extensively evaluate the effectiveness of our approach.
Task planning. In the literature, task planning (also referred to as symbolic planning @cite_4 ) has long been formalized as a deduction @cite_5 or satisfiability @cite_6 problem. @cite_11 introduced hierarchical planning, which first plans abstractly and then generates fine-level details. @cite_8 utilized the PDDL representation for actions and developed an action-relation modeling system to learn an action model from a set of observed successful plans. Some works also combined symbolic planning with motion planning @cite_2 . @cite_7 regarded symbolic planning as a constraint and proposed a heuristic function for motion planning. @cite_13 extended this work to plan with geometric and differential constraints. @cite_3 proposed a hierarchical task and motion planning algorithm based on hierarchical transition networks. Although these methods work quite well in controlled environments, they require encoding every precondition for each operation as domain knowledge, and they hardly generalize to unconstrained environments with large variance @cite_4 . Most recently, @cite_4 represented the environment with a set of attributes and proposed to use a Markov Random Field to learn sequences of controllers that complete the given tasks.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_6", "@cite_3", "@cite_2", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "2129093238", "2075577325", "2024113144", "", "2143936212", "123516175", "1507678707", "2132467020", "1981627423" ], "abstract": [ "Many tasks in human environments require performing a sequence of complex navigation and manipulation tasks. In unstructured human environments, the locations and conguration of the objects can change in unpredictable ways. This requires a high-level planning strategy that is robust and exible in an uncertain environment. We propose a novel dynamic planning strategy, which can be trained from a set of example activities. High level activities are expressed as a sequence of primitive actions or controllers (with appropriate parameters). Our planning model synthesizes a universal strategy, where the a suitable next action is selected based on the current state of the environment. By expressing the environment using sets of attributes, the approach generalizes well to unseen scenarios. By unfolding our planning strategy into a Markov Random Field approximation, we can eectively train parameters using a maximum margin learning strategy. We provide a detailed empirical validation of our overall framework demonstrating successful plan strategies for a variety of tasks.", "We propose a representation and a planning algorithm able to deal with problems integrating task planning as well as motion and manipulation planning knowledge involving several robots and objects. Robot plans often include actions where the robot has to place itself in some position in order to perform some other action or to \"modify\" the configuration of its environment by displacing objects. Our approach aims at establishing a bridge between task planning and manipulation planning that allows a rigorous treatment of geometric preconditions and effects of robot actions in realistic environments. 
We show how links can be established between a symbolic description and its geometric counterpart and how they can be used in an integrated planning process that is able to deal with intricate symbolic and geometric constraints. Finally, we describe the main features of an implemented planner and discuss several examples of its use.", "AI planning requires the definition of action models using a formal action and plan description language, such as the standard Planning Domain Definition Language (PDDL), as input. However, building action models from scratch is a difficult and time-consuming task, even for experts. In this paper, we develop an algorithm called ARMS (action-relation modelling system) for automatically discovering action models from a set of successful observed plans. Unlike the previous work in action-model learning, we do not assume complete knowledge of states in the middle of observed plans. In fact, our approach works when no or partial intermediate states are given. These example plans are obtained by an observation agent who does not know the logical encoding of the actions and the full state information between the actions. In a real world application, the cost is prohibitively high in labelling the training examples by manually annotating every state in a plan example from snapshots of an environment. To learn action models, ARMS gathers knowledge on the statistical distribution of frequent sets of actions in the example plans. It then builds a weighted propositional satisfiability (weighted MAX-SAT) problem and solves it using a MAX-SAT solver. We lay the theoretical foundations of the learning problem and evaluate the effectiveness of ARMS empirically.", "", "We present a hierarchical planning system and its application to robotic manipulation. 
The novel features of the system are: 1) it finds high-quality kinematic solutions to task-level problems; 2) it takes advantage of subtask-specific irrelevance information, reusing optimal solutions to state-abstracted sub-problems across the search space. We briefly describe how the system handles uncertainty during plan execution, and present results on discrete problems as well as pick-and-place tasks for a mobile robot.", "Many real-world planning problems involve substantial amounts of domain-specific reasoning that is either awkward or inefficient to encode in a general purpose planner. Previous approaches for planning in such domains have either been largely domain specific or have employed shallow models of the domain-specific considerations. In this paper we investigate a hybrid planning model that utilizes a set of specialists to complement both the overall expressiveness and the reasoning power of a traditional hierarchical planner. Such a model retains the flexibility and generality of classical planning framework while allowing deeper and more efficient domain-specific reasoning through specialists. We describe a preliminary implementation of a planning architecture based on this model in a manufacturing planning domain, and use it to explore issues regarding the effect of the specialists on the planning, and the interactions and interfaces between them and the planner.", "This paper describes a reasoning system based on a temporal logic that can solve planning problems along the lines of traditional planning systems. Because it is cast as inference in a general representation, however, the ranges of problems that can be described is considerably greater than in traditional planning systems. 
In addition, other modes of plan reasoning, such as plan recognition or plan monitoring, can be formalized within the same framework.", "To compute collision-free and dynamically-feasible trajectories that satisfy high-level specifications given in a planning-domain definition language, this paper proposes to combine sampling-based motion planning with symbolic action planning. The proposed approach, Sampling-based Motion and Symbolic Action Planner (SMAP), leverages from sampling-based motion planning the underlying idea of searching for a solution trajectory by selectively sampling and exploring the continuous space of collision-free and dynamically-feasible motions. Drawing from AI, SMAP uses symbolic action planning to identify actions and regions of the continuous space that sampling-based motion planning can further explore to significantly advance the search. The planning layers interact with each other through estimates on the utility of each action, which are computed based on information gathered during the search. Simulation experiments with dynamical models of vehicles carrying out tasks given by high-level STRIPS specifications provide promising initial validation, showing that SMAP efficiently solves challenging problems.", "Abstract A problem domain can be represented as a hierarchy of abstraction spaces in which successively finer levels of detail are introduced. The problem solver ABSTRIPS, a modification of STRIPS, can define an abstraction space hierarchy from the STRIPS representation of a problem domain, and it can utilize the hierarchy in solving problems. Examples of the system's performance are presented that demonstrate the significant increases in problem-solving power that this approach provides. Then some further implications of the hierarchical planning approach are explored." ] }