Columns: aid (string), mid (string), abstract (string), related_work (string), ref_abstract (dict)
1701.07393
2584325720
Acquiring 3D geometry of real world objects has various applications in 3D digitization, such as navigation and content generation in virtual environments. Image remains one of the most popular media for such visual tasks due to its simplicity of acquisition. Traditional image-based 3D reconstruction approaches heavily exploit point-to-point correspondence among multiple images to estimate camera motion and 3D geometry. Establishing point-to-point correspondence lies at the center of the 3D reconstruction pipeline, which, however, is easily prone to errors. In this paper, we propose an optimization framework which traces image points using a novel structure-guided dynamic tracking algorithm and estimates both the camera motion and a 3D structure model by enforcing a set of planar constraints. The key to our method is a structure model represented as a set of planes and their arrangements. Constraints derived from the structure model are used both in the correspondence establishment stage and the bundle adjustment stage in our reconstruction pipeline. Experiments show that our algorithm can effectively localize structure correspondence across dense image frames while faithfully reconstructing the camera motion and the underlying structured 3D model.
Relying on the raw output of traditional multi-view stereo techniques, a structured model can be created and regularized with structural constraints discovered from the point cloud @cite_23 @cite_15 . Such methods can fail when multi-view stereo returns degenerate output due to occlusion, reflectance, poor illumination, and similar effects.
{ "cite_N": [ "@cite_15", "@cite_23" ], "mid": [ "2127392014", "2553066529" ], "abstract": [ "VideoTrace is a system for interactively generating realistic 3D models of objects from video---models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to model rapidly those parts of the scene which are of interest and to the level of detail required. The combination of automated and manual reconstruction allows VideoTrace to model parts of the scene not visible, and to succeed in cases where purely automated approaches would fail.", "In this paper, we present an interactive system for mechanism modeling from multi-view images. Its key feature is that the generated 3D mechanism models contain not only geometric shapes but also internal motion structures: they can be directly animated through kinematic simulation. Our system consists of two steps: interactive 3D modeling and stochastic motion parameter estimation. At the 3D modeling step, our system is designed to integrate the sparse 3D points reconstructed from multi-view images and a sketching interface to achieve accurate 3D modeling of a mechanism. To recover the motion parameters, we record a video clip of the mechanism motion and adopt stochastic optimization to recover its motion parameters by edge matching. Experimental results show that our system can achieve the 3D modeling of a range of mechanisms from simple mechanical toys to complex mechanism objects." ] }
1701.07579
2583286606
Linear batch codes and codes for private information retrieval (PIR) with a query size @math and a restricted size @math of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of @math or of @math by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.
In this work, we study (primitive, multiset) batch codes with restricted size of the recovery sets, as they are defined in @cite_7 (see also @cite_1 ).
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2951944286", "1893105184" ], "abstract": [ "Consider a large database of @math data items that need to be stored using @math servers. We study how to encode information so that a large number @math of read requests can be performed in parallel while the rate remains constant (and ideally approaches one). This problem is equivalent to the design of multiset Batch Codes introduced by Ishai, Kushilevitz, Ostrovsky and Sahai [17]. We give families of multiset batch codes with asymptotically optimal rates of the form @math and a number of servers @math scaling polynomially in the number of read requests @math . An advantage of our batch code constructions over most previously known multiset batch codes is explicit and deterministic decoding algorithms and asymptotically optimal fault tolerance. Our main technical innovation is a graph-theoretic method of designing multiset batch codes using dense bipartite graphs with no small cycles. We modify prior graph constructions of dense, high-girth graphs to obtain our batch code results. We achieve close to optimal tradeoffs between the parameters for bipartite graph based batch codes.", "We present new upper bounds on the parameters of batch codes with restricted query size. These bounds are an improvement on the Singleton bound. The techniques for derivations of these bounds are based on the ideas in the literature for codes with locality. By employing additional ideas, we obtain further improvements, which are specific for batch codes." ] }
1701.07579
2583286606
Linear batch codes and codes for private information retrieval (PIR) with a query size @math and a restricted size @math of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of @math or of @math by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.
This setup first appeared in @cite_14 . It was shown therein that the minimum distance @math of a batch code satisfies @math . It is worth mentioning that batch codes are closely related to locally repairable codes, which have been extensively studied in the context of distributed data storage. The main difference between them is that batch codes aim at the reconstruction of information symbols in @math , while locally repairable codes aim at the recovery of coded symbols in @math .
{ "cite_N": [ "@cite_14" ], "mid": [ "2097444899" ], "abstract": [ "In an application, where a client wants to obtain many symbols from a large database, it is often desirable to balance the load. Batch codes (introduced by in STOC 2004) do exactly that: the large database is divided between many servers, so that the client has to only make a small number of queries to every server to obtain sufficient information to reconstruct all desired symbols." ] }
1701.07579
2583286606
Linear batch codes and codes for private information retrieval (PIR) with a query size @math and a restricted size @math of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of @math or of @math by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.
There are a number of bounds on the parameters of batch codes in the literature, but it is often difficult to compare them due to slight variations in the models and assumptions made. For instance, it is proven in @cite_7 that for a linear @math batch code over @math , In particular, when the code is systematic, the bound can be tightened a bit, as follows: If the queries in Definition are restricted to @math , then the corresponding code is called an @math code for private information retrieval (PIR) @cite_12 , or simply an @math PIR code. In particular, all batch codes are PIR codes with the corresponding parameters. It should be mentioned that the proofs of these two bounds in @cite_7 work analogously for PIR codes, and therefore the bounds hold for general and systematic @math PIR codes, respectively.
{ "cite_N": [ "@cite_12", "@cite_7" ], "mid": [ "411845244", "1893105184" ], "abstract": [ "Private information retrieval (PIR) protocols allow a user to retrieve a data item from a database without revealing any information about the identity of the item being retrieved. Specifically, in information-theoretic @math -server PIR, the database is replicated among @math non-communicating servers, and each server learns nothing about the item retrieved by the user. The cost of PIR protocols is usually measured in terms of their communication complexity, which is the total number of bits exchanged between the user and the servers, and storage overhead, which is the ratio between the total number of bits stored on all the servers and the number of bits in the database. Since single-server information-theoretic PIR is impossible, the storage overhead of all existing PIR protocols is at least @math . In this work, we show that information-theoretic PIR can be achieved with storage overhead arbitrarily close to the optimal value of @math , without sacrificing the communication complexity. Specifically, we prove that all known @math -server PIR protocols can be efficiently emulated, while preserving both privacy and communication complexity but significantly reducing the storage overhead. To this end, we distribute the @math bits of the database among @math servers, each storing @math coded bits (rather than replicas). For every fixed @math , the resulting storage overhead @math approaches @math as @math grows; explicitly we have @math . Moreover, in the special case @math , the storage overhead is only @math . In order to achieve these results, we introduce and study a new kind of binary linear codes, called here @math -server PIR codes. We then show how such codes can be constructed, and we establish several bounds on the parameters of @math -server PIR codes. Finally, we briefly discuss extensions of our results to nonbinary alphabets, to robust PIR, and to @math -private PIR.", "We present new upper bounds on the parameters of batch codes with restricted query size. These bounds are an improvement on the Singleton bound. The techniques for derivations of these bounds are based on the ideas in the literature for codes with locality. By employing additional ideas, we obtain further improvements, which are specific for batch codes." ] }
1701.07579
2583286606
Linear batch codes and codes for private information retrieval (PIR) with a query size @math and a restricted size @math of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of @math or of @math by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.
Binary simplex codes of length @math are shown to be optimal batch codes with parameters @math , @math and @math (for any @math ) @cite_4 , yet those codes exist only for very specific parameters.
{ "cite_N": [ "@cite_4" ], "mid": [ "1623581787" ], "abstract": [ "In this paper, we study a construction of binary switch codes. A switch code is a code such that a multi-set request of information symbols can be simultaneously recovered from disjoint sets of codeword symbols. Our construction is optimal in the sense that it has the smallest codeword length given its average encoding degree, which is logarithmic in the code dimension. Moreover, the number of queries needed to recover any information symbol in the request is at most 2. As a result, our construction is the first family of switch codes with low encoding and decoding complexity." ] }
1701.07579
2583286606
Linear batch codes and codes for private information retrieval (PIR) with a query size @math and a restricted size @math of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of @math or of @math by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.
In Section , we present constructions of binary PIR codes for arbitrary @math and @math achieving rate @math for @math , similar to their counterparts in @cite_10 . For @math , these codes are batch codes. The achieved rate is close to optimal, especially for small values of @math and @math .
{ "cite_N": [ "@cite_10" ], "mid": [ "1678309266" ], "abstract": [ "The @math th coordinate of an @math code is said to have locality @math and availability @math if there exist @math disjoint groups, each containing at most @math other coordinates that can together recover the value of the @math th coordinate. This property is particularly useful for codes for distributed storage systems because it permits local repair and parallel accesses of hot data. In this paper, for any positive integers @math and @math , we construct a binary linear code of length @math which has locality @math and availability @math for all coordinates. The information rate of this code attains @math , which is always higher than that of the direct product code, the only known construction that can achieve arbitrary locality and availability." ] }
1701.07579
2583286606
Linear batch codes and codes for private information retrieval (PIR) with a query size @math and a restricted size @math of the reconstruction sets are studied. New bounds on the parameters of such codes are derived for small values of @math or of @math by providing corresponding constructions. By building on the ideas of Cadambe and Mazumdar, a new bound in a recursive form is derived for batch codes and PIR codes.
A special case of @math batch and PIR codes, where the size of the recovery sets @math is not restricted (for example, it can be assumed that @math ), is studied in @cite_1 . Let @math be the shortest length @math of any systematic linear batch code with unrestricted size of the recovery set, and @math be the shortest length @math of any linear systematic PIR code with unrestricted size of the recovery set. Then the optimal redundancy of batch and PIR codes, respectively, is defined as @math and @math .
{ "cite_N": [ "@cite_1" ], "mid": [ "2951944286" ], "abstract": [ "Consider a large database of @math data items that need to be stored using @math servers. We study how to encode information so that a large number @math of read requests can be performed in parallel while the rate remains constant (and ideally approaches one). This problem is equivalent to the design of multiset Batch Codes introduced by Ishai, Kushilevitz, Ostrovsky and Sahai [17]. We give families of multiset batch codes with asymptotically optimal rates of the form @math and a number of servers @math scaling polynomially in the number of read requests @math . An advantage of our batch code constructions over most previously known multiset batch codes is explicit and deterministic decoding algorithms and asymptotically optimal fault tolerance. Our main technical innovation is a graph-theoretic method of designing multiset batch codes using dense bipartite graphs with no small cycles. We modify prior graph constructions of dense, high-girth graphs to obtain our batch code results. We achieve close to optimal tradeoffs between the parameters for bipartite graph based batch codes." ] }
1701.07017
2951119709
Multi-dimensional magnetic resonance spectroscopy is an important tool for studying molecular structures, interactions and dynamics in bio-engineering. The data acquisition time, however, is relatively long, and non-uniform sampling can be applied to reduce this time. To obtain the full spectrum, a reconstruction method based on Vandermonde factorization is proposed. This method exploits a general signal property in magnetic resonance spectroscopy: its time domain signal is approximated by a sum of a few exponentials. Results on synthetic and realistic data show that the new approach can achieve faithful spectrum reconstruction and outperforms the state-of-the-art low-rank Hankel matrix method.
Let @math be a Hankel operator which maps a vector @math to a Hankel matrix @math with @math as follows. In particular, we denote the Hankel operator by @math instead of @math in the case @math . The LRHMC @cite_9 is based on the observation that the Hankel matrix @math constructed from the FID @math is low rank if the number of spectral peaks is much smaller than the number of data points in the whole spectrum. Hence, the reconstruction problem can be formulated as the low-rank matrix completion problem, where @math is the acquired NUS FID data, @math is an operator representing the NUS schedule, and @math is the nuclear norm, defined as the sum of the matrix singular values. The efficiency of LRHMC has been verified on numerical simulations and real MRS data @cite_9 . However, we will show in Section that the Hankel matrix of the FID admits a Vandermonde factorization, and experimental results in Section show that the new approach exploiting this factorization achieves much better reconstruction than LRHMC from the same NUS data.
{ "cite_N": [ "@cite_9" ], "mid": [ "2109163609" ], "abstract": [ "Accelerated multi-dimensional NMR spectroscopy is a prerequisite for high-throughput applications, studying short-lived molecular systems and monitoring chemical reactions in real time. Non-uniform sampling is a common approach to reduce the measurement time. Here, a new method for high-quality spectra reconstruction from non-uniformly sampled data is introduced, which is based on recent developments in the field of signal processing theory and uses the so far unexploited general property of the NMR signal, its low rank. Using experimental and simulated data, we demonstrate that the low-rank reconstruction is a viable alternative to the current state-of-the-art technique compressed sensing. In particular, the low-rank approach is good in preserving of low-intensity broad peaks, and thus increases the effective sensitivity in the reconstructed spectra." ] }
1701.07174
2583680099
Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice among them is to specifically align the facial area, based on prior knowledge of human face structure, before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the designing and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge on facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used for driving the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single model based methods.
Recently, the introduction of deep learning models has greatly promoted the development of face recognition technology. Records of recognition accuracy on several major challenging benchmarks have been broken repeatedly since the Facebook DeepFace system @cite_38 demonstrated the effectiveness of the data-driven deep learning paradigm for face recognition. Many deep models of various structures, especially CNNs, have since been proposed for face recognition. Exemplary works include the DeepID track of models @cite_6 @cite_3 @cite_19 , FaceNet @cite_21 , etc. It is widely accepted that, compared with traditional handcrafted features such as high-dimensional LBP @cite_18 , or features learned by imposing artificially designed constraints such as the Bayesian face @cite_33 and GaussianFace @cite_4 , deep features learned automatically and directly from personal identity clues are clearly more advantageous in terms of both discriminative power and robustness.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_33", "@cite_21", "@cite_3", "@cite_6", "@cite_19" ], "mid": [ "2145287260", "", "2950005842", "170472577", "2096733369", "1950843348", "", "2140609507" ], "abstract": [ "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.", "", "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalize to unseen domain. Specifically, the accuracy of our algorithm achieves an impressive accuracy rate of 98.52 on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53 ) on LFW is surpassed.", "In this paper, we revisit the classical Bayesian face recognition method by Baback and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this \"difference\" formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4 test accuracy on the challenging Labeled Face in Wild (LFW) dataset. 
Comparing with current best commercial system, we reduced the error rate by 10 .", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set.", "", "The state-of-the-art of face recognition has been significantly advanced by the emergence of deep learning. Very deep neural networks recently achieved great success on general object recognition because of their superb learning capacity. This motivates us to investigate their effectiveness on face recognition. This paper proposes two very deep neural network architectures, referred to as DeepID3, for face recognition. These two architectures are rebuilt from stacked convolution and inception layers proposed in VGG net and GoogLeNet to make them suitable to face recognition. Joint face identification-verification supervisory signals are added to both intermediate and final feature extraction layers during training. An ensemble of the proposed two architectures achieves 99.53 LFW face verification accuracy and 96.0 LFW rank-1 face identification accuracy, respectively. A further discussion of LFW face verification result is given in the end." ] }
1701.07174
2583680099
Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice among them is to specifically align the facial area, based on prior knowledge of human face structure, before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the designing and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge on facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used for driving the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single model based methods.
In the work on FaceNet @cite_21 , Schroff et al. tried to avoid explicit face alignment and used an extremely large training dataset, consisting of over 200 million images of 8 million identities, to realize highly pose-invariant facial feature extraction. Nevertheless, they still found that adding face alignment during the testing stage further increased recognition accuracy. To a certain extent, this shows that explicit face alignment remains indispensable even in a deep learning based face recognition framework. Interestingly, despite its significance, face alignment is usually implemented in most existing systems as an independent landmark localization process followed by several artificially defined transformation principles. The automatic learning of optimal geometric transformations for face recognition has been largely overlooked, even though the face alignment process has become the greatest impediment to end-to-end training of face recognition models.
{ "cite_N": [ "@cite_21" ], "mid": [ "2096733369" ], "abstract": [ "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors." ] }
1701.06854
2954108249
We propose a convolutional neural network (ConvNet) based approach for learning local image descriptors which can be used for significantly improved patch matching and 3D reconstructions. A multi-resolution ConvNet is used for learning keypoint descriptors. We also propose a new dataset consisting of an order of magnitude more scenes, images, and positive and negative correspondences than the currently available Multi-View Stereo (MVS) [18] dataset. The new dataset also has better coverage of the overall viewpoint, scale, and lighting changes in comparison to the MVS dataset. We evaluate our approach on publicly available datasets, such as the Oxford Affine Covariant Regions Dataset (ACRD) [12], MVS [18], Synthetic [6] and Strecha [15] datasets, to quantify the image descriptor performance. Scenes from the Oxford ACRD, MVS and Synthetic datasets are used for evaluating the patch matching performance of the learnt descriptors, while the Strecha dataset is used to evaluate the 3D reconstruction task. Experiments show that the proposed descriptor outperforms the current state-of-the-art descriptors in both evaluation tasks.
Several papers in the literature address the challenges involved in designing image descriptors that are in turn used to find image correspondences via local patch matching. These include traditional hand-crafted descriptors such as SIFT @cite_9 and SURF @cite_17 , and more recent ConvNet based descriptors such as DeepDesc @cite_6 , DeepCompare @cite_24 , MatchNet @cite_3 , and TFeat @cite_23 . Learning descriptors for local patches using ConvNets was attempted early on by @cite_20 , but was not followed up due to numerous practical issues and limited evaluation. However, with the recent success of ConvNets and deep learning, matching local image patches via learned descriptors has become a widespread line of study, and many ConvNet based architectures have been proposed @cite_24 @cite_2 @cite_14 @cite_23 . It has been shown that descriptors learned using Siamese ConvNet architectures considerably improve matching performance @cite_24 @cite_2 @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_3", "@cite_6", "@cite_24", "@cite_23", "@cite_2", "@cite_20", "@cite_17" ], "mid": [ "", "2151103935", "1929856797", "1869500417", "1955055330", "2612112834", "", "190584210", "1677409904" ], "abstract": [ "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed Match-Net, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available.", "Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. 
Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.", "In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets.", "", "", "A magnetic head is provided with an apex portion having a first and second chamfered portion, both formed in a C core. The first chamfered portion has a first apex angle alpha at a position corresponding to a predetermined gap depth. The first chamfered portion also has a predetermined apex length. The second chamfered portion is contiguous to the first chamfered portion and has second apex angle beta which is smaller than the first apex angle alpha . The first apex angle alpha is greater than or equal to 70 DEG and less than 80 DEG .", "In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance." ] }
1701.06854
2954108249
We propose a convolutional neural network (ConvNet) based approach for learning local image descriptors which can be used for significantly improved patch matching and 3D reconstructions. A multi-resolution ConvNet is used for learning keypoint descriptors. We also propose a new dataset consisting of an order of magnitude more scenes, images, and positive and negative correspondences than the currently available Multi-View Stereo (MVS) [18] dataset. The new dataset also has better coverage of the overall viewpoint, scale, and lighting changes in comparison to the MVS dataset. We evaluate our approach on publicly available datasets, such as the Oxford Affine Covariant Regions Dataset (ACRD) [12], MVS [18], Synthetic [6] and Strecha [15] datasets, to quantify the image descriptor performance. Scenes from the Oxford ACRD, MVS and Synthetic datasets are used for evaluating the patch matching performance of the learnt descriptors, while the Strecha dataset is used to evaluate the 3D reconstruction task. Experiments show that the proposed descriptor outperforms the current state-of-the-art descriptors in both evaluation tasks.
A few papers in the literature study patch matching as a task @cite_24 @cite_3 , where the feature layers (a Siamese network) and the metric learning layers (fully connected layers) are jointly learned in an end-to-end fashion. ConvNets of this type cannot be used as general descriptors for tasks other than patch matching, such as reconstruction. In contrast, @cite_6 uses the features extracted at the output of the Siamese network without learning any non-linear decision network or metric learning layer. Descriptors of this type are generic in nature and can be used as drop-in replacements for traditional descriptors in many tasks, including keypoint matching, 3D reconstruction, and tracking. Since no comparison metric is learned, a generic metric such as the @math distance is used to compare patches and train the network. Learning feature descriptors from triplets of patches was investigated in @cite_23 using shallow networks in order to reduce descriptor extraction time. Similar to @cite_6 @cite_23 , the aim of the proposed approach is to extract descriptors for local image patches that can be used for 3D reconstruction.
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_6", "@cite_3" ], "mid": [ "1955055330", "2612112834", "1869500417", "1929856797" ], "abstract": [ "In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets.", "", "Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.", "Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed Match-Net, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available." ] }
1701.06854
2954108249
We propose a convolutional neural network (ConvNet) based approach for learning local image descriptors which can be used for significantly improved patch matching and 3D reconstructions. A multi-resolution ConvNet is used for learning keypoint descriptors. We also propose a new dataset consisting of an order of magnitude more scenes, images, and positive and negative correspondences than the currently available Multi-View Stereo (MVS) [18] dataset. The new dataset also has better coverage of the overall viewpoint, scale, and lighting changes in comparison to the MVS dataset. We evaluate our approach on publicly available datasets, such as the Oxford Affine Covariant Regions Dataset (ACRD) [12], MVS [18], Synthetic [6] and Strecha [15] datasets, to quantify the image descriptor performance. Scenes from the Oxford ACRD, MVS and Synthetic datasets are used for evaluating the patch matching performance of the learnt descriptors, while the Strecha dataset is used to evaluate the 3D reconstruction task. Experiments show that the proposed descriptor outperforms the current state-of-the-art descriptors in both evaluation tasks.
Inspired by the multi-bank architecture used in human pose estimation @cite_0 , the proposed network uses three banks to encode scale variations of the image patches. The banks share weights, so the scaled patch inputs undergo the same transformation before being combined and processed further, which makes the proposed network more robust to scale changes. A similar multi-resolution architecture has been proposed as a variant (the central-surround two-stream model) in @cite_24 ; there, the multi-resolution streams produce independent outputs that are combined by the learned metric layers. In the current literature, this type of architecture has not been studied for stand-alone descriptors.
{ "cite_N": [ "@cite_0", "@cite_24" ], "mid": [ "2952422028", "1955055330" ], "abstract": [ "This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.", "In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets." ] }
1701.06991
2582396747
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. A possible solution employs direct device-to-device (D2D) communications, which is particularly promising when the two terminals involved in the communications are located in close proximity. The D2D communications should coexist with other transmissions, so they must be carefully scheduled in order to avoid harmful interference. In this paper, we analyze how distributed context awareness, obtained by observing a few local channel and topology parameters, can be used to adaptively exploit D2D communications. We develop a rigorous theoretical analysis to quantify the balance between the gain offered by a D2D transmission and its impact on the other network communications. Based on this analysis, we derive two theorems that define the optimal strategy to be employed, in terms of throughput maximization, when a single or multiple transmit power levels are available for the D2D communications. We compare this strategy to the state of the art in the same network scenario, showing how context awareness can be exploited to achieve a higher sum throughput and improved fairness.
Direct communications among mobile terminals have been envisioned by 3GPP as a promising way to improve network performance. D2D proximity services can in fact either reduce the amount of traffic handled by the BSs, or provide service beyond cellular coverage and/or in emergency scenarios where the core network may be unavailable @cite_10 @cite_14 . Establishing and maintaining D2D connections entails a set of technical challenges, including peer discovery, resource allocation, interference management, and synchronization, which are presented and discussed in @cite_3 . Multiple-input multiple-output (MIMO) D2D communications are investigated in @cite_0 , whereas some pricing models are illustrated in @cite_7 . An opportunistic multi-hop forwarding technique is presented in @cite_21 : the main aim here is to extend cellular coverage by letting a mobile terminal forward data packets to/from another terminal that is not within range of a BS. Similar works along this line of research seek to reduce the density of the BSs.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_21", "@cite_3", "@cite_0", "@cite_10" ], "mid": [ "", "1969281101", "2146268021", "1990557017", "2143487028", "" ], "abstract": [ "", "In a conventional cellular system, devices are not allowed to directly communicate with each other in the licensed cellular bandwidth and all communications take place through the base stations. In this article, we envision a two-tier cellular network that involves a macrocell tier (i.e., BS-to-device communications) and a device tier (i.e., device-to-device communications). Device terminal relaying makes it possible for devices in a network to function as transmission relays for each other and realize a massive ad hoc mesh network. This is obviously a dramatic departure from the conventional cellular architecture and brings unique technical challenges. In such a two-tier cellular system, since the user data is routed through other users? devices, security must be maintained for privacy. To ensure minimal impact on the performance of existing macrocell BSs, the two-tier network needs to be designed with smart interference management strategies and appropriate resource allocation schemes. Furthermore, novel pricing models should be designed to tempt devices to participate in this type of communication. Our article provides an overview of these major challenges in two-tier networks and proposes some pricing schemes for different types of device relaying.", "With emerging demands for local area and popular content sharing services, multihop device-to-device communication is conceived as a vital component of next-generation cellular networks to improve spectral reuse, bring hop gains, and enhance system capacity. Ripening these benefits depends on fundamentally understanding its potential performance impacts and efficiently solving several main technical problems. Aiming to establish a new paradigm for the analysis and design of multihop D2D communications, in this article, we propose a dynamic graph optimization framework that enables the modeling of large-scale systems with multiple D2D pairs and node mobility patterns. By inherently modeling the main technological problems for multihop D2D communications, this framework benefits investigation of theoretical performance limits and studying the optimal system design. Furthermore, these achievable benefits are demonstrated by examples of simulations under a realistic multihop D2D communication underlaying cellular network.", "Device-to-device communication is likely to be added to LTE in 3GPP Release 12. In principle, exploiting direct communication between nearby mobile devices will improve spectrum utilization, overall throughput, and energy consumption, while enabling new peer-to-peer and location-based applications and services. D2D-enabled LTE devices can also become competitive for fallback public safety networks, which must function when cellular networks are not available or fail. Introducing D2D poses many challenges and risks to the long-standing cellular architecture, which is centered around the base station. We provide an overview of D2D standardization activities in 3GPP, identify outstanding technical challenges, draw lessons from initial evaluation studies, and summarize \"best practices\" in the design of a D2D-enabled air interface for LTE-based cellular networks", "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. 
It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.", "" ] }
1701.06991
2582396747
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. A possible solution employs direct device-to-device (D2D) communications, which is particularly promising when the two terminals involved in the communications are located in close proximity. The D2D communications should coexist with other transmissions, so they must be carefully scheduled in order to avoid harmful interference. In this paper, we analyze how distributed context awareness, obtained by observing a few local channel and topology parameters, can be used to adaptively exploit D2D communications. We develop a rigorous theoretical analysis to quantify the balance between the gain offered by a D2D transmission and its impact on the other network communications. Based on this analysis, we derive two theorems that define the optimal strategy to be employed, in terms of throughput maximization, when a single or multiple transmit power levels are available for the D2D communications. We compare this strategy to the state of the art in the same network scenario, showing how context awareness can be exploited to achieve a higher sum throughput and improved fairness.
Our work, on the contrary, does not aim at extending coverage, but rather at allowing direct communications between terminals in close proximity. The authors of @cite_5 propose a technique that organizes nodes into clusters by means of centralized scheduling and exploits D2D communications over an orthogonal channel. Conversely, we focus on a scenario with non-orthogonal spectrum sharing between terminals transmitting to the BS and D2D sources. In this kind of scenario, two main approaches have been proposed in the state of the art. The former lets D2D sources transmit only on temporarily free channels (overlay), thus causing no extra interference; the latter allows D2D transmissions also on already utilized channels, but limits the interference impact on the other ongoing communications (underlay). An overlay scheme is developed in @cite_15 , where D2D sources exploit the energy harvested from surrounding radio communications. Stochastic geometry tools are instead utilized in @cite_20 to analyze the performance of both overlay and underlay schemes in terms of network connectivity and coverage probability.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2134023064", "2076579100", "2089242380" ], "abstract": [ "Device-to-device (D2D) communications help improve the performance of wireless multicast services in cellular networks via cooperative retransmissions among multicast recipients within a cluster. Resource utilization efficiency should be taken into account in the design of D2D communication systems. To maximize resource efficiency of D2D retransmissions, there is a tradeoff between multichannel diversity and multicast gain. In this paper, by analyzing the relationship between the number of relays and minimal time-frequency resource cost on retransmissions, we derive a closed-form probability density function (pdf) for an optimal number of D2D relays. Motivated by the analysis, we then propose an intracluster D2D retransmission scheme with optimized resource utilization, which can adaptively select the number of cooperative relays performing multicast retransmissions and give an iterative subcluster partition algorithm to enhance retransmission throughput. Exploiting both multichannel diversity and multicast gain, the proposed scheme achieves a significant gain in terms of resource utilization if compared with its counterparts with a fixed number of relays.", "While cognitive radio enables spectrum-efficient wireless communication, radio frequency (RF) energy harvesting from ambient interference is an enabler for energy-efficient wireless communication. In this paper, we model and analyze cognitive and energy harvesting-based device-to-device (D2D) communication in cellular networks. The cognitive D2D transmitters harvest energy from ambient interference and use one of the channels allocated to cellular users (in uplink or downlink), which is referred to as the D2D channel, to communicate with the corresponding receivers. We investigate two spectrum access policies for cellular communication in the uplink or downlink, namely, random spectrum access (RSA) policy and prioritized spectrum access (PSA) policy. In RSA, any of the available channels including the channel used by the D2D transmitters can be selected randomly for cellular communication, while in PSA the D2D channel is used only when all of the other channels are occupied. A D2D transmitter can communicate successfully with its receiver only when it harvests enough energy to perform channel inversion toward the receiver, the D2D channel is free, and the signal-to-interference-plus-noise ratio @math at the receiver is above the required threshold; otherwise, an outage occurs for the D2D communication. We use tools from stochastic geometry to evaluate the performance of the proposed communication system model with general path-loss exponent in terms of outage probability for D2D and cellular users. We show that energy harvesting can be a reliable alternative to power cognitive D2D transmitters while achieving acceptable performance. Under the same @math outage requirements as for the non-cognitive case, cognitive channel access improves the outage probability for D2D users for both the spectrum access policies. When compared with the RSA policy, the PSA policy provides a better performance to the D2D users. Also, using an uplink channel provides improved performance to the D2D users in dense networks when compared to a downlink channel. 
For cellular users, the PSA policy provides almost the same outage performance as the RSA policy.", "Providing direct communications among a rapidly growing number of wireless devices within the coverage area of a cellular system is an attractive way of exploiting the proximity among them to enhance coverage and spectral and energy efficiency. However, such device-to-device (D2D) communications create a new type of interference in cellular systems, calling for rigorous system analysis and design to both protect mobile users (MUs) and guarantee the connectivity of devices. Motivated by the potential advantages of cognitive radio (CR) technology in detecting and exploiting underutilized spectrum, we investigate CR-assisted D2D communications in a cellular network as a viable solution for D2D communications, in which devices access the network with mixed overlay–underlay spectrum sharing. Our comprehensive analysis reveals several engineering insights useful to system design. We first derive bounds of pivotal performance metrics. For a given collision probability constraint, as the prime spectrum-sharing criterion, we also derive the maximum allowable density of devices. This captures the density of MUs and that of active macro base stations. Limited in spatial density, devices may not have connectivity among them. Nevertheless, it is shown that for the derived maximum allowable density, one should judiciously push a portion of devices into receiving mode in order to preserve the connectivity and to keep the isolation probability low. Furthermore, upper bounds on the cellular coverage probability are obtained incorporating load-based power allocation for both path-loss and fading-based cell association mechanisms, which are fairly accurate and consistent with our in-depth simulation results. Finally, implementation issues are discussed." ] }
1701.06991
2582396747
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. A possible solution employs direct device-to-device (D2D) communications, which is particularly promising when the two terminals involved in the communications are located in close proximity. The D2D communications should coexist with other transmissions, so they must be carefully scheduled in order to avoid harmful interference. In this paper, we analyze how a distributed context awareness, obtained by observing a few local channel and topology parameters, can be used to adaptively exploit D2D communications. We develop a rigorous theoretical analysis to quantify the balance between the gain offered by a D2D transmission and its impact on the other network communications. Based on this analysis, we derive two theorems that define the optimal strategy to be employed, in terms of throughput maximization, when a single or multiple transmit power levels are available for the D2D communications. We compare this strategy to the state of the art in the same network scenario, showing how context awareness can be exploited to achieve a higher sum throughput and improved fairness.
An interesting underlay approach comparable to ours is proposed in @cite_22 . Here, D2D communications are performed using uplink resources and employing power control in order to limit interference. Furthermore, D2D connections are allowed only between terminals located in close proximity, as in our work. However, the source terminal decides whether to transmit directly to its destination or to rely on the BS based only on topological information. This scheme avoids the need for channel sensing overhead, but it lacks the fundamental adaptivity of our approach. We briefly describe it in Sec. , where we compare its performance with that of our proposed strategy.
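To make the contrast concrete, a minimal sketch of such a topology-only criterion is given below; the function name, the distance inputs, and the bias threshold are illustrative assumptions, not the exact rule of @cite_22 :

```python
def select_mode(d_src_dst, d_src_bs, bias=1.0):
    """Topology-only mode selection: transmit directly (D2D) when the
    destination is closer than the BS, scaled by a bias factor that trades
    D2D opportunities against interference caused to cellular users."""
    return "D2D" if d_src_dst <= bias * d_src_bs else "cellular"

# Example: destination at 40 m, BS at 250 m -> direct D2D transmission.
assert select_mode(40.0, 250.0) == "D2D"
```

A rule of this form is static by construction: it never reacts to the instantaneous channel state, which is the gap the adaptive approach above targets.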
{ "cite_N": [ "@cite_22" ], "mid": [ "1982698541" ], "abstract": [ "Device-to-device (D2D) communication enables the user equipments (UEs) located in close proximity to bypass the cellular base stations (BSs) and directly connect to each other, and thereby, offload traffic from the cellular infrastructure. D2D communication can improve spatial frequency reuse and energy efficiency in cellular networks. This paper presents a comprehensive and tractable analytical framework for D2D-enabled uplink cellular networks with a flexible mode selection scheme along with truncated channel inversion power control. The developed framework is used to analyze and understand how the underlaying D2D communication affects the cellular network performance. Through comprehensive numerical analysis, we investigate the expected performance gains and provide guidelines for selecting the network parameters." ] }
1701.06991
2582396747
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. A possible solution employs direct device-to-device (D2D) communications, which is particularly promising when the two terminals involved in the communications are located in close proximity. The D2D communications should coexist with other transmissions, so they must be carefully scheduled in order to avoid harmful interference. In this paper, we analyze how a distributed context awareness, obtained by observing a few local channel and topology parameters, can be used to adaptively exploit D2D communications. We develop a rigorous theoretical analysis to quantify the balance between the gain offered by a D2D transmission and its impact on the other network communications. Based on this analysis, we derive two theorems that define the optimal strategy to be employed, in terms of throughput maximization, when a single or multiple transmit power levels are available for the D2D communications. We compare this strategy to the state of the art in the same network scenario, showing how context awareness can be exploited to achieve a higher sum throughput and improved fairness.
Overall, the main difference between our approach and the existing ones lies in our strategy to mitigate interference. We rely neither on a geography-based criterion, as in @cite_22 , which is easy to implement in a distributed fashion but is static and often over-restrictive, nor on a centralized optimization problem, as in @cite_5 @cite_16 @cite_2 @cite_1 , which achieves optimal solutions but needs full channel state information over all the involved channels.
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_2", "@cite_5", "@cite_16" ], "mid": [ "1982698541", "2062633627", "2037417110", "2134023064", "" ], "abstract": [ "Device-to-device (D2D) communication enables the user equipments (UEs) located in close proximity to bypass the cellular base stations (BSs) and directly connect to each other, and thereby, offload traffic from the cellular infrastructure. D2D communication can improve spatial frequency reuse and energy efficiency in cellular networks. This paper presents a comprehensive and tractable analytical framework for D2D-enabled uplink cellular networks with a flexible mode selection scheme along with truncated channel inversion power control. The developed framework is used to analyze and understand how the underlaying D2D communication affects the cellular network performance. Through comprehensive numerical analysis, we investigate the expected performance gains and provide guidelines for selecting the network parameters.", "In this paper, we propose using group partition and dynamic rate adaptation for scalable throughput optimization of capacity-region-aware device-to-device communications. We adopt network information theory that allows a receiving device to simultaneously decode multiple packets from multiple transmitting devices, as long as the vector of transmitting rates is inside the capacity region. Based on graph theory, devices are first partitioned into subgroups. To optimize the throughput of a subgroup, instead of directly solving an integer-linear programming problem, we propose using a fast iterative algorithm to select active devices and using aggression levels for rate adaptation based on channel state information. Simulation results show that the proposed algorithm is scalable and could significantly outperform the greedy algorithm by more than 50 .", "We develop a flexible and accurate framework for device-to-device (D2D) communication in the context of a conventional cellular network, which allows for time-frequency resources to be either shared or orthogonally partitioned between the two networks. Using stochastic geometry, we provide accurate expressions for SINR distributions and average rates, under an assumption of interference randomization via time and or frequency hopping, for both dedicated and shared spectrum approaches. We obtain analytical results in closed or semi-closed form in high SNR regime, that allow us to easily explore the impact of key parameters (e.g., the load and hopping probabilities) on the network performance. In particular, unlike other models, the expressions we obtain are tractable, i.e., they can be efficiently optimized without extensive simulation. Using these, we optimize the hopping probabilities for the D2D links, i.e., how often they should request a time or frequency slot. This can be viewed as an optimized lower bound to other more sophisticated scheduling schemes. We also investigate the optimal resource partitions between D2D and cellular networks when the dedicated spectrum approach is used.", "Device-to-device (D2D) communications help improve the performance of wireless multicast services in cellular networks via cooperative retransmissions among multicast recipients within a cluster. Resource utilization efficiency should be taken into account in the design of D2D communication systems. To maximize resource efficiency of D2D retransmissions, there is a tradeoff between multichannel diversity and multicast gain. 
In this paper, by analyzing the relationship between the number of relays and minimal time-frequency resource cost on retransmissions, we derive a closed-form probability density function (pdf) for an optimal number of D2D relays. Motivated by the analysis, we then propose an intracluster D2D retransmission scheme with optimized resource utilization, which can adaptively select the number of cooperative relays performing multicast retransmissions and give an iterative subcluster partition algorithm to enhance retransmission throughput. Exploiting both multichannel diversity and multicast gain, the proposed scheme achieves a significant gain in terms of resource utilization if compared with its counterparts with a fixed number of relays.", "" ] }
1701.06991
2582396747
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. A possible solution employs direct device-to-device (D2D) communications, which is particularly promising when the two terminals involved in the communications are located in close proximity. The D2D communications should coexist with other transmissions, so they must be carefully scheduled in order to avoid harmful interference. In this paper, we analyze how a distributed context awareness, obtained by observing a few local channel and topology parameters, can be used to adaptively exploit D2D communications. We develop a rigorous theoretical analysis to quantify the balance between the gain offered by a D2D transmission and its impact on the other network communications. Based on this analysis, we derive two theorems that define the optimal strategy to be employed, in terms of throughput maximization, when a single or multiple transmit power levels are available for the D2D communications. We compare this strategy to the state of the art in the same network scenario, showing how context awareness can be exploited to achieve a higher sum throughput and improved fairness.
Conversely, we create a distributed situational awareness by properly observing a few channel parameters, and exploit it through our analytical results to decide when and how a D2D connection can be established. We have already investigated the concept of situational awareness in a multi-cell scenario in @cite_19 . In that work, however, context awareness is based on statistical information, and decisions are taken based on the output of properly designed Bayesian Networks, without seeking the optimal solution.
{ "cite_N": [ "@cite_19" ], "mid": [ "2584539427" ], "abstract": [ "Device-to-device (D2D) communication is one of the most promising solutions to the dramatic increase of wireless networks traffic load. In D2D communications, mobile nodes can communicate in a semi-autonomous way, with minimal or no control by the base station (BS). In this context, we address the problem of the coexistence of cellular and D2D tiers in the uplink frequencies, where a D2D source is allowed to transmit without direct control of its scheduling by the base station (BS). In order to limit the interference, we add a punishment mechanism triggered by the BS to limit the activity of disturbing terminals. We propose a context- aware channel access mechanism for a D2D source, where the context-awareness is obtained by 1) observing the topology and the wireless transmissions in the proximity of the D2D source, and 2) exploiting the past knowledge learned thanks to a Bayesian network approach. To design the channel access mechanism, we study the tradeoff between maximizing the end-to-end throughput and minimizing the interference to the cellular tier. We then evaluate the performance improvement of the proposed solution, showing the effectiveness of the learning mechanism and the advantages of context awareness." ] }
1701.06991
2582396747
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. A possible solution employs direct device-to-device (D2D) communications, which is particularly promising when the two terminals involved in the communications are located in close proximity. The D2D communications should coexist with other transmissions, so they must be carefully scheduled in order to avoid harmful interference. In this paper, we analyze how a distributed context awareness, obtained by observing a few local channel and topology parameters, can be used to adaptively exploit D2D communications. We develop a rigorous theoretical analysis to quantify the balance between the gain offered by a D2D transmission and its impact on the other network communications. Based on this analysis, we derive two theorems that define the optimal strategy to be employed, in terms of throughput maximization, when a single or multiple transmit power levels are available for the D2D communications. We compare this strategy to the state of the art in the same network scenario, showing how context awareness can be exploited to achieve a higher sum throughput and improved fairness.
Several other strategies have appeared in the literature to enable D2D communications through an underlay approach. The authors of @cite_6 define a scheme to forward interference by means of D2D communications, in order to make it easier to apply interference cancellation schemes; similarly, in @cite_11 the BS relays the D2D communications to allow interference cancellation at the receiving nodes. In @cite_1 , graph theory is used to divide mobiles into subgroups, and throughput maximization is attained by employing multiuser detection and an iterative optimization algorithm. In @cite_12 , contract theory is leveraged to study the incentives to be granted to potential D2D users. Finally, in @cite_24 , various sharing schemes, both orthogonal and non-orthogonal, are investigated in a Manhattan grid topology based on the solution of a sum-rate optimization problem.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_24", "@cite_12", "@cite_11" ], "mid": [ "2062633627", "2065403303", "2130171753", "1590174667", "" ], "abstract": [ "In this paper, we propose using group partition and dynamic rate adaptation for scalable throughput optimization of capacity-region-aware device-to-device communications. We adopt network information theory that allows a receiving device to simultaneously decode multiple packets from multiple transmitting devices, as long as the vector of transmitting rates is inside the capacity region. Based on graph theory, devices are first partitioned into subgroups. To optimize the throughput of a subgroup, instead of directly solving an integer-linear programming problem, we propose using a fast iterative algorithm to select active devices and using aggression levels for rate adaptation based on channel state information. Simulation results show that the proposed algorithm is scalable and could significantly outperform the greedy algorithm by more than 50 .", "The ongoing densification in cellular networks has turned interference into a serious problem in future cellular networks. Since the interference experienced by close-by users in a cellular network is usually correlated, it can be cooperatively suppressed to improve user experience. In this article, we introduce the idea of cooperative interference cancellation (CIC) between close-by users using device-to-device (D2D) communications for the example of the upcoming 3rd Generation Partnership Program (3GPP) Long Term Evolution (LTE) Rel-12 D2D technology. We understand CIC as a new network interference management tool, capable of exploiting interference correlation to improve downlink throughput. We discuss possible deployment scenarios as well as theoretical and practical challenges. To each challenge we provide some possible solutions. Finally, a first feasibility analysis using numerical simulations is presented that demonstrates the potential gains of CIC.", "We consider Device-to-Device (D2D) communication underlaying cellular networks to improve local services. The system aims to optimize the throughput over the shared resources while fulfilling prioritized cellular service constraints. Optimum resource allocation and power control between the cellular and D2D connections that share the same resources are analyzed for different resource sharing modes. Optimality is discussed under practical constraints such as minimum and maximum spectral efficiency restrictions, and maximum transmit power or energy limitation. It is found that in most of the considered cases, optimum power control and resource allocation for the considered resource sharing modes can either be solved in closed form or searched from a finite set. The performance of the D2D underlay system is evaluated in both a single-cell scenario, and a Manhattan grid environment with multiple WINNER II A1 office buildings. The results show that by proper resource management, D2D communication can effectively improve the total throughput without generating harmful interference to cellular networks.", "Device-to-device (D2D) communication is viewed as one promising technology for boosting the capacity of wireless networks and the efficiency of resource management. D2D communication heavily depends on the participation of users in sharing contents. Thus, it is imperative to introduce new incentive mechanisms to motivate such user involvement. 
In this paper, a contract-theoretic approach is proposed to solve the problem of providing incentives for D2D communication in cellular networks. First, using the framework of contract theory, the users' preferences toward D2D communication are classified into a finite number of types, and the service trading between the base station and users is properly modeled. Next, necessary and sufficient conditions are derived to provide incentives for users' engagement in D2D communication. Finally, our analysis is extended to the case in which there is a continuum of users. Simulation results show that the contract can effectively incentivize users' participation, and increase capacity of the cellular network than the other mechanisms.", "" ] }
1701.06548
2950300355
We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.
The maximum entropy principle has a long history, with deep connections to many areas of machine learning including unsupervised learning, supervised learning, and reinforcement learning. In supervised learning, we can search for the model with maximum entropy subject to constraints on empirical statistics, which naturally gives rise to maximum likelihood in log-linear models (see for a review). Deterministic annealing @cite_1 is a general optimization approach that is widely applicable, avoids local minima, can minimize discrete objectives, and can be derived from the maximum entropy principle. Closely related to our work, deterministic annealing has been applied to train multilayer perceptrons, where an entropy-based regularizer is introduced and slowly annealed. However, the focus of that work is avoiding poor initialization and local minima, and while deterministic annealing is found to help, the improvement diminishes quickly as the number of hidden units exceeds eight.
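A minimal PyTorch sketch of the entropy-based confidence penalty discussed here; the coefficient name `beta` and this particular formulation are our assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    """Cross-entropy minus a scaled entropy bonus: penalizing low-entropy
    (over-confident) output distributions acts as a regularizer."""
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets)                        # standard cross-entropy
    entropy = -(log_probs.exp() * log_probs).sum(-1).mean()    # mean H(p(y|x))
    return ce - beta * entropy                                 # subtracting rewards high entropy

# Example: a batch of 4 samples over 3 classes.
loss = confidence_penalty_loss(torch.randn(4, 3), torch.tensor([0, 2, 1, 0]))
```

Slowly annealing `beta` during training would recover the flavour of the deterministic annealing approach discussed above.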
{ "cite_N": [ "@cite_1" ], "mid": [ "2161877964" ], "abstract": [ "The deterministic annealing approach to clustering and its extensions has demonstrated substantial performance improvement over standard supervised and unsupervised learning methods in a variety of important applications including compression, estimation, pattern recognition and classification, and statistical regression. The application-specific cost is minimized subject to a constraint on the randomness of the solution, which is gradually lowered. We emphasize the intuition gained from analogy to statistical physics. Alternatively the method is derived within rate-distortion theory, where the annealing process is equivalent to computation of Shannon's rate-distortion function, and the annealing temperature is inversely proportional to the slope of the curve. The basic algorithm is extended by incorporating structural constraints to allow optimization of numerous popular structures including vector quantizers, decision trees, multilayer perceptrons, radial basis functions, and mixtures of experts." ] }
1701.06521
2950848235
We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder. We utilise global image features extracted using a pre-trained convolutional neural network and incorporate them (i) as words in the source sentence, (ii) to initialise the encoder hidden state, and (iii) as additional data to initialise the decoder hidden state. In our experiments, we evaluate how these different strategies to incorporate global image features compare and which ones perform best. We also study the impact that adding synthetic multi-modal, multilingual data brings and find that the additional data have a positive impact on multi-modal models. We report new state-of-the-art results and our best models also significantly improve on a comparable phrase-based Statistical MT (PBSMT) model trained on the Multi30k data set according to all metrics evaluated. To the best of our knowledge, it is the first time a purely neural model significantly improves over a PBSMT model on all metrics evaluated on this data set.
Attention-based encoder-decoder models for MT have been actively investigated in recent years. Some researchers have studied how to improve attention mechanisms @cite_40 @cite_33 and how to train attention-based models to translate between many languages @cite_23 @cite_15 .
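For concreteness, here is a minimal sketch of the dot-product "global" attention these models build on; the shapes and names are illustrative, not taken from any specific cited system:

```python
import torch
import torch.nn.functional as F

def global_attention(decoder_state, encoder_states):
    """Dot-product global attention: score every source position against the
    current decoder state, softmax-normalize, and return the weighted context.
    decoder_state: (hidden,); encoder_states: (src_len, hidden)."""
    scores = encoder_states @ decoder_state   # (src_len,) alignment scores
    weights = F.softmax(scores, dim=0)        # attention distribution over source
    context = weights @ encoder_states        # (hidden,) context vector
    return context, weights

# Example: a 5-word source sentence with 8-dimensional hidden states.
ctx, w = global_attention(torch.randn(8), torch.randn(5, 8))
```

A "local" variant would restrict the softmax to a window of source positions around a predicted alignment point, as explored in @cite_40 .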
{ "cite_N": [ "@cite_40", "@cite_33", "@cite_23", "@cite_15" ], "mid": [ "2949335953", "", "2251743902", "2229833550" ], "abstract": [ "An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.", "", "In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.", "We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT'15 simultaneously and observe clear performance improvements over models trained on only one language pair. In particular, we observe that the proposed model significantly improves the translation quality of low-resource language pairs." ] }
1701.06767
2950422009
During the last ten years, the number of smartphones and mobile applications has been constantly growing. Android, iOS and Windows Mobile are three mobile platforms that cover almost all smartphones in the world in 2017. Developing a mobile app involves first choosing the platforms the app will run on, and then developing specific solutions (i.e., native apps) for each chosen platform using platform-related toolkits such as the Android SDK. A cross-platform mobile application is an app that runs on two or more mobile platforms. Several frameworks have been proposed to simplify the development of cross-platform mobile applications and to reduce development and maintenance costs. They are called cross-platform mobile app development frameworks. However, to our knowledge, the life-cycle and the quality of cross-platform mobile applications built using those frameworks have not been studied in depth. Our main goal is to first study the processes of development and maintenance of mobile applications built using cross-platform mobile app development frameworks, focusing particularly on the bug-fixing activity. Then, we aim at defining tools for automatically repairing bugs in cross-platform mobile applications.
Cross-platform mobile app development: In addition to open-source development frameworks such as PhoneGap, Xamarin and React-Native, academic researchers @cite_25 @cite_10 @cite_26 have proposed solutions with the goal of simplifying the development of cross-platform applications.
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_25" ], "mid": [ "", "2050192850", "2082656110" ], "abstract": [ "", "There is a multitude of mobile OS: iOS android, Windows Phone 8 and each OS provides its own standards and tools. This heterogeneity in the mobile domain forces developers to implement an application for e ach mobile platform. To achieve that, developers need t o master several languages (Java, Objective-C…). They also need to have several devices at their disposal (PC, Mac, many smartphones …). Then, after applications distributions, developers have to main tain several source codes. In this study, we tackle this problematic. Our goal is to soften the differences between each OS in order to simplify the development of cross-platform third-party applications. To achieve that, we have defined a framework called COMMON (Component Oriented programming for Mobile Multi OsiNtegration). This framework allows the integration of cross-platform components in any app lication (iOS android). To run our components on an y OS, we provide an implementation for each platform. However, to make their integrations easier, we als o provide a common public interface of each component, which is platform-independent. Besides, we provid e a common language, also platform-independent, allowing the integration and use of any component in any native application (iOS android). This language is based on annotations. Finally, we have implemented a cross-compiler, which translates the source code wr itten with our language to native source code: Obje ctiveC for iOS, Java for Android,… In this study, we hav e shown that our solution offers performance and memory consumption closed to native applications. Finally, with COMMON, mobile developers implement less lines of source code than with a native applic ation. In your test application, we have saved 30 .", "Abstract Smartphones provide a set of native functionalities and another set of functionalities available through third-party applications. The emergence of more and more actors, without standards to provide their devices or OS, stops the cross-platform development. Indeed, a developer would have to learn many programmatic languages and create many user interfaces for many devices. To resolve this problem, several solutions often consist in the creation of a com- mon SDK to only write the application once. Then, the application code is translated in native code for each target platform. In this paper, we propose a solution based on a component model. A set of configurable components is implemented for the targeted platforms. A component will have a common interface independent from the host OS. Finally, a new language will offer developers a single instruction call to any component through its interface. This instruction is common on any platforms to simplify the implementation of a cross-platform application." ] }
1701.06767
2950422009
During the last ten years, the number of smartphones and mobile applications has been constantly growing. Android, iOS and Windows Mobile are three mobile platforms that cover almost all smartphones in the world in 2017. Developing a mobile app involves first choosing the platforms the app will run on, and then developing specific solutions (i.e., native apps) for each chosen platform using platform-related toolkits such as the Android SDK. A cross-platform mobile application is an app that runs on two or more mobile platforms. Several frameworks have been proposed to simplify the development of cross-platform mobile applications and to reduce development and maintenance costs. They are called cross-platform mobile app development frameworks. However, to our knowledge, the life-cycle and the quality of cross-platform mobile applications built using those frameworks have not been studied in depth. Our main goal is to first study the processes of development and maintenance of mobile applications built using cross-platform mobile app development frameworks, focusing particularly on the bug-fixing activity. Then, we aim at defining tools for automatically repairing bugs in cross-platform mobile applications.
There are several works that classify, compare, and evaluate cross-platform mobile application development tools @cite_4 @cite_1 @cite_9 @cite_3 @cite_14 for building hybrid and native mobile apps. Our goal is to empirically study the life-cycle of mobile apps built using some of those tools, such as Xamarin or React-Native.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_3" ], "mid": [ "2066681327", "1530054426", "", "2164555494", "1964593581" ], "abstract": [ "People use an increasing number of consumer electronic devices to access their mobile apps. To enhance the applications' immersive user experience, these devices often expose APIs for accessing a wide array of sensors and domain-specific capabilities. Existing mobile application environments, however, only provide limited support for cross-device access of such APIs. To address this limitation, the Webinos platform was designed. Webinos is a virtualized Web-based application platform, aiming to support the collaboration of multiple devices within a single mobile application. In this paper we elaborate on the Webinos platform design. We discuss the encountered design challenges regarding portability, scalability, and privacy, and how these were mitigated.", "The fragmented smartphone market with at least five important mobile platforms makes native development of mobile applications (apps) a challenging and costly endeavour. Cross-platform development might alleviate this situation. Several cross-platform approaches have emerged, which we classify in a first step. In order to compare concrete cross-platform solutions, we compiled a set of criteria to assess cross-platform development approaches. Based on these criteria, we evaluated Web apps, apps developed with PhoneGap or Titanium Mobile, and – for comparison – natively developed apps. We present our findings as reference tables and generalize our results. Our criteria have proven to be viable for follow-up evaluations. With regard to the approaches, we found PhoneGap viable if very close resemblance to a native look & feel can be neglected.", "", "The number and type of mobile platforms is increasing. Each platform has a specific set of native functionalities (i.e., camera, compass) and provides a specific framework to implement mobile applications exploiting these functionalities. The new features offered by HTMLS together with the PhoneGap framework let the web be a potential candidate for multi-platform mobile development. However, programmers are still in charge of implementing the data flow, the control flow and the interaction. In this paper, we propose a development process to allow the implementation of portable web applications that use native device features. This process is based on the Model-View-View-Model architectural pattern and provides a framework that exploits the source code generated starting from the design of a State Transition Diagram. The state application logic is described exploiting Javascript. We also provide an example of generated multi-platform application, named Travel Guide.", "Mobiles are an integral part of daily life. With time, customers are expecting good and very versatile applications in less time. It is a big challenge to develop high performance mobile applications in this competitive market that would meet the expectation of customers. Mobile operating systems vendors are giving their best available resources for making applications in more convenient ways, although the development of new applications for each mobile operating system in short time is fairly a problem. Cross-platform mobile application development tools contribute in solving this problem largely. This paper presents a pragmatic comparison among four very popular cross platform tools, which are Rhodes, PhoneGap, DragonRad and MoSync. 
One of the main focuses of the comparison is to provide an overview on the availability of application programming interfaces, programming languages, supported mobile operating systems, licences, and integrated development environments. Furthermore, it also presents some critical points such as the factor of extensibility in tools and the effects that they may bring on market share. The comparison is aimed at supporting developers to make the right choice with respect to their needs constraints." ] }
1701.06767
2950422009
During last ten years, the number of smartphones and mobile applications has been constantly growing. Android, iOS and Windows Mobile are three mobile platforms that cover almost all smartphones in the world in 2017. Developing a mobile app involves first to choose the platforms the app will run, and then to develop specific solutions (i.e., native apps) for each chosen platform using platform-related toolkits such as AndroidSDK. Across-platform mobile application is an app that runs on two or more mobile platforms. Several frameworks have been proposed to simplify the development of cross-platform mobile applications and to reduce development and maintenance costs.They are called cross-platform mobile app development frameworks.However, to our knowledge, the life-cycle and the quality of cross-platforms mobile applications built using those frameworks have not been studied in depth. Our main goal is to first study the processes of development and maintenance of mobile applications built using cross-platform mobile app development frameworks, focusing particularly on the bug-fixing activity. Then, we aim at defining tools for automated repairing bugs from cross-platform mobile applications.
Empirical studies of cross-platform mobile apps: @cite_12 analyzed the energy consumption of cross-platform mobile development. Their results showed that adopting cross-platform frameworks as development tools always implies an increase in energy consumption, even when the final application is a real native application.
{ "cite_N": [ "@cite_12" ], "mid": [ "2559490865" ], "abstract": [ "App Store Analysis studies information about applications obtained from app stores. App stores provide a wealth of information derived from users that would not exist had the applications been distributed via previous software deployment methods. App Store Analysis combines this non-technical information with technical information to learn trends and behaviours within these forms of software repositories. Findings from App Store Analysis have a direct and actionable impact on the software teams that develop software for app stores, and have led to techniques for requirements engineering, release planning, software design, security and testing. This survey describes and compares the areas of research that have been explored thus far, drawing out common aspects, trends and directions future research should take to address open problems and challenges." ] }
1701.06767
2950422009
During last ten years, the number of smartphones and mobile applications has been constantly growing. Android, iOS and Windows Mobile are three mobile platforms that cover almost all smartphones in the world in 2017. Developing a mobile app involves first to choose the platforms the app will run, and then to develop specific solutions (i.e., native apps) for each chosen platform using platform-related toolkits such as AndroidSDK. Across-platform mobile application is an app that runs on two or more mobile platforms. Several frameworks have been proposed to simplify the development of cross-platform mobile applications and to reduce development and maintenance costs.They are called cross-platform mobile app development frameworks.However, to our knowledge, the life-cycle and the quality of cross-platforms mobile applications built using those frameworks have not been studied in depth. Our main goal is to first study the processes of development and maintenance of mobile applications built using cross-platform mobile app development frameworks, focusing particularly on the bug-fixing activity. Then, we aim at defining tools for automated repairing bugs from cross-platform mobile applications.
Hybrid mobile apps: @cite_13 focused on analyzing hybrid mobile apps (e.g., those that use PhoneGap) available on the Google Play app store and their metadata (i.e., user rankings and reviews). One of their findings is that the average end-user ratings of hybrid and native apps are similar (3.75 and 3.35 out of 5, respectively).
{ "cite_N": [ "@cite_13" ], "mid": [ "1989688890" ], "abstract": [ "One of the most intriguing challenges in mobile apps development is its fragmentation with respect to mobile platforms (e.g., Android, Apple iOS, Windows Phone). Large companies like IBM and Adobe and a growing community of developers advocate hybrid mobile apps development as a possible solution to mobile platforms fragmentation. Hybrid mobile apps are consistent across platforms and built on web standards. How hybrid apps are performing in production settings is still highly debated, with limited objective evidence.In this paper, we present the first realistic investigation into mobile hybrid apps through a solid empirical strategy. Our goal is exploratory and we aim at identifying, analysing, and understanding the traits and distinctions of publicly available hybrid mobile apps within their real-life context. The study has been conducted by mining 11,917 free apps and 3,041,315 reviews from the Google Play Store, and analyzing them from both a technical and end users' perception perspective. The results of this study build an objective and reproducible snapshot about how hybrid mobile development is performing \"in the wild\" in real projects." ] }
1701.06767
2950422009
During last ten years, the number of smartphones and mobile applications has been constantly growing. Android, iOS and Windows Mobile are three mobile platforms that cover almost all smartphones in the world in 2017. Developing a mobile app involves first to choose the platforms the app will run, and then to develop specific solutions (i.e., native apps) for each chosen platform using platform-related toolkits such as AndroidSDK. Across-platform mobile application is an app that runs on two or more mobile platforms. Several frameworks have been proposed to simplify the development of cross-platform mobile applications and to reduce development and maintenance costs.They are called cross-platform mobile app development frameworks.However, to our knowledge, the life-cycle and the quality of cross-platforms mobile applications built using those frameworks have not been studied in depth. Our main goal is to first study the processes of development and maintenance of mobile applications built using cross-platform mobile app development frameworks, focusing particularly on the bug-fixing activity. Then, we aim at defining tools for automated repairing bugs from cross-platform mobile applications.
Automated software repair: During the last ten years, several approaches have been proposed to repair C bugs, such as @cite_23 , or Java bugs, such as @cite_0 . The buggy programs that constitute evaluation benchmarks are typically libraries or command-line (console) applications.
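A minimal generate-and-validate loop in the spirit of such test-suite-driven repair systems; the mutation operators and helper names are hypothetical simplifications (GenProg, for instance, uses genetic programming over program ASTs rather than a flat random search):

```python
import random

def generate_and_validate(program, mutation_ops, test_suite, budget=1000):
    """Minimal generate-and-validate repair loop: apply a random mutation
    operator to the buggy program and accept the first variant that passes
    the whole test suite. `mutation_ops` is a list of functions mapping a
    program to a patched variant; `test_suite` is a list of predicates."""
    for _ in range(budget):
        candidate = random.choice(mutation_ops)(program)
        if all(test(candidate) for test in test_suite):
            return candidate   # a plausible patch
    return None                # no repair found within the budget
```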
{ "cite_N": [ "@cite_0", "@cite_23" ], "mid": [ "2344973853", "2145373440" ], "abstract": [ "We propose Nopol , an approach to automatic repair of buggy conditional statements (i.e., if-then-else statements). This approach takes a buggy program as well as a test suite as input and generates a patch with a conditional expression as output. The test suite is required to contain passing test cases to model the expected behavior of the program and at least one failing test case that reveals the bug to be repaired. The process of Nopol consists of three major phases. First, Nopol employs angelic fix localization to identify expected values of a condition during the test execution. Second, runtime trace collection is used to collect variables and their actual values, including primitive data types and objected-oriented features (e.g., nullness checks), to serve as building blocks for patch generation. Third, Nopol encodes these collected data into an instance of a Satisfiability Modulo Theory (SMT) problem; then a feasible solution to the SMT instance is translated back into a code patch. We evaluate Nopol on 22 real-world bugs (16 bugs with buggy if conditions and six bugs with missing preconditions) on two large open-source projects, namely Apache Commons Math and Apache Commons Lang. Empirical analysis on these bugs shows that our approach can effectively fix bugs with buggy if conditions and missing preconditions. We illustrate the capabilities and limitations of Nopol using case studies of real bug fixes.", "This paper describes GenProg, an automated method for repairing defects in off-the-shelf, legacy programs without formal specifications, program annotations, or special coding practices. GenProg uses an extended form of genetic programming to evolve a program variant that retains required functionality but is not susceptible to a given defect, using existing test suites to encode both the defect and required functionality. Structural differencing algorithms and delta debugging reduce the difference between this variant and the original program to a minimal repair. We describe the algorithm and report experimental results of its success on 16 programs totaling 1.25 M lines of C code and 120K lines of module code, spanning eight classes of defects, in 357 seconds, on average. We analyze the generated repairs qualitatively and quantitatively to demonstrate that the process efficiently produces evolved programs that repair the defect, are not fragile input memorizations, and do not lead to serious degradation in functionality." ] }
1701.06751
2582166514
Collective classification of vertices is a task of assigning categories to each vertex in a graph based on both vertex attributes and link structure. Nevertheless, some existing approaches do not use the features of neighbouring vertices properly, due to the noise introduced by these features. In this paper, we propose a graph-based recursive neural network framework for collective vertex classification. In this framework, we generate hidden representations from both attributes of vertices and representations of neighbouring vertices via recursive neural networks. Under this framework, we explore two types of recursive neural units, the naive recursive neural unit and the long short-term memory unit. We have conducted experiments on four real-world network datasets. The experimental results show that our framework with the long short-term memory model achieves better results and outperforms several competitive baseline methods.
There has been a growing trend to represent data using graphs @cite_20 . Discovering knowledge from graphs has become an exciting research area, covering tasks such as vertex classification @cite_18 and graph classification @cite_21 . Graph classification analyzes the properties of the graph as a whole, while vertex classification focuses on predicting the labels of vertices in the graph. In this paper, we discuss the problem of vertex classification. The mainstream approaches for vertex classification are collective vertex classification methods @cite_4 , which classify vertices using information provided by neighbouring vertices. The iterative classification approach @cite_4 models the label distribution of the neighbours as link features to facilitate classification. The label propagation approach @cite_15 assigns a probabilistic label to each vertex and then refines the probabilities using the graph structure. However, the labels of neighbouring vertices are not representative enough to capture all useful information. Some researchers have therefore tried to incorporate attributes of neighbouring vertices to improve classification performance. Nevertheless, as reported in @cite_10 @cite_17 , naively incorporating these features may reduce classification performance when the original features of the neighbouring vertices are too noisy.
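A minimal sketch of the label propagation idea (row-normalized neighbour averaging with clamped labels); the variable names and the fixed iteration count are our simplifying assumptions, and concrete formulations such as LNP differ in their neighbourhood weighting:

```python
import numpy as np

def label_propagation(adj, labels, mask, n_iter=50):
    """Propagate label distributions over a graph: repeatedly average the
    neighbours' distributions, clamping the known (labelled) vertices.
    adj: (n, n) adjacency; labels: (n, k) one-hot rows for labelled vertices
    (zeros elsewhere); mask: boolean (n,) marking the labelled vertices."""
    trans = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-12)
    probs = labels.astype(float).copy()
    for _ in range(n_iter):
        probs = trans @ probs        # each vertex averages its neighbours
        probs[mask] = labels[mask]   # clamp the ground-truth labels
    return probs.argmax(axis=1)      # predicted class per vertex
```

The clamping step is what distinguishes propagation from plain diffusion: the known labels keep re-injecting information at every iteration.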
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_15", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "", "2406128552", "1969198379", "2076008912", "2114507260", "2110224739" ], "abstract": [ "", "", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "In many practical data mining applications such as text classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi supervised learning algorithms have aroused considerable interests from the data mining and machine learning fields. In recent years, graph-based semi supervised learning has been becoming one of the most active research areas in the semi supervised learning community. In this paper, a novel graph-based semi supervised learning approach is proposed based on a linear neighborhood model, which assumes that each data point can be linearly reconstructed from its neighborhood. Our algorithm, named linear neighborhood propagation (LNP), can propagate the labels from the labeled points to the whole data set using these linear neighborhoods with sufficient smoothness. A theoretical analysis of the properties of LNP is presented in this paper. Furthermore, we also derive an easy way to extend LNP to out-of-sample data. Promising experimental results are presented for synthetic data, digit, and text classification tasks.", "A major challenge in indexing unstructured hypertext databases is to automatically extract meta-data that enables structured search using topic taxonomies, circumvents keyword ambiguity, and improves the quality of search and profile-based routing and filtering. Therefore, an accurate classifier is an essential component of a hypertext database. Hyperlinks pose new problems not addressed in the extensive text classification literature. Links clearly contain high-quality semantic clues that are lost upon a purely term-based classifier, but exploiting link information is non-trivial because it is noisy. Naive use of terms in the link neighborhood of a document can even degrade accuracy. Our contribution is to propose robust statistical models and a relaxation labeling technique for better classification by exploiting link information in a small neighborhood around documents. Our technique also adapts gracefully to the fraction of neighboring documents having known topics. We experimented with pre-classified samples from Yahoo! 1 and the US Patent Database 2 . In previous work, we developed a text classifier that misclassified only 13 of the documents in the well-known Reuters benchmark; this was comparable to the best results ever obtained. This classifier misclassified 36 of the patents, indicating that classifying hypertext can be more difficult than classifying text. Naively using terms in neighboring documents increased error to 38 ; our hypertext classifier reduced it to 21 . Results with the Yahoo! 
sample were more dramatic: the text classifier showed 68 error, whereas our hypertext classifier reduced this to only 21 .", "Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.", "As WWW grows at an increasing speed, a classifier targeted at hypertext has become in high demand. While document categorization is quite a mature, the issue of utilizing hypertext structure and hyperlinks has been relatively unexplored. In this paper, we propose a practical method for enhancing both the speed and the quality of hypertext categorization using hyperlinks. In comparison against a recently proposed technique that appears to be the only one of the kind, we obtained up to 18.5 of improvement in effectiveness while reducing the processing time dramatically. We attempt to explain through experiments what factors contribute to the improvement." ] }
1701.06751
2582166514
Collective classification of vertices is a task of assigning categories to each vertex in a graph based on both vertex attributes and link structure. Nevertheless, some existing approaches do not use the features of neighbouring vertices properly, due to the noise introduced by these features. In this paper, we propose a graph-based recursive neural network framework for collective vertex classification. In this framework, we generate hidden representations from both attributes of vertices and representations of neighbouring vertices via recursive neural networks. Under this framework, we explore two types of recursive neural units, the naive recursive neural unit and the long short-term memory unit. We have conducted experiments on four real-world network datasets. The experimental results show that our framework with the long short-term memory model achieves better results and outperforms several competitive baseline methods.
Recently, some researchers have analysed graphs using deep neural network technologies. DeepWalk @cite_0 is an unsupervised learning algorithm that learns vertex embeddings from the link structure alone; the content of each vertex is not considered. The convolutional neural network for graphs @cite_21 learns feature representations for graphs as a whole. Recurrent neural collective classification @cite_16 encodes neighbouring vertices via a recurrent neural network, which makes it hard to capture information from vertices that are more than a few steps away.
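A sketch of the truncated random walks at the core of DeepWalk; the helper names are ours, and the subsequent skip-gram training step is omitted:

```python
import random

def truncated_random_walks(neighbours, walk_length=10, walks_per_vertex=5):
    """Generate the truncated random walks that DeepWalk feeds to a skip-gram
    model, treating each walk as a 'sentence' of vertex ids.
    neighbours: dict mapping each vertex to a list of its neighbours."""
    walks = []
    for _ in range(walks_per_vertex):
        for start in neighbours:
            walk = [start]
            while len(walk) < walk_length and neighbours[walk[-1]]:
                walk.append(random.choice(neighbours[walk[-1]]))
            walks.append(walk)
    return walks

# Example on a 4-cycle; the walks could then feed any word-embedding trainer.
walks = truncated_random_walks({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]})
```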
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_16" ], "mid": [ "2154851992", "2406128552", "" ], "abstract": [ "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "" ] }
1701.06751
2582166514
Collective classification of vertices is the task of assigning categories to each vertex in a graph based on both vertex attributes and link structure. Nevertheless, some existing approaches do not use the features of neighbouring vertices properly, due to the noise these features introduce. In this paper, we propose a graph-based recursive neural network framework for collective vertex classification. In this framework, we generate hidden representations from both the attributes of vertices and the representations of neighbouring vertices via recursive neural networks. Under this framework, we explore two types of recursive neural units: the naive recursive neural unit and the long short-term memory unit. We have conducted experiments on four real-world network datasets. The experimental results show that our framework with the long short-term memory model achieves better results and outperforms several competitive baseline methods.
Recursive neural networks (RNNs) are a family of models that deal with tree-structured information. They have been applied to natural scene parsing @cite_11 and tree-structured sentence representation learning @cite_6 . Under this framework, representations are learned from both input features and the representations of child nodes. Graph structures are more widely used and more complicated than tree or sequence structures. Due to the lack of a natural order for processing vertices in a graph, few studies have investigated the vertex classification problem using recursive neural network techniques. The graph-based recursive neural network framework proposed in this paper generates the processing order for the neural network according to the vertex to be classified and the local graph structure.
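One way to induce such a processing order is to grow a breadth-first tree rooted at the vertex to be classified and evaluate representations from the leaves up. The following sketch assumes an adjacency-list graph and a fixed depth cutoff; the paper's actual construction may differ.

```python
from collections import deque

def bfs_tree(adj, root, max_depth=2):
    """Induce a processing tree for the recursive network: breadth-first
    search from the vertex to classify, recording each vertex's parent."""
    parent, depth, order = {root: None}, {root: 0}, [root]
    q = deque([root])
    while q:
        u = q.popleft()
        if depth[u] == max_depth:
            continue
        for v in adj[u]:
            if v not in parent:              # visit each vertex once
                parent[v], depth[v] = u, depth[u] + 1
                order.append(v)
                q.append(v)
    return parent, order

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
parent, order = bfs_tree(adj, root=0)
print(list(reversed(order)))  # [3, 2, 1, 0]: children are processed before parents
```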
{ "cite_N": [ "@cite_6", "@cite_11" ], "mid": [ "2104246439", "1423339008" ], "abstract": [ "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1 ). The features from the image parse tree outperform Gist descriptors for scene classification by 4 ." ] }
1701.06260
2582119505
Abstract This paper presents a new safety specification method that is robust against errors in the probability distribution of disturbances. Our proposed distributionally robust safe policy maximizes the probability of a system remaining in a desired set for all times, subject to the worst possible disturbance distribution in an ambiguity set. We propose a dynamic game formulation of constructing such policies and identify conditions under which a non-randomized Markov policy is optimal. Based on this existence result, we develop a practical design approach to safety-oriented stochastic controllers with limited information about disturbance distributions. However, an associated Bellman equation involves infinite-dimensional minimax optimization problems since the disturbance distribution may have a continuous density. To alleviate computational issues, we propose a duality-based reformulation method that converts the infinite-dimensional minimax problem into a semi-infinite program that can be solved using existing convergent algorithms. We prove that there is no duality gap, and that this approach thus preserves optimality. The results of numerical tests confirm that the proposed method is robust against distributional errors in disturbances, while a standard stochastic safety verification tool is not.
A probabilistic reachability tool for stochastic differential equations with jumps has been proposed; it uses a Markov chain approximation to propagate the transition probabilities of the Markov chain backward in time starting from a target set @cite_49 , @cite_43 , @cite_31 . In @cite_33 , barrier certificates are employed to calculate an upper bound of the probability that a system will reach a target set. Additionally, @cite_30 proposes a toolbox that supports expectation-based reachability problems associated with a class of continuous-time stochastic (hybrid) systems by extending the celebrated Hamilton--Jacobi--Isaacs reachability analysis @cite_42 , @cite_35 . A partial differential equation characterization of continuous-time stochastic reach-avoid problems is studied in @cite_5 based on the theory of discontinuous viscosity solutions. For discrete-time stochastic hybrid systems, an elegant dynamic programming approach has been proposed to compute the maximal probability of safety @cite_54 . This method has been extended to stochastic reach--avoid problems @cite_22 , stochastic hybrid games @cite_50 , and partially observable stochastic hybrid systems @cite_41 , @cite_46 . However, all the aforementioned methods are based on the possibly restrictive assumption that the probability distribution of disturbances is completely known.
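For intuition on the dynamic programming approach of @cite_54 cited above, the maximal probability of safety satisfies a Bellman recursion of the form V_t(x) = 1_K(x) * max_u E[V_{t+1}(f(x, u, w))]. Below is a toy discretized sketch of this recursion; the scalar dynamics x' = x + u + w, the grids, and the Gaussian disturbance are all illustrative assumptions rather than anything from the cited works.

```python
import numpy as np

# Toy setup: scalar dynamics x' = x + u + w, safe set K = [-1, 1].
xs = np.linspace(-2.0, 2.0, 81)                      # state grid
us = np.linspace(-0.5, 0.5, 11)                      # control grid
ws = np.linspace(-0.6, 0.6, 25)                      # disturbance grid
pw = np.exp(-ws**2 / (2 * 0.2**2)); pw /= pw.sum()   # discretized Gaussian weights
in_K = (np.abs(xs) <= 1.0).astype(float)

V = in_K.copy()                                      # V_T(x) = 1_K(x)
for _ in range(10):                                  # backward in time
    Q = np.zeros((xs.size, us.size))
    for j, u in enumerate(us):
        for k, w in enumerate(ws):
            xn = np.clip(xs + u + w, xs[0], xs[-1])  # next state, clipped to grid
            Q[:, j] += pw[k] * np.interp(xn, xs, V)  # E[V_{t+1}(x + u + w)]
    V = in_K * Q.max(axis=1)                         # V_t = 1_K(x) * max_u E[...]
print(V[np.abs(xs) <= 0.1])                          # safety probabilities near 0
```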
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_22", "@cite_41", "@cite_46", "@cite_54", "@cite_42", "@cite_43", "@cite_49", "@cite_50", "@cite_5", "@cite_31" ], "mid": [ "1582899597", "2116364955", "2033118636", "2084772009", "2054675873", "", "2080813827", "2123357397", "2499588876", "2124489106", "2154136656", "", "2506598146" ], "abstract": [ "Hamilton-Jacobi partial differential equations have many applications in the analysis of nondeterministic continuous and hybrid systems. Unfortunately, analytic solutions are seldom available and numerical approximation requires a great deal of programming infrastructure. In this paper we describe the first publicly available toolbox for approximating the solution of such equations, and discuss three examples of how these equations can be used in system analysis: cost to go, stochastic differential games, and stochastic hybrid systems. For each example we briefly summarize the relevant theory, describe the toolbox implementation, and provide results.", "We describe and implement an algorithm for computing the set of reachable states of a continuous dynamic game. The algorithm is based on a proof that the reachable set is the zero sublevel set of the viscosity solution of a particular time-dependent Hamilton-Jacobi-Isaacs partial differential equation. While alternative techniques for computing the reachable set have been proposed, the differential game formulation allows treatment of nonlinear systems with inputs and uncertain parameters. Because the time-dependent equation's solution is continuous and defined throughout the state space, methods from the level set literature can be used to generate more accurate approximations than are possible for formulations with potentially discontinuous solutions. A numerical implementation of our formulation is described and has been released on the web. Its correctness is verified through a two vehicle, three dimensional collision avoidance example for which an analytic solution is available.", "This paper presents a methodology for safety verification of continuous and hybrid systems in the worst-case and stochastic settings. In the worst-case setting, a function of state termed barrier certificate is used to certify that all trajectories of the system starting from a given initial set do not enter an unsafe region. No explicit computation of reachable sets is required in the construction of barrier certificates, which makes it possible to handle nonlinearity, uncertainty, and constraints directly within this framework. In the stochastic setting, our method computes an upper bound on the probability that a trajectory of the system reaches the unsafe set, a bound whose validity is proven by the existence of a barrier certificate. For polynomial systems, barrier certificates can be constructed using convex optimization, and hence the method is computationally tractable. Some examples are provided to illustrate the use of the method.", "We present a dynamic programming based solution to a probabilistic reach-avoid problem for a controlled discrete time stochastic hybrid system. We address two distinct interpretations of the reach-avoid problem via stochastic optimal control. In the first case, a sum-multiplicative cost function is introduced along with a corresponding dynamic recursion which quantifies the probability of hitting a target set at some point during a finite time horizon, while avoiding an unsafe set during each time step preceding the target hitting time. 
In the second case, we introduce a multiplicative cost function and a dynamic recursion which quantifies the probability of hitting a target set at the terminal time, while avoiding an unsafe set during the preceding time steps. In each case, optimal reach while avoid control policies are derived as the solution to an optimal control problem via dynamic programming. Computational examples motivated by two practical problems in the management of fisheries and finance are provided.", "When designing optimal controllers for any system, it is often the case that the true state of the system is unknown to the controller. Imperfect state information must be taken into account in the controller’s design in order to preserve its optimality. The same is true when performing reachability calculations. To estimate the probability that the state of a stochastic system reaches, or stays within, some set of interest in a given time horizon, it is necessary to find a controller that drives the system to that set with maximum probability, given the controller’s knowledge of the true state of the system. To date, little work has been done on stochastic reachability calculations with partially observable states. The work that has been done relies on converting the reachability optimization problem to one with an additive cost function, for which theoretical results are well known. Our approach is to preserve the multiplicative cost structure when deriving a sufficient statistic that reduces the problem to one of perfect state information. Our transformation includes a change of measure that simplifies the distribution of the sufficient statistic conditioned on its previous value. We develop a dynamic programming recursion for the solution of the equivalent perfect information problem, proving that the recursion is valid, an optimal solution exists, and results in the same solution as to the original problem. We also show that our results are equivalent to those for the reformulated additive cost problem, and so such a reformulation is not required.", "", "In this work, probabilistic reachability over a finite horizon is investigated for a class of discrete time stochastic hybrid systems with control inputs. A suitable embedding of the reachability problem in a stochastic control framework reveals that it is amenable to two complementary interpretations, leading to dual algorithms for reachability computations. In particular, the set of initial conditions providing a certain probabilistic guarantee that the system will keep evolving within a desired 'safe' region of the state space is characterized in terms of a value function, and 'maximally safe' Markov policies are determined via dynamic programming. These results are of interest not only for safety analysis and design, but also for solving those regulation and stabilization problems that can be reinterpreted as safety problems. The temperature regulation problem presented in the paper as a case study is one such case.", "We present a method to design controllers for safety specifications in hybrid systems. The hybrid system combines discrete event dynamics with nonlinear continuous dynamics: the discrete event dynamics model linguistic and qualitative information and naturally accommodate mode switching logic, and the continuous dynamics model the physical processes themselves, such as the continuous response of an aircraft to the forces of aileron and throttle. Input variables model both continuous and discrete control and disturbance parameters. 
We translate safety specifications into restrictions on the system's reachable sets of states. Then, using analysis based on optimal control and game theory for automata and continuous dynamical systems, we derive Hamilton-Jacobi equations whose solutions describe the boundaries of reachable sets. These equations are the heart of our general controller synthesis technique for hybrid systems, in which we calculate feedback control laws for the continuous and discrete variables, which guarantee that the hybrid system remains in the \"safe subset\" of the reachable set. We discuss issues related to computing solutions to Hamilton-Jacobi equations. Throughout, we demonstrate out techniques on examples of hybrid automata modeling aircraft conflict resolution, autopilot flight mode switching, and vehicle collision avoidance.", "", "In this paper, the problem of automated aircraft conflict prediction is studied for two-aircraft midair encounters. A model is introduced to predict the aircraft positions along some look-ahead time horizon, during which each aircraft is trying to follow a prescribed flight plan despite the presence of additive wind perturbations to its velocity. A spatial correlation structure is assumed for the wind perturbations such that the closer the two aircraft, the stronger the correlation between the perturbations to their velocities. Using this model, a method is introduced to evaluate the criticality of the encounter situation by estimating the probability of conflict, namely, the probability that the two aircraft come closer than a minimum allowed distance at some time instant during the look-ahead time horizon. The proposed method is based on the introduction of a Markov chain approximation of the stochastic processes modeling the aircraft motions. Several generalizations of the proposed approach are also discussed.", "We describe a framework for analyzing probabilistic reachability and safety problems for discrete time stochastic hybrid systems within a dynamic games setting. In particular, we consider finite horizon zero-sum stochastic games in which a control has the objective of reaching a target set while avoiding an unsafe set in the hybrid state space, and a rational adversary has the opposing objective. We derive an algorithm for computing the maximal probability of achieving the control objective, subject to the worst-case adversary behavior. From this algorithm, sufficient conditions of optimality are also derived for the synthesis of optimal control policies and worst-case disturbance strategies. These results are then specialized to the safety problem, in which the control objective is to remain within a safe set. We illustrate our modeling framework and computational approach using both a tutorial example with jump Markov dynamics and a practical application in the domain of air traffic management.", "", "" ] }
1701.06260
2582119505
Abstract This paper presents a new safety specification method that is robust against errors in the probability distribution of disturbances. Our proposed distributionally robust safe policy maximizes the probability of a system remaining in a desired set for all times, subject to the worst possible disturbance distribution in an ambiguity set. We propose a dynamic game formulation of constructing such policies and identify conditions under which a non-randomized Markov policy is optimal. Based on this existence result, we develop a practical design approach to safety-oriented stochastic controllers with limited information about disturbance distributions. However, an associated Bellman equation involves infinite-dimensional minimax optimization problems since the disturbance distribution may have a continuous density. To alleviate computational issues, we propose a duality-based reformulation method that converts the infinite-dimensional minimax problem into a semi-infinite program that can be solved using existing convergent algorithms. We prove that there is no duality gap, and that this approach thus preserves optimality. The results of numerical tests confirm that the proposed method is robust against distributional errors in disturbances, while a standard stochastic safety verification tool is not.
This work also closely relates to distributionally robust control, which is an emerging stochastic control method. This method is based on single-stage distributionally robust stochastic optimization that minimizes the worst-case cost, assuming that the probability distribution of uncertain variables lies within an ambiguity set of distributions (e.g., @cite_36 , @cite_23 , @cite_28 , @cite_29 , @cite_8 , @cite_37 ). For multi-stage problems, a distributionally robust Markov decision process (MDP) formulation has recently been developed, focusing on finite-state, finite-action MDPs @cite_0 , @cite_11 . For cases with moment uncertainty, @cite_2 investigates linear feedback strategies in linear-quadratic settings with risk constraints and proposes a semidefinite programming approach. We extend the theory of distributionally robust control to the case of continuous state spaces and apply it to reachability analysis and safety specifications.
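For a feel of the single-stage worst-case expectation underlying distributionally robust formulations, one tractable special case (purely illustrative; not the moment-based ambiguity sets or the duality-based reformulation discussed here) is a total-variation ball of radius eps around a nominal discrete distribution, where the adversary simply moves eps probability mass onto the most costly outcome:

```python
import numpy as np

def worst_case_expectation(costs, p_nominal, eps):
    """max_q E_q[cost] over the total-variation ball of radius eps around
    p_nominal (discrete support): strip eps mass from the cheapest outcomes
    and pile it onto the most expensive one."""
    order = np.argsort(costs)                  # cheapest outcomes first
    q = np.asarray(p_nominal, dtype=float).copy()
    removed = 0.0
    for i in order:
        take = min(q[i], eps - removed)
        q[i] -= take
        removed += take
        if removed >= eps:
            break
    q[order[-1]] += removed                    # worst outcome absorbs the mass
    return float(costs @ q)

costs = np.array([0.0, 1.0, 5.0])
p = np.array([0.6, 0.3, 0.1])
print(costs @ p, worst_case_expectation(costs, p, eps=0.2))  # 0.8 vs. 1.8
```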
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_36", "@cite_28", "@cite_29", "@cite_0", "@cite_23", "@cite_2", "@cite_11" ], "mid": [ "2125417745", "1968355947", "2558813012", "2021472931", "1967729795", "2152790647", "", "1855555568", "2962850106" ], "abstract": [ "Distributionally robust optimization is a paradigm for decision making under uncertainty where the uncertain problem data are governed by a probability distribution that is itself subject to uncertainty. The distribution is then assumed to belong to an ambiguity set comprising all distributions that are compatible with the decision maker's prior information. In this paper, we propose a unifying framework for modeling and solving distributionally robust optimization problems. We introduce standardized ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold. These ambiguity sets are highly expressive and encompass many ambiguity sets from the recent literature as special cases. They also allow us to characterize distributional families in terms of several classical and or robust statistical indicators that have not yet been studied in the context of robust optimization. We determine conditions under which distributionally robust optimization problems based on our standardized ambiguity sets are computationally tractable. We also provide tractable conservative approximations for problems that violate these conditions.", "Stochastic programming can effectively describe many decision-making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix). We demonstrate that for a wide range of cost functions the associated distributionally robust (or min-max) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a practical example of portfolio selection, where our framework leads to better-performing policies on the “true” distribution underlying the daily returns of financial assets.", "", "Classical formulations of the portfolio optimization problem, such as mean-variance or Value-at-Risk (VaR) approaches, can result in a portfolio extremely sensitive to errors in the data, such as mean and covariance matrix of the returns. In this paper we propose a way to alleviate this problem in a tractable manner. We assume that the distribution of returns is partially known, in the sense that onlybounds on the mean and covariance matrix are available. We define the worst-case Value-at-Risk as the largest VaR attainable, given the partial information on the returns' distribution. We consider the problem of computing and optimizing the worst-case VaR, and we show that these problems can be cast as semidefinite programs. 
We extend our approach to various other partial information on the distribution, including uncertainty in factor models, support constraints, and relative entropy information.", "In this paper, we discuss linear programs in which the data that specify the constraints are subject to random uncertainty. A usual approach in this setting is to enforce the constraints up to a given level of probability. We show that, for a wide class of probability distributions (namely, radial distributions) on the data, the probability constraints can be converted explicitly into convex second-order cone constraints; hence, the probability-constrained linear program can be solved exactly with great efficiency. Next, we analyze the situation where the probability distribution of the data is not completely specified, but is only known to belong to a given class of distributions. In this case, we provide explicit convex conditions that guarantee the satisfaction of the probability constraints for any possible distribution belonging to the given class.", "We consider Markov decision processes where the values of the parameters are uncertain. This uncertainty is described by a sequence of nested sets (that is, each set contains the previous one), each of which corresponds to a probabilistic guarantee for a different confidence level. Consequently, a set of admissible probability distributions of the unknown parameters is specified. This formulation models the case where the decision maker is aware of and wants to exploit some (yet imprecise) a priori information of the distribution of parameters, and it arises naturally in practice where methods for estimating the confidence region of parameters abound. We propose a decision criterion based on distributional robustness: the optimal strategy maximizes the expected total reward under the most adversarial admissible probability distributions. We show that finding the optimal distributionally robust strategy can be reduced to the standard robust MDP where parameters are known to belong to a single uncertainty set; hence, it can be computed in polynomial time under mild technical conditions.", "", "We investigate the control of constrained stochastic linear systems when faced with limited information regarding the disturbance process, i.e., when only the first two moments of the disturbance distribution are known. We consider two types of distributionally robust constraints. In the first case, we require that the constraints hold with a given probability for all disturbance distributions sharing the known moments. These constraints are commonly referred to as distributionally robust chance constraints. In the second case, we impose conditional value-at-risk (CVaR) constraints to bound the expected constraint violation for all disturbance distributions consistent with the given moment information. Such constraints are referred to as distributionally robust CVaR constraints with second-order moment specifications. We propose a method for designing linear controllers for systems with such constraints that is both computationally tractable and practically meaningful for both finite and infinite horizon problems. We prove in the infinite horizon case that our design procedure produces the globally optimal linear output feedback controller for distributionally robust CVaR and chance constrained problems. 
The proposed methods are illustrated for a wind blade control design case study for which distributionally robust constraints constitute sensible design objectives.", "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way." ] }
1701.06250
2950167508
The 2016 U.S. presidential election has witnessed the major role of Twitter in the year's most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters' decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics.
Online social media have gained huge popularity around the world and become a vital platform for politics. However, the openness and convenience of social media also foster a large amount of fake news and rumors, which can spread wildly @cite_12 . Compared with existing rumor detection works that focus on general social events or emergency events @cite_4 , this paper presents a first analysis of rumors in a political election.
{ "cite_N": [ "@cite_4", "@cite_12" ], "mid": [ "2041614930", "2032897813" ], "abstract": [ "Benefiting from its openness, collaboration and real-time features, Micro blog has become one of the most important news communication media in modern society. However, it is also filled with fake news. Without verification, such information could spread promptly through social network and result in serious consequences. To evaluate news credibility on Micro blog, we propose a hierarchical propagation model. We detect sub-events within a news event to describe its detailed aspects. Thus, for a news event, a three-layer credibility network consisting of event, sub-events and messages can represent it from different scale and reveal vital information for credibility evaluation. After linking these entities with their semantic and social associations, the credibility value of each entity is propagated on this network to achieve the final evaluation result. By formulating this propagation process as a graph optimization problem, we provide a globally optimal solution with an iterative algorithm. Experiments conducted on two real-world datasets show that the proposed model boosts the accuracy by more than 6 and the F-score by more than 16 over a baseline method.", "The problem of identifying rumors is of practical importance especially in online social networks, since information can diffuse more rapidly and widely than the offline counterpart. In this paper, we identify characteristics of rumors by examining the following three aspects of diffusion: temporal, structural, and linguistic. For the temporal characteristics, we propose a new periodic time series model that considers daily and external shock cycles, where the model demonstrates that rumor likely have fluctuations over time. We also identify key structural and linguistic differences in the spread of rumors and non-rumors. Our selected features classify rumors with high precision and recall in the range of 87 to 92 , that is higher than other states of the arts on rumor classification." ] }
1701.06250
2950167508
The 2016 U.S. presidential election has witnessed the major role of Twitter in the year's most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters' decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics.
Most existing rumor detection algorithms follow the traditional supervised machine learning scheme. Features from text content @cite_1 , users, propagation patterns @cite_3 and multimedia content @cite_11 @cite_7 are extracted to train a classifier on labeled training data. Some recent works further improve the classification results with graph-based optimization methods @cite_14 @cite_4 @cite_6 .
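A minimal sketch of this supervised scheme using scikit-learn; the five per-tweet features and the toy labels below are hypothetical stand-ins for the content, user, and propagation features extracted in the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-tweet features: [num_words, num_urls, follower_count,
# account_age_days, num_retweets] -- stand-ins for the content, user and
# propagation features of the cited approaches.
X = np.array([[12, 1, 50, 30, 200],
              [20, 0, 9000, 2000, 5],
              [8, 2, 15, 10, 800],
              [25, 0, 12000, 2500, 3]])
y = np.array([1, 0, 1, 0])                     # 1 = rumor, 0 = non-rumor

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
print(clf.predict_proba(X)[:, 1])              # estimated rumor probabilities
```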
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_1", "@cite_3", "@cite_6", "@cite_11" ], "mid": [ "2398287226", "2041614930", "2531862055", "2084591134", "1546111015", "", "1590495275" ], "abstract": [ "Though Twitter acts as a realtime news source with people acting as sensors and sending event updates from all over the world, rumors spread via Twitter have been noted to cause considerable damage. Given a set of popular Twitter events along with related users and tweets, we study the problem of automatically assessing the credibility of such events. We propose a credibility analysis approach enhanced with event graph-based optimization to solve the problem. First we experiment by performing PageRanklike credibility propagation on a multi-typed network consisting of events, tweets, and users. Further, within each iteration, we enhance the basic trust analysis by updating event credibility scores using regularization on a new graph of events. Our experiments using events extracted from two tweet feed datasets, each with millions of tweets show that our event graph optimization approach outperforms the basic credibility analysis approach. Also, our methods are significantly more accurate (∼86 ) than the decision tree classifier approach (∼72 ).", "Benefiting from its openness, collaboration and real-time features, Micro blog has become one of the most important news communication media in modern society. However, it is also filled with fake news. Without verification, such information could spread promptly through social network and result in serious consequences. To evaluate news credibility on Micro blog, we propose a hierarchical propagation model. We detect sub-events within a news event to describe its detailed aspects. Thus, for a news event, a three-layer credibility network consisting of event, sub-events and messages can represent it from different scale and reveal vital information for credibility evaluation. After linking these entities with their semantic and social associations, the credibility value of each entity is propagated on this network to achieve the final evaluation result. By formulating this propagation process as a graph optimization problem, we provide a globally optimal solution with an iterative algorithm. Experiments conducted on two real-world datasets show that the proposed model boosts the accuracy by more than 6 and the F-score by more than 16 over a baseline method.", "Microblog has been a popular media platform for reporting and propagating news. However, fake news spreading on microblogs would severely jeopardize its public credibility. To identify the truthfulness of news on microblogs, images are very crucial content. In this paper, we explore the key role of image content in the task of automatic news verification on microblogs. Existing approaches to news verification depend on features extracted mainly from the text content of news tweets, while image features for news verification are often ignored. According to our study, however, images are very popular and have a great influence on microblogs news propagation. In addition, fake and real news events have different image distribution patterns. Therefore, we propose several visual and statistical features to characterize these patterns visually and statistically for detecting fake news. Experiments on a real-world multimedia dataset collected from Sina Weibo validate the effectiveness of our proposed image features. The news verification performance of our method outperforms baseline methods. 
To the best of our knowledge, this is the first attempt that systematically explores image features on news verification task.", "We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. On this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to \"trending\" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting (\"re-tweeting\") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results shows that there are measurable differences in the way messages propagate, that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70 to 80 .", "This paper studies the problem of automatic detection of false rumors on Sina Weibo, the popular Chinese microblogging social network. Traditional feature-based approaches extract features from the false rumor message, its author, as well as the statistics of its responses to form a flat feature vector. This ignores the propagation structure of the messages and has not achieved very good results. We propose a graph-kernel based hybrid SVM classifier which captures the high-order propagation patterns in addition to semantic features such as topics and sentiments. The new model achieves a classification accuracy of 91.3 on randomly selected Weibo dataset, significantly higher than state-of-the-art approaches. Moreover, our approach can be applied at the early stage of rumor propagation and is 88 confident in detecting an average false rumor just 24 hours after the initial broadcast.", "", "Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content-analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research." ] }
1701.06250
2950167508
The 2016 U.S. presidential election has witnessed the major role of Twitter in the year's most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters' decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics.
Although machine learning approaches are very effective under some circumstances, they also have drawbacks. The supervised learning process requires a large amount of labeled training data, which is expensive to obtain for the rumor detection problem. These approaches derive features in a "black box", and the classification results are difficult to interpret. In @cite_8 , a lexicon-based method is proposed for detecting rumors in a huge tweet stream. The authors extracted words and phrases such as "rumor", "is it true", and "unconfirmed" for matching rumor tweets. Their lexicon is relatively small, so the detection results tend to have high precision but low recall of rumors.
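The lexicon-based detector just described reduces to phrase matching; here is a sketch using the example phrases from the text (real lexicons would be larger, and the cited method additionally clusters and ranks matched posts):

```python
import re

# Enquiry/verification phrases of the kind the cited lexicon method matches.
LEXICON = ["rumor", "is it true", "unconfirmed", "is this true", "really?"]
PATTERN = re.compile("|".join(re.escape(p) for p in LEXICON), re.IGNORECASE)

def is_candidate_rumor(tweet):
    """High precision, low recall: flag a tweet if any lexicon phrase occurs."""
    return PATTERN.search(tweet) is not None

print(is_candidate_rumor("Is it true that the rally was cancelled?"))  # True
print(is_candidate_rumor("Great speech tonight!"))                     # False
```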
{ "cite_N": [ "@cite_8" ], "mid": [ "1638051351" ], "abstract": [ "Many previous techniques identify trending topics in social media, even topics that are not pre-defined. We present a technique to identify trending rumors, which we define as topics that include disputed factual claims. Putting aside any attempt to assess whether the rumors are true or false, it is valuable to identify trending rumors as early as possible. It is extremely difficult to accurately classify whether every individual post is or is not making a disputed factual claim. We are able to identify trending rumors by recasting the problem as finding entire clusters of posts whose topic is a disputed factual claim. The key insight is that when there is a rumor, even though most posts do not raise questions about it, there may be a few that do. If we can find signature text phrases that are used by a few people to express skepticism about factual claims and are rarely used to express anything else, we can use those as detectors for rumor clusters. Indeed, we have found a few phrases that seem to be used exactly that way, including: \"Is this true?\", \"Really?\", and \"What?\". Relatively few posts related to any particular rumor use any of these enquiry phrases, but lots of rumor diffusion processes have some posts that do and have them quite early in the diffusion. We have developed a technique based on searching for the enquiry phrases, clustering similar posts together, and then collecting related posts that do not contain these simple phrases. We then rank the clusters by their likelihood of really containing a disputed factual claim. The detector, which searches for the very rare but very informative phrases, combined with clustering and a classifier on the clusters, yields surprisingly good performance. On a typical day of Twitter, about a third of the top 50 clusters were judged to be rumors, a high enough precision that human analysts might be willing to sift through them." ] }
1701.06250
2950167508
The 2016 U.S. presidential election has witnessed the major role of Twitter in the year's most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters' decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics.
In this paper, we formulate rumor detection as a text matching task. Several state-of-the-art matching algorithms are utilized for rumor detection. TF-IDF @cite_13 is the most commonly used method for computing document similarity. The BM25 algorithm @cite_0 is also a term-based matching method. Recent research in deep learning for text representation embeds words or documents into a common vector space. Word2Vec @cite_2 and Doc2Vec @cite_5 are two widely used embedding models at the word and paragraph levels, respectively.
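A sketch of the matching formulation with TF-IDF and cosine similarity via scikit-learn; the article and tweet texts, and the idea of thresholding the best match, are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rumor_articles = [
    "Claim: candidate X was endorsed by a foreign government.",
    "Claim: thousands of ballots were destroyed in county Y.",
]
tweets = ["so candidate X really got endorsed by a foreign government?!"]

vec = TfidfVectorizer(stop_words="english")
A = vec.fit_transform(rumor_articles)     # article vectors define the space
T = vec.transform(tweets)                 # project tweets into the same space
sims = cosine_similarity(T, A)
print(sims, sims.argmax(axis=1))  # label a tweet a rumor if its best
                                  # similarity exceeds a chosen threshold
```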
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_13", "@cite_2" ], "mid": [ "2155482025", "2949547296", "2144211451", "2950133940" ], "abstract": [ "The Probabilistic Relevance Framework (PRF) is a formal framework for document retrieval, grounded in work done in the 1970—1980s, which led to the development of one of the most successful text-retrieval algorithms, BM25. In recent years, research in the PRF has yielded new retrieval models capable of taking into account document meta-data (especially structure and link-graph information). Again, this has led to one of the most successful Web-search and corporate-search algorithms, BM25F. This work presents the PRF from a conceptual point of view, describing the probabilistic modelling assumptions behind the framework and the different ranking algorithms that result from its application: the binary independence model, relevance feedback models, BM25 and BM25F. It also discusses the relation between the PRF and other statistical models for IR, and covers some related topics, such as the use of non-textual features, and parameter optimisation for models with free parameters.", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "The exhaustivity of document descriptions and the specificity of index terms are usually regarded as independent. It is suggested that specificity should be interpreted statistically, as a function of term use rather than of term meaning. The effects on retrieval of variations in term specificity are examined, experiments with three test collections showing in particular that frequently‐occurring terms are required for good overall performance. It is argued that terms should be weighted according to collection frequency, so that matches on less frequent, more specific, terms are of greater value than matches on frequent terms. Results for the test collections show that considerable improvements in performance are obtained with this very simple procedure.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. 
An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1701.06276
2581509557
In a nutshell, stay-points are locations where a person has stopped for some amount of time. Previous work depends mainly on stay-point identification methods that use experimentally fine-tuned threshold values. These behave well on their experimental datasets but may exhibit reduced performance on other datasets. In this work, we demonstrate the potential of a geometry-based method for stay-point extraction. This is accomplished by transforming the user's trajectory path into a two-dimensional discrete time series curve, which in turn transforms the stay-points into the local minima of the first derivative of this curve. To demonstrate the soundness of the proposed method, we evaluated it on raw, noisy trajectory data acquired over the course of 28 different days using four different techniques. The results demonstrate, among other things, that given a good trajectory tracking technique, we can correctly identify 86% to 98% of the stay-points.
@cite_8 record cell IDs and use a graph-based clustering algorithm to identify stay-points. The graph is built such that the cell IDs are the vertices, and two cell IDs are adjacent if their time difference is less than @math ; this ensures that successive cell IDs are connected. They then cluster vertices within the same cell ID using edge weight and vertex degree, which are controlled by two arbitrary parameters @math and @math , respectively. Each cluster within a cell represents a stay-point.
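The following is a heavily simplified sketch of the cell-ID idea: timestamped cell observations are linked when consecutive readings are close in time, and a sufficiently long run of linked readings in the same cell becomes a stay-point. The two thresholds stand in for the tuned parameters mentioned above; the cited method's actual graph clustering is more involved.

```python
def stay_points_from_cells(observations, max_gap=300, min_dwell=600):
    """observations: (timestamp_seconds, cell_id) pairs in time order.
    Successive observations are linked when their gap is below max_gap;
    a linked run of the same cell lasting >= min_dwell is a stay-point."""
    stays, start, prev = [], None, None
    for t, cell in observations:
        if prev and cell == prev[1] and t - prev[0] <= max_gap:
            pass                                 # extend the current run
        else:
            if start and prev and prev[0] - start[0] >= min_dwell:
                stays.append((start[1], start[0], prev[0]))
            start = (t, cell)                    # begin a new run
        prev = (t, cell)
    if start and prev and prev[0] - start[0] >= min_dwell:
        stays.append((start[1], start[0], prev[0]))
    return stays                                 # (cell_id, t_enter, t_leave)

obs = [(0, "A"), (200, "A"), (400, "A"), (700, "A"), (900, "B"), (1000, "B")]
print(stay_points_from_cells(obs))  # [('A', 0, 700)]
```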
{ "cite_N": [ "@cite_8" ], "mid": [ "1996448158" ], "abstract": [ "Emerging class of context-aware mobile applications, such as Google Now and Foursquare require continuous location sensing to deliver different location-aware services. Existing research, in finding location at higher abstraction, use GPS and WiFi location interfaces to discover places, which result in high power consumption. These interfaces are also not available on all feature phones that are in majority in developing countries. In this paper, we present a framework PlaceMap that discovers different places and routes, solely using GSM information, i.e., Cell ID. PlaceMap stores and manages all the discovered places and routes, which are used to build spatio-temporal mobility profiles for the users. PlaceMap provides algorithms that can complement GSM-based place discovery with an initial WiFi-based training to increase accuracy. We performed a comprehensive offline evaluation of PlaceMap algorithms on two large real-world diverse datasets, self-collected dataset of 62 participants for 4 weeks in India and MDC dataset of 38 participants for 45 weeks in Switzerland. We found that PlaceMap is able to discover up to 81 of the places correctly as compared to GPS. To corroborate the potential of PlaceMap in real-world, we deployed a life-logging application for a small set of 18 participants and observed similar place discovery accuracy." ] }
1701.06276
2581509557
In a nutshell, stay-points are locations where a person has stopped for some amount of time. Previous work depends mainly on stay-point identification methods that use experimentally fine-tuned threshold values. These behave well on their experimental datasets but may exhibit reduced performance on other datasets. In this work, we demonstrate the potential of a geometry-based method for stay-point extraction. This is accomplished by transforming the user's trajectory path into a two-dimensional discrete time series curve, which in turn transforms the stay-points into the local minima of the first derivative of this curve. To demonstrate the soundness of the proposed method, we evaluated it on raw, noisy trajectory data acquired over the course of 28 different days using four different techniques. The results demonstrate, among other things, that given a good trajectory tracking technique, we can correctly identify 86% to 98% of the stay-points.
@cite_4 propose a gradient-based visit extractor (GVE) algorithm. The algorithm works as follows: points are inserted into a buffer until either the user has moved more than some threshold (computed via the gradient) or the time difference between the first and last points in the buffer exceeds a time threshold; if the buffered time span is long enough, a stay-point is identified. The gradient threshold, which controls the distance the user has moved, depends on two parameters that are experimentally tuned per dataset.
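A sketch of the buffering logic just described, with displacement from the buffer's first point standing in for the gradient computation; all three thresholds are illustrative rather than the per-dataset tuned values of the cited work.

```python
import math

def gve(points, move_threshold=50.0, max_window=900, min_dwell=300):
    """points: (t, x, y) in seconds and metres. Buffer points until the user
    moves more than move_threshold from the buffer start or the buffered
    span exceeds max_window; emit a visit if the span lasted >= min_dwell."""
    visits, buf = [], []
    for p in points:
        if buf:
            t0, x0, y0 = buf[0]
            moved = math.hypot(p[1] - x0, p[2] - y0)
            if moved > move_threshold or p[0] - t0 > max_window:
                if buf[-1][0] - t0 >= min_dwell:
                    visits.append((t0, buf[-1][0], x0, y0))
                buf = []                           # start a fresh buffer
        buf.append(p)
    if buf and buf[-1][0] - buf[0][0] >= min_dwell:
        visits.append((buf[0][0], buf[-1][0], buf[0][1], buf[0][2]))
    return visits                                  # (t_enter, t_leave, x, y)

pts = [(0, 0, 0), (120, 2, 1), (400, 3, 2), (500, 80, 5)]
print(gve(pts))  # [(0, 400, 0, 0)]
```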
{ "cite_N": [ "@cite_4" ], "mid": [ "2199718034" ], "abstract": [ "Harnessing the latent knowledge present in geospatial trajectories allows for the potential to revolutionise our understanding of behaviour. This paper discusses one component of such analysis, namely the extraction of significant locations. Specifically, we: (i) present the Gradient-based Visit Extractor (GVE) algorithm capable of extracting periods of low mobility from geospatial data, while maintaining resilience to noise, and addressing the drawbacks of existing techniques, (ii) provide a comprehensive analysis of the properties of these visits and consequent locations, extracted through clustering, and (iii) demonstrate the applicability of GVE to the problem of visit extraction with respect to representative use-cases." ] }
1701.05948
2949941926
The generalized second price (GSP) auction has served as the core selling mechanism for sponsored search ads for over a decade. However, recent trends expanding the set of allowed ad formats---to include a variety of sizes, decorations, and other distinguishing features---have raised critical problems for GSP-based platforms. Alternatives such as the Vickrey-Clarke-Groves (VCG) auction raise different complications because they fundamentally change the way prices are computed. In this paper we report on our efforts to redesign a search ad selling system from the ground up in this new context, proposing a mechanism that optimizes an entire slate of ads globally and computes prices that achieve properties analogous to those held by GSP in the original, simpler setting of uniform ads. A careful algorithmic coupling of allocation-optimization and pricing-computation allows our auction to operate within the strict timing constraints inherent in real-time ad auctions. We report performance results of the auction in Yahoo's Gemini Search platform.
The problem of rich ads in search is well known, but not as well studied. In one sense, there is little to do --- the elegant Vickrey-Clarke-Groves (VCG) auction reduces any problem related to rich ads to a modeling and optimization problem if one buys into it, and Facebook and Google have both leveraged VCG for this very reason @cite_15 .
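For concreteness, here is a textbook VCG computation for a position auction with separable click-through rates: each winner pays the welfare loss it imposes on the others. This is the standard reduction alluded to above, not Yahoo's production mechanism.

```python
def vcg_position_auction(bids, ctrs):
    """VCG for a position auction with separable click-through rates: the
    slot-s value of bidder i is bids[i] * ctrs[s]. Sorting by bid maximizes
    welfare; each winner pays the welfare loss it imposes on the others."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    n_slots = len(ctrs)
    payments = {}
    for rank, i in enumerate(order[:n_slots]):
        # If i were absent, every bidder ranked below moves up one slot;
        # the payment is the total click value those bidders would gain.
        payments[i] = sum(
            bids[order[r + 1]] * (ctrs[r] - (ctrs[r + 1] if r + 1 < n_slots else 0.0))
            for r in range(rank, min(n_slots, len(order) - 1)))
    return order[:n_slots], payments

winners, pay = vcg_position_auction(bids=[10.0, 6.0, 4.0, 1.0], ctrs=[0.3, 0.2, 0.1])
print(winners, pay)  # [0, 1, 2], payments approx {0: 1.1, 1: 0.5, 2: 0.1}
```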
{ "cite_N": [ "@cite_15" ], "mid": [ "2080122405" ], "abstract": [ "We describe two auction forms for search engine advertising and present two simple theoretical results concerning i) the estimation of click-through rates and ii) how to adjust the auctions for broad match search. We also describe some of the practical issues involved in implementing a VCG auction." ] }
1701.05948
2949941926
The generalized second price (GSP) auction has served as the core selling mechanism for sponsored search ads for over a decade. However, recent trends expanding the set of allowed ad formats---to include a variety of sizes, decorations, and other distinguishing features---have raised critical problems for GSP-based platforms. Alternatives such as the Vickrey-Clarke-Groves (VCG) auction raise different complications because they fundamentally change the way prices are computed. In this paper we report on our efforts to redesign a search ad selling system from the ground up in this new context, proposing a mechanism that optimizes an entire slate of ads globally and computes prices that achieve properties analogous to those held by GSP in the original, simpler setting of uniform ads. A careful algorithmic coupling of allocation-optimization and pricing-computation allows our auction to operate within the strict timing constraints inherent in real-time ad auctions. We report performance results of the auction in Yahoo's Gemini Search platform.
More broadly, a long line of work starting with Varian @cite_8 and Edelman, Ostrovsky and Schwarz @cite_5 studies GSP and attempts to rationalize its use, e.g., by showing the existence of good equilibria or showing that GSP is more robust when click-through rates have error @cite_2 @cite_13 . More recently, we have argued that advertisers do not have quasilinear utilities and that GSP may in fact be the truthful auction @cite_0 @cite_3 .
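For comparison with the VCG sketch above, the rank-by-bid GSP rule discussed in this line of work is a one-liner: the winner of each slot pays the next-highest bid per click (production systems typically rank by bid times a quality score, an assumption not modeled here).

```python
def gsp_prices(bids, n_slots):
    """Rank-by-bid GSP: the winner of slot r pays, per click, the bid of
    the bidder ranked immediately below (0 if nobody is below)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    prices = {i: (bids[order[r + 1]] if r + 1 < len(order) else 0.0)
              for r, i in enumerate(order[:n_slots])}
    return order[:n_slots], prices

winners, pay = gsp_prices(bids=[10.0, 6.0, 4.0, 1.0], n_slots=3)
print(winners, pay)  # [0, 1, 2] {0: 6.0, 1: 4.0, 2: 1.0} per-click prices
```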
{ "cite_N": [ "@cite_8", "@cite_3", "@cite_0", "@cite_2", "@cite_5", "@cite_13" ], "mid": [ "", "2514140583", "2952233827", "2154327930", "1975392791", "" ], "abstract": [ "", "Bidders often want to get as much as they can without violating constraints on what they spend. For example, advertisers seek to maximize the impressions, clicks, sales, or market share generated by their advertising, subject to budget or return-on-investment (ROI) constraints. The quasilinear utility model dramatically fails to capture these preferences, and so we initiate the study of mechanism design in this different context. In single-parameter settings, we show that any monotone allocation can be implemented truthfully. Interestingly, even in unrestricted domains any choice function that optimizes monotone functions of bidders' values can be implemented. For general valuations we show that maximizing the value of the highest-value bidder is the natural analog of welfare maximization. We apply our results to online advertising as a case study. Firstly, the natural analog of welfare maximization directly generalizes the generalized second price (GSP) auction commonly used to sell search ads. Finally, we show that value-maximizing preferences are robust in a practical sense: even though real advertisers' preferences are complex and varied, as long as outcomes are \"sufficiently different,\" any advertiser with a moderate ROI constraint and \"super-quasilinear\" preferences will be behaviorally equivalent to a value maximizer. We empirically establish that for at least 80 of a sample of auctions from the Yahoo Gemini Native ads platform, bidders requiring at least a 100 ROI should behave like value maximizers.", "We exhibit a property of the VCG mechanism that can help explain the surprising rarity with which it is used even in settings with unit demand: a relative lack of robustness to inaccuracies in the choice of its parameters. For a standard position auction environment in which the auctioneer may not know the precise relative values of the positions, we show that under both complete and incomplete information a non-truthful mechanism supports the truthful outcome of the VCG mechanism for a wider range of these values than the VCG mechanism itself. The result for complete information concerns the generalized second-price mechanism and lends additional theoretical support to the use of this mechanism in practice. Particularly interesting from a technical perspective is the case of incomplete information, where a surprising combinatorial equivalence helps us to avoid confrontation with an unwieldy differential equation.", "A mechanism can be simplified by restricting its message space. If the restricted message spaces satisfy a certain \"outcome closure property,\" then the simplification is \"tight\": for every [epsilon][greater-or-equal, slanted]0, any [epsilon]-Nash equilibrium of the simplified mechanism is also an [epsilon]-Nash equilibrium of the unrestricted mechanism. Prominent auction and matching mechanisms are tight simplifications of mechanisms studied in economic theory and often incorporate price-adjustment features that facilitate simplification. The generalized second-price auction used for sponsored-search advertising is a tight simplification of a series of second-price auctions that eliminates the lowest revenue equilibrium outcomes and leaves intact only higher revenue equilibria.", "We investigate the \"generalized second price\" auction (GSP), a new mechanism which is used by search engines to sell online advertising that most Internet users encounter daily. GSP is tailored to its unique environment, and neither the mechanism nor the environment have previously been studied in the mechanism design literature. Although GSP looks similar to the Vickrey-Clarke-Groves (VCG) mechanism, its properties are very different. In particular, unlike the VCG mechanism, GSP generally does not have an equilibrium in dominant strategies, and truth-telling is not an equilibrium of GSP. To analyze the properties of GSP in a dynamic environment, we describe the generalized English auction that corresponds to the GSP and show that it has a unique equilibrium. This is an ex post equilibrium that results in the same payoffs to all players as the dominant strategy equilibrium of VCG.", "" ] }
1701.06178
2953084879
Live virtual machine (VM) migration aims at enabling the dynamic, balanced use of the networking and computing physical resources of virtualized data centers, so as to reduce energy consumption. Here, we analytically characterize, prototype in software, and test an optimal bandwidth manager for the live migration of VMs over a wireless channel. Specifically, we present the optimal tunable-complexity bandwidth manager (TCBM) for the QoS-aware live migration of VMs over a wireless channel from a smartphone to an access point. The goal is the minimization of the migration-induced communication energy under service level agreement (SLA)-induced hard constraints on the total migration time, the downtime, and the overall available bandwidth.
CloneCloud @cite_4 @cite_3 is a system that can automatically transform mobile device applications so that they can run in the cloud; VOLARE @cite_9 is a middleware-based solution that allows context-aware, adaptive cloud service discovery for mobile devices; Cuckoo @cite_20 is a computational offloading framework for mobile devices; Cloudlet @cite_12 is a set of widely dispersed, decentralized Internet infrastructure components whose distinguishing characteristic is to make computing and storage resources available to nearby mobile devices; MAUI @cite_14 is a system that is able to minimize the energy cost of VM migration by means of fine-grained offloading.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_3", "@cite_12", "@cite_20" ], "mid": [ "", "2023380813", "2073017934", "2336127546", "2135099885", "1517460556" ], "abstract": [ "", "Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device.", "With the recent widespread use of smart mobile devices, as well as the increasing availability of fast and reliable wireless Internet connections for mobile devices, there is increased interest in mobile applications where the majority of the processing occurs on the server side. The flexibility, stability and scalability offered by cloud services make them an ideal architecture to use in client applications in a resource limited mobile environment. This is because mobile application usage patterns tend to be uneven, with various usage spikes according to time and location. However, the mobile setting presents a set of new challenges that cloud service discovery methods developed for non-mobile environments cannot address. The requirements a mobile client device will have from a cloud service may change due to changes in the context of the device, which may include hardware resources, environmental variables or user preferences. Binding to a service offering different quality of service levels from the ones required may lead to excess consumption of mobile resources such as battery life, as well as unnecessarily high provision costs. This paper introduces VOLARE, a middleware-based solution that monitors the resources and context of the device, and dynamically adapts cloud service requests accordingly, at discovery time or at runtime. This approach will allow for more resource-efficient and reliable cloud service discovery, as well as significant cost savings at runtime.", "paper describes a research in the area of mobile cloud computing. Cloud computing can be considered as a model that can provide network access to a shared pool of resources, such as storage and computing power, that can be rapidly provisioned and released with minimal management effort. The solutions discussed in this paper focus on different aspects of cloud computing in connection with mobile usage. By combining the different approaches and merging them into a common solution, it might be possible to generate a new solution that covers most of the issues currently experienced. Such a solution might have the chance to finally make cloud computing usable on mobile devices, resulting in new and interesting usage scenarios and offering execution speedups and energy savings to mobile users. Keywords: computing, mobile devices, smart phones", "Mobile computing continuously evolve through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition, natural language processing, etc. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that's well-connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement.", "In this paper, a primary-secondary resource-management controller on Vehicular Networks is designed and tested. We cast the resource-management problem into a suitable constrained stochastic Network Utility Maximization problem and derive the optimal cognitive resource management controller, which dynamically allocates the access time-windows. We provide the optimal steady-state memoryless controllers under hard and soft primary-secondary collision constraints, showing as the hard controller does not present any optimality gap in the average utility with respect to the soft one, while, on the contrary, it is able to make the outage-probability vanishing. Then we generalize the framework integrating the controllers with different data fusion techniques, and test the controller behaviour in a non-stationary application scenario. Finally we provide the optimal steady-state hard controller with memory and compare it with the memoryless one." ] }
1701.05954
2949747406
We consider the problem of learning a policy for a Markov decision process consistent with data captured on the state-action pairs followed by the policy. We assume that the policy belongs to a class of parameterized policies which are defined using features associated with the state-action pairs. The features are known a priori; however, only an unknown subset of them could be relevant. The policy parameters that correspond to an observed target policy are recovered using @math -regularized logistic regression that best fits the observed state-action samples. We establish bounds on the difference between the average reward of the estimated and the original policy (regret) in terms of the generalization error and the ergodic coefficient of the underlying Markov chain. To that end, we combine sample complexity theory and sensitivity analysis of the stationary distribution of Markov chains. Our analysis suggests that to achieve regret within order @math , it suffices to use a training sample size on the order of @math , where @math is the number of features. We demonstrate the effectiveness of our method on a synthetic robot navigation example.
There is substantial work in the literature on learning MDP policies by observing experts; see @cite_17 for a survey. We next discuss papers that are more closely related to our work.
{ "cite_N": [ "@cite_17" ], "mid": [ "1986014385" ], "abstract": [ "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research." ] }
1701.05996
2580363751
ARM processors have dominated the mobile device market in the last decade due to their favorable computing to energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power efficient processing, rather than just high throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on a 64-bit ARM Cortex A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to @math files, @math web pages and @math tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point intensive benchmarks like PageRank, when they do not exploit data parallelism adequately. We also see that the ARM64 server takes 1 3rd the energy, and has an Energy Delay Product (EDP) that is @math lower than the x64 server. These results hold significant promise for data centers hosting ARM64 servers to reduce their operational costs, while offering a competitive performance for Big Data workloads.
@cite_6 uses a low-cost testing system to systematically compare several ARM and x86 devices. It analyzes each system's power efficiency and CPU performance for a web server, a database server, and floating-point computation. The results conclude that ARM is @math more power efficient on a performance-per-energy comparison, although its performance deteriorates as the workload size increases.
{ "cite_N": [ "@cite_6" ], "mid": [ "1968881927" ], "abstract": [ "Servers and clusters are fundamental building blocks of high performance computing systems and the IT infrastructure of many companies and institutions. This paper analyzes the feasibility of building servers based on low power computers through an experimental comparison of server applications running on x86 and ARM computer architectures. The comparison executed on web and database servers includes power usage, CPU load, temperature, request latencies and the number of requests handled by each tested system. Floating point performance and power usage are also evaluated. The use of ARM based systems has shown to be a good choice when power efficiency is needed without losing performance." ] }
1701.05996
2580363751
ARM processors have dominated the mobile device market in the last decade due to their favorable computing to energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power efficient processing, rather than just high throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on a 64-bit ARM Cortex A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to @math files, @math web pages and @math tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point intensive benchmarks like PageRank, when they do not exploit data parallelism adequately. We also see that the ARM64 server takes 1 3rd the energy, and has an Energy Delay Product (EDP) that is @math lower than the x64 server. These results hold significant promise for data centers hosting ARM64 servers to reduce their operational costs, while offering a competitive performance for Big Data workloads.
Query processing (TPC-C and TPC-H on MySQL) and Big Data (K-Means, TeraSort and WordCount) benchmarks have been run on the ARM Cortex-A7 (LITTLE) and ARM Cortex-A15 (big) cores to compare against Intel Xeon servers @cite_16 . They evaluate the execution time, energy usage and total cost of running the workloads on these self-hosted ARM and Xeon nodes. Their results show that ARM takes more time than the Xeon to perform MapReduce computations when they are implemented in Java; however, this time is reduced significantly by a C++ implementation of the same workloads. Our study shows similar limitations with floating-point performance in ARM64 as well.
{ "cite_N": [ "@cite_16" ], "mid": [ "2199558228" ], "abstract": [ "The continuous increase in volume, variety and velocity of Big Data exposes datacenter resource scaling to an energy utilization problem. Traditionally, datacenters employ x86-64 (big) server nodes with power usage of tens to hundreds of Watts. But lately, low-power (small) systems originally developed for mobile devices have seen significant improvements in performance. These improvements could lead to the adoption of such small systems in servers, as announced by major industry players. In this context, we systematically conduct a performance study of Big Data execution on small nodes in comparison with traditional big nodes, and present insights that would be useful for future development. We run Hadoop MapReduce, MySQL and in-memory Shark workloads on clusters of ARM big. LITTLE boards and Intel Xeon server systems. We evaluate execution time, energy usage and total cost of running the workloads on self-hosted ARM and Xeon nodes. Our study shows that there is no one size fits all rule for judging the efficiency of executing Big Data workloads on small and big nodes. But small memory size, low memory and I O bandwidths, and software immaturity concur in canceling the lower-power advantage of ARM servers. We show that I O-intensive MapReduce workloads are more energy-efficient to run on Xeon nodes. In contrast, database query processing is always more energy-efficient on ARM servers, at the cost of slightly lower throughput. With minor software modifications, CPU-intensive MapReduce workloads are almost four times cheaper to execute on ARM servers." ] }
1701.05996
2580363751
ARM processors have dominated the mobile device market in the last decade due to their favorable computing to energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power efficient processing, rather than just high throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on a 64-bit ARM Cortex A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to @math files, @math web pages and @math tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point intensive benchmarks like PageRank, when they do not exploit data parallelism adequately. We also see that the ARM64 server takes 1 3rd the energy, and has an Energy Delay Product (EDP) that is @math lower than the x64 server. These results hold significant promise for data centers hosting ARM64 servers to reduce their operational costs, while offering a competitive performance for Big Data workloads.
Analytical models have also been developed to study the performance and energy efficiency of ARM processors. @cite_0 proposes a model for the energy usage of server workloads on low-power multi-core systems like ARM, and validates it for the ARM Cortex-A9 CPU. It uses insights into the ARM architecture to predict CPU performance analytically. However, its evaluation skews toward floating-point workloads on ARM32, and only a single application is considered as a Big Data workload.
{ "cite_N": [ "@cite_0" ], "mid": [ "2028045612" ], "abstract": [ "There is growing interest to replace traditional servers with low-power multicore systems such as ARM Cortex-A9. However, such systems are typically provisioned for mobile applications that have lower memory and I O requirements than server application. Thus, the impact and extent of the imbalance between application and system resources in exploiting energy efficient execution of server workloads is unclear. This paper proposes a trace-driven analytical model for understanding the energy performance of server workloads on ARM Cortex-A9 multicore systems. Key to our approach is the modeling of the degrees of CPU core, memory and I O resource overlap, and in estimating the number of cores and clock frequency that optimizes energy performance without compromising execution time. Since energy usage is the product of utilized power and execution time, the model first estimates the execution time of a program. CPU time, which accounts for both cores and memory response time, is modeled as an M G 1 queuing system. Workload characterization of high performance computing, web hosting and financial computing applications shows that bursty memory traffic fits a Pareto distribution, and non-bursty memory traffic is exponentially distributed. Our analysis using these server workloads reveals that not all server workloads might benefit from higher number of cores or clock frequencies. Applying our model, we predict the configurations that increase energy efficiency by 10 without turning off cores, and up to one third with shutting down unutilized cores. For memory-bounded programs, we show that the limited memory bandwidth might increase both execution time and energy usage, to the point where energy cost might be higher than on a typical x64 multicore system. Lastly, we show that increasing memory and I O bandwidth can improve both the execution time and the energy usage of server workloads on ARM Cortex-A9 systems." ] }
1701.05996
2580363751
ARM processors have dominated the mobile device market in the last decade due to their favorable computing to energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power efficient processing, rather than just high throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on a 64-bit ARM Cortex A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to @math files, @math web pages and @math tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point intensive benchmarks like PageRank, when they do not exploit data parallelism adequately. We also see that the ARM64 server takes 1 3rd the energy, and has an Energy Delay Product (EDP) that is @math lower than the x64 server. These results hold significant promise for data centers hosting ARM64 servers to reduce their operational costs, while offering a competitive performance for Big Data workloads.
Recently, Laurenzano et al. have studied the ARM64 architecture using the AppliedMicro X-Gene server running the AppliedMicro 883208-X1 CPU @cite_18 . They offer a detailed study of the architecture for different HPC workloads, examining the performance and energy efficiency of the ARM64 platform against Intel's Atom and Xeon architectures. They also offer a model to understand the instruction-level behavior that impacts the platforms' relative performance. Our study of Big Data workloads offers higher-level insights from the perspective of the application and the Hadoop platform, rather than instruction-level tuning.
{ "cite_N": [ "@cite_18" ], "mid": [ "2409483049" ], "abstract": [ "This paper presents the first comprehensive study of the performance, power and energy consumption of the Applied-Micro X-Gene, the first commercially available 64-bit ARMv8 platform, for HPC workloads. Our study includes a detailed comparison of the X-Gene to three other architectural design points common in HPC systems. Across these platforms, we perform careful measurements across 400+ workloads, covering different application domains, parallelization models, floating-point precision models and memory intensities. We find that the X-Gene has an average of 1.2× better energy consumption than an Intel Sandy Bridge, a design commonly found in HPC installations, while the Sandy Bridge is an average of 2.3× faster than X-Gene. Precisely quantifying the causes of performance and energy differences between two platforms is an important but challenging problem that is often addressed via detailed simulation, an approach that has limited ability to scale up to full applications and broad workload mixes. Instead, this paper adopts a statistical framework called Partial Least Squares (PLS) Path Modeling to solve this problem. PLS Path Modeling allows us to capture complex cause-effect relationships and difficult-to-measure performance concepts relating to the effectiveness of architectural units and subsystems in improving application performance using readily available hardware counter measurements. We use PLS Path Modeling to quantify the causes of the performance differences between X-Gene and Sandy Bridge in the HPC domain, finding that the performance of the memory subsystem is the dominant cause of these differences." ] }
1701.05996
2580363751
ARM processors have dominated the mobile device market in the last decade due to their favorable computing to energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power efficient processing, rather than just high throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on a 64-bit ARM Cortex A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to @math files, @math web pages and @math tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point intensive benchmarks like PageRank, when they do not exploit data parallelism adequately. We also see that the ARM64 server takes 1 3rd the energy, and has an Energy Delay Product (EDP) that is @math lower than the x64 server. These results hold significant promise for data centers hosting ARM64 servers to reduce their operational costs, while offering a competitive performance for Big Data workloads.
Many benchmarks for evaluating Big Data applications exist @cite_19 . BigBench @cite_12 is a popular benchmark suite that uses a retail eCommerce application as a case study to evaluate the processing of high-volume and high-velocity datasets in the enterprise. HiBench @cite_20 is another Big Data benchmark, from Intel, that includes both micro-benchmarks and common application benchmarks from web search, NoSQL queries and machine learning. We favor this suite and use it in our evaluation due to the diversity of workloads it targets, which emphasize data volume rather than velocity. There are also benchmarks that specifically target fast data applications, such as those for the Internet of Things, but we defer a study of ARM64 for such Big Data stream processing platforms to future work @cite_17 .
{ "cite_N": [ "@cite_19", "@cite_20", "@cite_12", "@cite_17" ], "mid": [ "1975912085", "2155072926", "2052312648", "2460931878" ], "abstract": [ "Recently, big data has been evolved into a buzzword from academia to industry all over the world. Benchmarks are important tools for evaluating an IT system. However, benchmarking big data systems is much more challenging than ever before. First, big data systems are still in their infant stage and consequently they are not well understood. Second, big data systems are more complicated compared to previous systems such as a single node computing platform. While some researchers started to design benchmarks for big data systems, they do not consider the redundancy between their benchmarks. Moreover, they use artificial input data sets rather than real world data for their benchmarks. It is therefore unclear whether these benchmarks can be used to precisely evaluate the performance of big data systems. In this paper, we first analyze the redundancy among benchmarks from ICTBench, HiBench and typical workloads from real world applications: spatio-temporal data analysis for Shenzhen transportation system. Subsequently, we present an initial idea of a big data benchmark suite for spatio-temporal data. There are three findings in this work: (1) redundancy exists in these pioneering benchmark suites and some of them can be removed safely. (2) The workload behavior of trajectory data analysis applications is dramatically affected by their input data sets. (3) The benchmarks created for academic research cannot represent the cases of real world applications.", "The MapReduce model is becoming prominent for the large-scale data analysis in the cloud. In this paper, we present the benchmarking, evaluation and characterization of Hadoop, an open-source implementation of MapReduce. We first introduce HiBench, a new benchmark suite for Hadoop. It consists of a set of Hadoop programs, including both synthetic micro-benchmarks and real-world Hadoop applications. We then evaluate and characterize the Hadoop framework using HiBench, in terms of speed (i.e., job running time), throughput (i.e., the number of tasks completed per minute), HDFS bandwidth, system resource (e.g., CPU, memory and I O) utilizations, and data access patterns.", "There is a tremendous interest in big data by academia, industry and a large user base. Several commercial and open source providers unleashed a variety of products to support big data storage and processing. As these products mature, there is a need to evaluate and compare the performance of these systems. In this paper, we present BigBench, an end-to-end big data benchmark proposal. The underlying business model of BigBench is a product retailer. The proposal covers a data model and synthetic data generator that addresses the variety, velocity and volume aspects of big data systems containing structured, semi-structured and unstructured data. The structured part of the BigBench data model is adopted from the TPC-DS benchmark, which is enriched with semi-structured and unstructured data components. The semi-structured part captures registered and guest user clicks on the retailer's website. The unstructured data captures product reviews submitted online. The data generator designed for BigBench provides scalable volumes of raw data based on a scale factor. The BigBench workload is designed around a set of queries against the data model. From a business prospective, the queries cover the different categories of big data analytics proposed by McKinsey. From a technical prospective, the queries are designed to span three different dimensions based on data sources, query processing types and analytic techniques. We illustrate the feasibility of BigBench by implementing it on the Teradata Aster Database. The test includes generating and loading a 200 Gigabyte BigBench data set and testing the workload by executing the BigBench queries (written using Teradata Aster SQL-MR) and reporting their response times.", "Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT data streams and applications. Here, we develop a benchmark suite and performance metrics to evaluate DSPS for streaming IoT applications. The benchmark includes 13 common IoT tasks classified across functional categories and forming micro-benchmarks, and two IoT applications for statistical summarization and predictive analytics that leverage various dataflow patterns of DSPS. These are coupled with stream workloads from real IoT observations on smart cities. We validate the benchmark for the popular Apache Storm DSPS, and present the results." ] }
1701.05996
2580363751
ARM processors have dominated the mobile device market in the last decade due to their favorable computing to energy ratio. In this age of Cloud data centers and Big Data analytics, the focus is increasingly on power efficient processing, rather than just high throughput computing. ARM's first commodity server-grade processor is the recent AMD A1100-series processor, based on a 64-bit ARM Cortex A57 architecture. In this paper, we study the performance and energy efficiency of a server based on this ARM64 CPU, relative to a comparable server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads. Specifically, we study these for Intel's HiBench suite of web, query and machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed setup, for data sizes up to @math files, @math web pages and @math tuples. Our results show that the ARM64 server's runtime performance is comparable to the x64 server for integer-based workloads like Sort and Hive queries, and only lags behind for floating-point intensive benchmarks like PageRank, when they do not exploit data parallelism adequately. We also see that the ARM64 server takes 1 3rd the energy, and has an Energy Delay Product (EDP) that is @math lower than the x64 server. These results hold significant promise for data centers hosting ARM64 servers to reduce their operational costs, while offering a competitive performance for Big Data workloads.
BigDataBench compares different Big Data benchmarks, including HiBench @cite_20 and BigBench @cite_12 , and proposes a set of data-intensive applications that is a union of these various existing ones @cite_3 . They use this suite to study specific micro-architectural and cache features of the Intel Xeon E5 processor. Our goal is not to analyze the specific internal architecture of ARM; rather, we compare the relative runtime performance and energy efficiency of the ARM processor against an AMD Opteron-based server, which is what end-users stand to benefit from.
{ "cite_N": [ "@cite_3", "@cite_12", "@cite_20" ], "mid": [ "2150478767", "2052312648", "2155072926" ], "abstract": [ "As architecture, systems, and data management communities pay greater attention to innovative big data systems and architecture, the pressure of benchmarking and evaluating these systems rises. However, the complexity, diversity, frequently changed workloads, and rapid evolution of big data systems raise great challenges in big data benchmarking. Considering the broad use of big data systems, for the sake of fairness, big data benchmarks must include diversity of data and workloads, which is the prerequisite for evaluating big data systems and architecture. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purposes mentioned above.", "There is a tremendous interest in big data by academia, industry and a large user base. Several commercial and open source providers unleashed a variety of products to support big data storage and processing. As these products mature, there is a need to evaluate and compare the performance of these systems. In this paper, we present BigBench, an end-to-end big data benchmark proposal. The underlying business model of BigBench is a product retailer. The proposal covers a data model and synthetic data generator that addresses the variety, velocity and volume aspects of big data systems containing structured, semi-structured and unstructured data. The structured part of the BigBench data model is adopted from the TPC-DS benchmark, which is enriched with semi-structured and unstructured data components. The semi-structured part captures registered and guest user clicks on the retailer's website. The unstructured data captures product reviews submitted online. The data generator designed for BigBench provides scalable volumes of raw data based on a scale factor. The BigBench workload is designed around a set of queries against the data model. From a business prospective, the queries cover the different categories of big data analytics proposed by McKinsey. From a technical prospective, the queries are designed to span three different dimensions based on data sources, query processing types and analytic techniques. We illustrate the feasibility of BigBench by implementing it on the Teradata Aster Database. The test includes generating and loading a 200 Gigabyte BigBench data set and testing the workload by executing the BigBench queries (written using Teradata Aster SQL-MR) and reporting their response times.", "The MapReduce model is becoming prominent for the large-scale data analysis in the cloud. In this paper, we present the benchmarking, evaluation and characterization of Hadoop, an open-source implementation of MapReduce. We first introduce HiBench, a new benchmark suite for Hadoop. It consists of a set of Hadoop programs, including both synthetic micro-benchmarks and real-world Hadoop applications. We then evaluate and characterize the Hadoop framework using HiBench, in terms of speed (i.e., job running time), throughput (i.e., the number of tasks completed per minute), HDFS bandwidth, system resource (e.g., CPU, memory and I O) utilizations, and data access patterns." ] }
1701.06233
2950211925
Many aspects of people's lives are proven to be deeply connected to their jobs. In this paper, we first investigate the distinct characteristics of major occupation categories based on tweets. From multiple social media platforms, we gather several types of user information. From users' LinkedIn webpages, we learn their proficiencies. To overcome the ambiguity of self-reported information, a soft clustering approach is applied to extract occupations from crowd-sourced data. Eight job categories are extracted, including Marketing, Administrator, Start-up, Editor, Software Engineer, Public Relation, Office Clerk, and Designer. Meanwhile, users' posts on Twitter provide cues for understanding their linguistic styles, interests, and personalities. Our results suggest that people of different jobs have unique tendencies toward certain language styles and interests. Our results also clearly reveal distinctive levels of the Big Five traits for different jobs. Finally, a classifier is built to predict job types based on the features extracted from tweets. A high accuracy indicates the strong discriminative power of language features for the job prediction task.
The psychological meaning of words is well studied in computer science and linguistics. Linguistic Inquiry and Word Count (LIWC), a computerized text-analysis method, is introduced in @cite_11 . @cite_12 investigates differences in individual linguistic styles, reporting significant differences across language patterns and demonstrating the effectiveness of LIWC. Based on linguistic features, @cite_18 introduce an approach to recognizing personalities from conversation. The boom of social media has attracted a great deal of research based on this new data source. It has been shown that personalities can be recognized using people's social media network structures @cite_20 , profiles @cite_1 , and the contents of posts @cite_23 @cite_5 . @cite_0 provides a survey on computing personality from social media data. In this paper, we follow the approaches described in @cite_22 to extract linguistic patterns. In their work, the authors propose two approaches to learning people's language styles from social media texts, and report significant differences in language styles across several attributes, including gender, age, and personality.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_1", "@cite_0", "@cite_23", "@cite_5", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "", "", "2532755691", "2089809964", "2113166392", "", "1972820248", "2130815877", "2140910804" ], "abstract": [ "", "", "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.", "Personality is a psychological construct aimed at explaining the wide variety of human behaviors in terms of a few, stable and measurable individual characteristics. In this respect, any technology involving understanding, prediction and synthesis of human behavior is likely to benefit from Personality Computing approaches, i.e. from technologies capable of dealing with human personality. This paper is a survey of such technologies and it aims at providing not only a solid knowledge base about the state-of-the-art, but also a conceptual model underlying the three main problems addressed in the literature, namely Automatic Personality Recognition (inference of the true personality of an individual from behavioral evidence), Automatic Personality Perception (inference of personality others attribute to an individual based on her observable behavior) and Automatic Personality Synthesis (generation of artificial personalities via embodied agents). Furthermore, the article highlights the issues still open in the field and identifies potential application areas.", "Microblogging services such as Twitter have become increasingly popular in recent years. However, little is known about how personality is manifested and perceived in microblogs. In this study, we measured the Big Five personality traits of 142 participants and collected their tweets over a 1-month period. Extraversion, agreeableness, openness, and neuroticism were associated with specific linguistic markers, suggesting that personality manifests in microblogs. Meanwhile, eight observers rated the participants’ personality on the basis of their tweets. Results showed that observers relied on specific linguistic cues when making judgments, and could only judge agreeableness and neuroticism accurately. This study provides new empirical evidence of personality expression in naturalistic settings, and points to the potential of utilizing social media for personality research.", "", "Can language use reflect personality style? Studies examined the reliability, factor structure, and validity of written language using a word-based, computerized text analysis program. Daily diaries from 15 substance abuse inpatients, daily writing assignments from 35 students, and journal abstracts from 40 social psychologists demonstrated good internal consistency for over 36 language dimensions. Analyses of the best 15 language dimensions from essays by 838 students yielded 4 factors that replicated across written samples from another 381 students. Finally, linguistic profiles from writing samples were compared with Thematic Apperception Test coding, self-reports, and behavioral measures from 79 students and with self-reports of a 5-factor measure and health markers from more than 1,200 students. Despite modest effect sizes, the data suggest that linguistic style is an independent and meaningful way of exploring personality.", "In this work, we investigate the relationships between social network structure and personality; we assess the performances of different subsets of structural network features, and in particular those concerned with ego-networks, in predicting the Big-5 personality traits. In addition to traditional survey-based data, this work focuses on social networks derived from real-life data gathered through smartphones. Besides showing that the latter are superior to the former for the task at hand, our results provide a fine-grained analysis of the contribution the various feature sets are able to provide to personality classification, along with an assessment of the relative merits of the various networks exploited.", "We are in the midst of a technological revolution whereby, for the first time, researchers can link daily word use to a broad array of real-world behaviors. This article reviews several computerized text analysis methods and describes how Linguistic Inquiry and Word Count (LIWC) was created and validated. LIWC is a transparent text analysis program that counts words in psychologically meaningful categories. Empirical results using LIWC demonstrate its ability to detect meaning in a wide variety of experimental settings, including to show attentional focus, emotionality, social relationships, thinking styles, and individual differences." ] }
1701.06071
2950267794
We describe the grasping and manipulation strategy that we employed at the autonomous track of the Robotic Grasping and Manipulation Competition at IROS 2016. A salient feature of our architecture is the tight coupling between visual (Asus Xtion) and tactile perception (Robotic Materials), to reduce the uncertainty in sensing and actuation. We demonstrate the importance of tactile sensing and reactive control during the final stages of grasping using a Kinova Robotic arm. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).
Planning for grasping and manipulation tasks has traditionally been studied using two distinct approaches: knowledge-based approaches and analytic approaches. The former are based on empirical studies of human grasping and manipulation @cite_29 , while the latter are based on physical models, that is, the interactions between the hand and the grasped object are modeled in terms of motions and forces using the laws of physics @cite_1 . However, each approach has its own disadvantages. As the mechanical and sensorial mechanisms of the human hand are difficult to reproduce, and it is yet unclear how sensing and actuation interact, knowledge-based approaches are of limited use @cite_35 . It is also not clear how to generalize human-inspired grasps to novel objects.
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_1" ], "mid": [ "1980602022", "", "1510186039" ], "abstract": [ "We present a novel and simple experimental method called physical human interactive guidance to study human-planned grasping. Instead of studying how the human uses his her own biological hand or how a human teleoperates a robot hand in a grasping task, the method involves a human interacting physically with a robot arm and hand, carefully moving and guiding the robot into the grasping pose, while the robot's configuration is recorded. Analysis of the grasps from this simple method has produced two interesting results. First, the grasps produced by this method perform better than grasps generated through a state-of-the-art automated grasp planner. Second, this method when combined with a detailed statistical analysis using a variety of grasp measures (physics-based heuristics considered critical for a good grasp) offered insights into how the human grasping method is similar or different from automated grasping synthesis techniques. Specifically, data from the physical human interactive guidance method showed that the human-planned grasping method provides grasps that are similar to grasps from a state-of-the-art automated grasp planner, but differed in one key aspect. The robot wrists were aligned with the object's principal axes in the human-planned grasps (termed low skewness in this paper), while the automated grasps used arbitrary wrist orientation. Preliminary tests show that grasps with low skewness were significantly more robust than grasps with high skewness (77-93 ). We conclude with a detailed discussion of how the physical human interactive guidance method relates to existing methods to extract the human principles for physical interaction.", "", "A robotic grasping simulator, called Graspit!, is presented as versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presented the robot library. The paper also describes the user interface of Graspit! and present the collision detection and contact determination system. The grasp analysis and visualization method were also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system was provided." ] }
1701.06071
2950267794
We describe the grasping and manipulation strategy that we employed at the autonomous track of the Robotic Grasping and Manipulation Competition at IROS 2016. A salient feature of our architecture is the tight coupling between visual (Asus Xtion) and tactile perception (Robotic Materials), to reduce the uncertainty in sensing and actuation. We demonstrate the importance of tactile sensing and reactive control during the final stages of grasping using a Kinova Robotic arm. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).
Although analytic approaches may allow a robot to reason about how to grasp a given object by itself, the abstractions made to keep the analysis tractable result in models that are often applicable only to simulations or carefully structured laboratory experiments @cite_25 . Due to the limitations of the knowledge-based and analytic approaches, machine learning as a solution to these tasks has been on the rise. Methods range from observing how humans grasp an object and reducing the robot's configuration space to find pre-grasp postures @cite_26 , to learning potential grasp points from 2D images @cite_9 , learning via reinforcement and imitation learning @cite_15 , and learning graspable and non-graspable objects via 2D and 3D features @cite_23 . In our work, we sidestep the problem of grasp generation and hard-code strategies that work well for the competition tasks and for the mechanical and sensorial capabilities of our hand.
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_23", "@cite_15", "@cite_25" ], "mid": [ "2106215490", "2041376653", "2016876803", "2070678636", "2088043683" ], "abstract": [ "In this paper we focus on the concept of low-dimensional posture subspaces for artificial hands. We begin by discussing the applicability of a hand configuration subspace to the problem of automated grasp synthesis; our results show that low-dimensional optimization can be instrumental in deriving effective pre-grasp shapes for a number of complex robotic hands. We then show that the computational advantages of using a reduced dimensionality framework enable it to serve as an interface between the human and automated components of an interactive grasping system. We present an on-line grasp planner that allows a human operator to perform dexterous grasping tasks using an artificial hand. In order to achieve the computational rates required for effective user interaction, grasp planning is performed in a hand posture subspace of highly reduced dimensionality. The system also uses real-time input provided by the operator, further simplifying the search for stable grasps to the point where solutions can be found at interactive rates. We demonstrate our approach on a number of different hand models and target objects, in both real and virtual environments.", "We consider the problem of grasping novel objects, specifically objects that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Furthermore, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires nor tries to build a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained by means of supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers.", "We consider the task of grasping novel objects and cleaning fairly cluttered tables with many novel objects. Recent successful approaches employ machine learning algorithms to identify points on the scene that the robot should grasp. In this paper, we show that the task can be significantly simplified by using segmentation, especially with depth information. A supervised localization method is employed to select graspable segments. We also propose a shape completion and grasp planner method which takes partial 3D information and plans the most stable grasping strategy. Extensive experiments on our robot demonstrate the effectiveness of our approach.", "Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasp's location. We propose a hierarchical controller that reflects the structure of these two sub-problems, and attempts to learn solutions that work for both. A hybrid architecture is employed by the controller to make use of various machine learning methods that can cope with the large amount of uncertainty inherent to the task. The controller's upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, by executing smooth reaching motions and preshaping the hand depending on the object's geometry. The system was evaluated both in simulation and on a real robot.", "Grasp quality metrics which analyze the contact wrench space are commonly used to synthesize and analyze preplanned grasps. Preplanned grasping approaches rely on the robustness of stored solutions. Analyzing the robustness of such solutions for large databases of preplanned grasps is a limiting factor for the applicability of data driven approaches to grasping. In this work, we will focus on the stability of the widely used grasp wrench space epsilon quality metric over a large range of poses in simulation. We examine a large number of grasps from the Columbia Grasp Database for the Barrett hand. We find that in most cases the grasp with the most robust force closure with respect to pose error for a particular object is not the grasp with the highest epsilon quality. We demonstrate that grasps can be reranked by an estimate of the stability of their epsilon quality. We find that the grasps ranked best by this method are successful more often in physical experiments than grasps ranked best by the epsilon quality." ] }
1701.05616
2582203709
Accurately predicting and detecting interstitial lung disease (ILD) patterns given any computed tomography (CT) slice without any pre-processing prerequisites, such as manually delineated regions of interest (ROIs), is a clinically desirable, yet challenging goal. The majority of existing work relies on manually-provided ILD ROIs to extract sampled 2D image patches from CT slices and, from there, performs patch-based ILD categorization. Acquiring manual ROIs is labor-intensive and serves as a bottleneck towards fully-automated CT imaging ILD screening over large-scale populations. Furthermore, despite the considerably high frequency of more than one ILD pattern on a single CT slice, previous works are only designed to detect one ILD pattern per slice or patch. To tackle these two critical challenges, we present multi-label deep convolutional neural networks (CNNs) for detecting ILDs from holistic CT slices (instead of ROIs or sub-images). Conventional single-labeled CNN models can be augmented to cope with the possible presence of multiple ILD pattern labels, via 1) continuous-valued deep regression-based robust norm loss functions or 2) a categorical objective as the sum of element-wise binary logistic losses. Our methods are evaluated and validated using a publicly available database of 658 patient CT scans under five-fold cross-validation, achieving promising performance on detecting four major ILD patterns: Ground Glass, Reticular, Honeycomb, and Emphysema. We also investigate the effectiveness of a CNN activation-based deep-feature encoding scheme using Fisher vector encoding, which treats ILD detection as spatially-unordered deep texture classification.
An early work on computer-aided ILD recognition employs neural networks and expert rules to detect ground glass opacity (GGO) on CT images @cite_32 . Follow-up work addresses GGO detection and segmentation @cite_46 @cite_34 . @cite_42 describe a human-in-the-loop approach where a human annotator delineates the region of interest and anatomical landmarks in the images, followed by classification on image attributes related to variations in intensity, texture, shape descriptors and so on. @cite_3 analyze 3D ILD imaging regions that are combined from multiple candidates detected beforehand on 2D slices. @cite_38 evaluate the diagnostic performance of an artificial neural network. Many types of hand-crafted image features have been adopted for ILD classification, such as filter banks @cite_4 @cite_44 @cite_9 , local binary patterns (LBPs) @cite_52 @cite_47 , morphological operators followed by geometric measures, histograms of oriented gradients @cite_44 , texton-based approaches @cite_7 , and wavelet and contourlet transforms @cite_13 @cite_15 . 2D texture features have also been extended into three dimensions @cite_49 @cite_35 @cite_34 . Typical feature encoding schemes and classifiers include bag of words @cite_0 , support vector machines (SVMs) @cite_52 @cite_14 @cite_16 , random forests @cite_9 , and k-nearest neighbors (kNN) @cite_4 .
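To make the hand-crafted pipeline concrete, the sketch below (Python, assuming only numpy; all names are illustrative) computes a basic 8-neighbour LBP histogram of the kind cited above. In the referenced systems, such per-patch descriptors would then be fed to an SVM, random forest, or kNN classifier.

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour local binary pattern histogram of a 2-D patch.

    Each interior pixel is compared against its 8 neighbours; a neighbour
    whose intensity is >= the centre contributes one bit to an 8-bit code.
    The normalized 256-bin histogram of codes is the texture descriptor.
    """
    p = patch.astype(np.float64)
    c = p[1:-1, 1:-1]                      # centre pixels
    # 8 neighbours, ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        codes += (neighbour >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

# Hypothetical usage: describe a 32x32 CT patch; vectors from many
# labelled patches can then train any off-the-shelf classifier.
patch = np.random.rand(32, 32)
feature = lbp_histogram(patch)
print(feature.shape)  # (256,)
```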
{ "cite_N": [ "@cite_35", "@cite_42", "@cite_3", "@cite_44", "@cite_15", "@cite_38", "@cite_4", "@cite_52", "@cite_49", "@cite_46", "@cite_7", "@cite_32", "@cite_16", "@cite_34", "@cite_14", "@cite_9", "@cite_0", "@cite_47", "@cite_13" ], "mid": [ "", "2043305993", "", "1975020933", "1891629850", "", "", "", "", "1486026951", "", "", "", "2129403502", "", "", "", "", "" ], "abstract": [ "", "It is now recognized in many domains that content-based image retrieval from a database of images cannot be carried out by using completely automated approaches. One such domain is medical radiology for which the clinically useful information in an image typically consists of gray level variations in highly localized regions of the image. Currently, it is not possible to extract these regions by automatic image segmentation techniques. To address this problem, we have implemented a human-in-the-loop (a physician-in-the-loop, more specifically) approach in which the human delineates the pathology bearing regions (PBR) and a set of anatomical landmarks in the image when the image is entered into the database. To the regions thus marked, our approach applies low-level computer vision and image processing algorithms to extract attributes related to the variations in gray scale, texture, shape, etc. In addition, the system records attributes that capture relational information such as the position of a PBR with respect to certain anatomical landmarks. An overall multidimensional index is assigned to each image based on these attribute values.", "", "In this paper, we propose a new classification method for five categories of lung tissues in high-resolution computed tomography (HRCT) images, with feature-based image patch approximation. We design two new feature descriptors for higher feature descriptiveness, namely the rotation-invariant Gabor-local binary patterns (RGLBP) texture descriptor and multi-coordinate histogram of oriented gradients (MCHOG) gradient descriptor. Together with intensity features, each image patch is then labeled based on its feature approximation from reference image patches. And a new patch-adaptive sparse approximation (PASA) method is designed with the following main components: minimum discrepancy criteria for sparse-based classification, patch-specific adaptation for discriminative approximation, and feature-space weighting for distance computation. The patch-wise labelings are then accumulated as probabilistic estimations for region-level classification. The proposed method is evaluated on a publicly available ILD database, showing encouraging performance improvements over the state-of-the-arts.", "Our aim is to optimize wavelet-based feature extraction for differentiating between the classical versus atypical pattern of usual interstitial pneumonia (UIP) in volumetric CT. Our proposal is to act on the bandwidth of steerable wavelets while maintaining their tight frame property. To that end, we designed a family of maximally localized wavelet pyramids in 3-D for a continuously adjustable radial bandwidth [Ω,π], Ω G [π 4, π 2]. The proposed wavelets are coupled with a rotation-covariant directional operator based on the Riesz transform, which provides characterizations of the organization image directions independently from their local orientations. The influence of the wavelet bandwidth on the classification performance was found to be large with area under the receiver operating characteristic curve (AUC) values in [0.784,0.921]. 
This demonstrated the importance of finding the minimum spatial support of the wavelet required to leverage the wealth of morphological tissue properties in the vicinity of the lung boundaries.", "", "", "", "", "Ground Glass Opacity (GGO) is defined as hazy increased attenuation within a lung that is not associated with obscured underlying vessels. Since pure (nonsolid) or mixed (partially solid) GGO at the thin-section CT are more likely to be malignant than those with solid opacity, early detection and treatment of GGO can improve a prognosis of lung cancer. However, due to indistinct boundaries and inter- or intra-observer variation, consistent manual detection and segmentation of GGO have proved to be problematic. In this paper, we propose a novel method for automatic detection and segmentation of GGO from chest CT images. For GGO detection, we develop a classifier by boosting k-NN, whose distance measure is the Euclidean distance between the nonparametric density estimates of two examples. The detected GGO region is then automatically segmented by analyzing the texture likelihood map of the region. We applied our method to clinical chest CT volumes containing 10 GGO nodules. The proposed method detected all of the 10 nodules with only one false positive nodule. We also present the statistical validation of the proposed classifier for GGO detection as well as very promising results for automatic GGO segmentation. The proposed method provides a new powerful tool for automatic detection as well as accurate and reproducible segmentation of GGO.", "", "", "", "Early detection of Ground Glass Nodule (GGN) in lung Computed Tomography (CT) images is important for lung cancer prognosis. Due to its indistinct boundaries, manual detection and segmentation of GGN is labor-intensive and problematic. In this paper, we propose a novel multi-level learning-based framework for automatic detection and segmentation of GGN in lung CT images. Our main contributions are: firstly, a multi-level statistical learning-based approach that seamlessly integrates segmentation and detection to improve the overall accuracy for GGN detection (in a subvolume). The classification is done at two levels, both voxel-level and object-level. The algorithm starts with a three-phase voxel-level classification step, using volumetric features computed per voxel to generate a GGN class-conditional probability map. GGN candidates are then extracted from this probability map by integrating prior knowledge of shape and location, and the GGN object-level classifier is used to determine the occurrence of the GGN. Secondly, an extensive set of volumetric features are used to capture the GGN appearance. Finally, to our best knowledge, the GGN dataset used for experiments is an order of magnitude larger than previous work. The effectiveness of our method is demonstrated on a dataset of 1100 subvolumes (100 containing GGNs) extracted from about 200 subjects.", "", "", "", "", "" ] }
1701.05616
2582203709
Accurately predicting and detecting interstitial lung disease (ILD) patterns given any computed tomography (CT) slice without any pre-processing prerequisites, such as manually delineated regions of interest (ROIs), is a clinically desirable, yet challenging goal. The majority of existing work relies on manually-provided ILD ROIs to extract sampled 2D image patches from CT slices and, from there, performs patch-based ILD categorization. Acquiring manual ROIs is labor-intensive and serves as a bottleneck towards fully-automated CT imaging ILD screening over large-scale populations. Furthermore, despite the considerably high frequency of more than one ILD pattern on a single CT slice, previous works are only designed to detect one ILD pattern per slice or patch. To tackle these two critical challenges, we present multi-label deep convolutional neural networks (CNNs) for detecting ILDs from holistic CT slices (instead of ROIs or sub-images). Conventional single-labeled CNN models can be augmented to cope with the possible presence of multiple ILD pattern labels, via 1) continuous-valued deep regression-based robust norm loss functions or 2) a categorical objective as the sum of element-wise binary logistic losses. Our methods are evaluated and validated using a publicly available database of 658 patient CT scans under five-fold cross-validation, achieving promising performance on detecting four major ILD patterns: Ground Glass, Reticular, Honeycomb, and Emphysema. We also investigate the effectiveness of a CNN activation-based deep-feature encoding scheme using Fisher vector encoding, which treats ILD detection as spatially-unordered deep texture classification.
A preliminary version of this work appears in @cite_40 . In this paper, we propose, extend, and fully evaluate two different multi-label CNN classification architectures to address the co-occurrence of multiple ILD patterns on a single CT image. A robust deep regression loss function for the multi-label setting is also introduced. The improved algorithms are extensively validated on a more complete dataset, using comprehensive evaluation metrics, and by conducting comparative experiments against patch-based ILD classification, which constitutes the majority of previous work. Superior quantitative performance in both detection accuracy and time efficiency is demonstrated.
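As a hedged illustration of the second multi-label variant, the sum of element-wise binary logistic losses, here is a minimal numpy sketch using the numerically stable formulation; the shapes and pattern count are assumptions, not the paper's exact configuration.

```python
import numpy as np

def multi_label_logistic_loss(logits, labels):
    """Sum of element-wise binary logistic losses.

    logits: (batch, n_patterns) raw network outputs
    labels: (batch, n_patterns) binary indicators, one per ILD pattern,
            so several patterns may be active on the same slice.
    Uses the numerically stable form
        max(x, 0) - x * y + log(1 + exp(-|x|)).
    """
    x, y = logits, labels
    per_element = np.maximum(x, 0) - x * y + np.log1p(np.exp(-np.abs(x)))
    return per_element.sum(axis=1).mean()

# Hypothetical example: 2 slices, 4 ILD patterns (ground glass,
# reticular, honeycomb, emphysema); slice 1 shows two patterns at once.
logits = np.array([[2.0, -1.0, 0.5, -3.0],
                   [-2.0, 0.1, -0.5, 1.5]])
labels = np.array([[1.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])
print(multi_label_logistic_loss(logits, labels))
```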
{ "cite_N": [ "@cite_40" ], "mid": [ "2525724907" ], "abstract": [ "Holistically detecting interstitial lung disease (ILD) patterns from CT images is challenging yet clinically important. Unfortunately, most existing solutions rely on manually provided regions of interest, limiting their clinical usefulness. In addition, no work has yet focused on predicting more than one ILD from the same CT slice, despite the frequency of such occurrences. To address these limitations, we propose two variations of multi-label deep convolutional neural networks (CNNs). The first uses a deep CNN to detect the presence of multiple ILDs using a regression-based loss function. Our second variant further improves performance, using spatially invariant Fisher Vector encoding of the CNN feature activations. We test our algorithms on a dataset of 533 patients using five-fold cross-validation, achieving high area-under-curve (AUC) scores of 0.982, 0.972, 0.893 and 0.993 for Ground Glass, Reticular, Honeycomb and Emphysema, respectively. As such, our work represents an important step forward in providing clinically effective ILD detection." ] }
1701.05703
2582695981
This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters.
There have been attempts to analyze handwritten characters. Bayoudh used the Freeman chain code to analyze alphabetical characters @cite_32 . Djioua and Plamondon used a Sigma-lognormal model supported by the kinematic theory @cite_16 ; an interactive system was developed so that a user can easily fit a Sigma-lognormal model to alphabetical characters. Wada extracted the trajectories of alphabetical characters and varied them using a genetic algorithm @cite_20 . Zheng and Doermann adopted a thin plate spline to model an alphabetical character and generated a new character by interpolating between two samples @cite_13 . Handwriting models for robot arms were developed in @cite_3 @cite_27 .
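For intuition on the Sigma-lognormal model, the sketch below evaluates the lognormal speed profile of a single stroke; all parameter values are illustrative assumptions, and a full trace is modelled as a sum of overlapping strokes.

```python
import numpy as np

def lognormal_speed(t, D=1.0, t0=0.0, mu=-1.0, sigma=0.3):
    """Speed profile of one stroke in the Sigma-lognormal model:
    a lognormal impulse response with amplitude D starting at time t0.
    """
    v = np.zeros_like(t)
    valid = t > t0
    dt = t[valid] - t0
    v[valid] = (D / (sigma * np.sqrt(2 * np.pi) * dt)
                * np.exp(-((np.log(dt) - mu) ** 2) / (2 * sigma ** 2)))
    return v

t = np.linspace(0.0, 1.0, 200)
# A two-stroke trace: the second lognormal starts slightly later and
# overlaps the first, as in cursive handwriting.
speed = lognormal_speed(t, D=1.0, t0=0.05) + lognormal_speed(t, D=0.7, t0=0.25)
```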
{ "cite_N": [ "@cite_32", "@cite_3", "@cite_27", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "1551338108", "", "", "2120118130", "2139894056", "2123659765" ], "abstract": [ "This paper is basically concerned with a practical problem: the on-the-fly quick learning of handwritten character recognition systems. More generally, it explores the problem of generating new learning examples, especially from very scarce (2 to 5 per class) original learning data. It presents two different methods. The first one is based on applying distortions on original characters using knowledgeon handwriting properties like speed, curvature etc. The second one consists in generation based on the notion of analogical dissimilaritywhich quantifies the analogical relation \"Ais to Balmost as Cis to D\". We give an algorithm to compute the k-least dissimilar objects D, hence generating knew objects from three examples A, Band C. Finally, we experimentally prove the efficiency of both methods, especially when used in conjunction.", "", "", "The Sigma-Lognormal model of the kinematic theory of rapid human movements, has been implemented in an interactive software tool, allowing the generation of databases of unlimited size from a few online handwriting specimens of letters and words. Online trajectories of a target word produced by a few writers are fitted by the Sigma-Lognormal parameters; using the interactive system. Thereafter, the fiducial pattern of the word is constructed and the writer variability is circumscribed respectively from the mean values and the standard deviations of the extracted parameters. Typical simulation results obtained by randomly fixing the parameters inside these realistic intervals are presented to highlight the ability of the generator to produce a large variety of multi-writer and writer-dependent handwriting patterns as observed in real data. Overall, this software tool provides new insights on the development of huge databases for the training and testing of online handwriting classifiers and recognizers.", "Since it is extremely expensive to collect a large volume of handwriting samples, synthesized data are often used to enlarge the training set. We argue that, in order to generate good handwriting samples, a synthesis algorithm should learn the shape deformation characteristics of handwriting from real samples. In this paper, we present a point matching algorithm to learn the deformation, and apply it to handwriting synthesis. Preliminary experiments show the advantages of our approach.", "In pattern recognition, a large number of diversiform characters is necessary to train test a handwritten character recognition system. In this paper, we show that a handwriting model can be applied to the diversification of characters. The characters diversified by the model can be used as a database of character images for training testing purposes. Wada-Kawato's handwriting model is based on an optimal principle and the feature space of the characters includes sets of via-points extracted from actual handwritten characters. The handwriting model can be used to generate a variety of characters by changing via-point information. In this paper, we propose a method for generating a large variety of characters by changing via-point information based on a genetic algorithm, and show that the accuracy of a handwritten character recognition system that uses the characters generated by the proposed method as the training data, is equivalent to that of a system composed by using natural data." ] }
1701.05703
2582695981
This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters.
* Font blending methods
Methods in this category receive several complete fonts and generate a new font by blending them. Xu generated Chinese calligraphy characters using a weighted blend of strokes in different styles @cite_15 . They decomposed samples into radicals and single strokes based on rules defined by expert knowledge. An improved method @cite_37 considers the spatial relationships of strokes. Choi generated handwritten Hangul characters using a Bayesian network trained with a given font @cite_23 .
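A minimal sketch of the stroke-blending idea, assuming two strokes already resampled into point-to-point correspondence (names and data are hypothetical):

```python
import numpy as np

def blend_strokes(stroke_a, stroke_b, w):
    """Weighted blend of two corresponding strokes.

    stroke_a, stroke_b: (n, 2) arrays of matched skeleton points for the
    same stroke rendered in two styles; w in [0, 1] mixes the styles.
    Real systems first resample both strokes to a common n and align them.
    """
    return (1.0 - w) * stroke_a + w * stroke_b

# Hypothetical example: a 3-point stroke in two styles, blended halfway.
a = np.array([[0.0, 0.0], [5.0, 1.0], [10.0, 0.0]])
b = np.array([[0.0, 0.5], [5.0, 2.5], [10.0, 0.5]])
print(blend_strokes(a, b, 0.5))
```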
{ "cite_N": [ "@cite_15", "@cite_37", "@cite_23" ], "mid": [ "2007121438", "2061384433", "" ], "abstract": [ "Chinese calligraphy is among the finest and most important of all Chinese art forms and an inseparable part of Chinese history. Its delicate aesthetic effects are generally considered to be unique among all calligraphic arts. Its subtle power is integral to traditional Chinese painting. A novel intelligent system uses a constraint-based analogous-reasoning process to automatically generate original Chinese calligraphy that meets visually aesthetic requirements. We propose an intelligent system that can automatically create novel, aesthetically appealing Chinese calligraphy from a few training examples of existing calligraphic styles. To demonstrate the proposed methodology's feasibility, we have implemented a prototype system that automatically generates new Chinese calligraphic art from a small training set.", "An automatic algorithm can generate Chinese calligraphy by quantitatively representing the characteristics of personal handwriting acquired from learning examples.", "" ] }
1701.05703
2582695981
This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters.
Suveeranont and Igarashi addressed the generation of alphabetical characters for typographic fonts @cite_8 . They generated characters by blending predefined characters from miscellaneous complete fonts. This method is based on a vector format, in contrast with the proposed method, which accepts an image format. Campbell and Kautz learned a manifold of standard fonts of alphabetical characters @cite_17 ; each location on the manifold represents a new font. Feng used a wavelet transform to blend two fonts @cite_2 . Tenenbaum and Freeman learned a bilinear content-style model from complete fonts and used it to extrapolate characters in a new style @cite_28 .
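To illustrate the manifold idea with a linear stand-in (the cited work fits a non-linear manifold, so this PCA sketch on assumed random data is only a rough analogue of latent-space interpolation):

```python
import numpy as np

# Hypothetical data: 40 fonts, each represented by one 64x64 glyph image.
rng = np.random.default_rng(0)
glyphs = rng.random((40, 64 * 64))

# Fit a linear "manifold" (PCA via SVD) over the vectorized glyphs.
mean = glyphs.mean(axis=0)
u, s, vt = np.linalg.svd(glyphs - mean, full_matrices=False)
k = 8
latent = u[:, :k] * s[:k]                 # per-font latent coordinates

# A new, unseen font: halfway between font 0 and font 1 in latent space.
z = 0.5 * latent[0] + 0.5 * latent[1]
new_glyph = (z @ vt[:k]) + mean           # back to image space
print(new_glyph.reshape(64, 64).shape)
```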
{ "cite_N": [ "@cite_28", "@cite_2", "@cite_17", "@cite_8" ], "mid": [ "2170653751", "2129771415", "2069728322", "" ], "abstract": [ "Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.", "Based on the cubic B-spline curve, new Chinese fonts are generated by wavelet transforms in this paper. The outlines of Chinese fonts are first transformed into B-spline curves. Then, using wavelet transforms, the control points of each curve are decomposed into hierarchies containing the detailed features of the Chinese fonts. Using the synthesis procedure of wavelet transforms, new fonts can be generated by modifying details at selected hierarchies.", "The design and manipulation of typefaces and fonts is an area requiring substantial expertise; it can take many years of study to become a proficient typographer. At the same time, the use of typefaces is ubiquitous; there are many users who, while not experts, would like to be more involved in tweaking or changing existing fonts without suffering the learning curve of professional typography packages. Given the wealth of fonts that are available today, we would like to exploit the expertise used to produce these fonts, and to enable everyday users to create, explore, and edit fonts. To this end, we build a generative manifold of standard fonts. Every location on the manifold corresponds to a unique and novel typeface, and is obtained by learning a non-linear mapping that intelligently interpolates and extrapolates existing fonts. Using the manifold, we can smoothly interpolate and move between existing fonts. We can also use the manifold as a constraint that makes a variety of new applications possible. For instance, when editing a single character, we can update all the other glyphs in a font simultaneously to keep them compatible with our changes.", "" ] }
1701.05703
2582695981
This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters.
* Character extrapolation methods
The aim of methods in this category is to extrapolate characters not included in a given subset. Attempts at this task on handwritten fonts can be found in @cite_4 @cite_19 @cite_25 @cite_12 @cite_33 @cite_7 @cite_29 @cite_11 . Lin generated characters with components extracted from given characters @cite_4 , using an annotated font in which the positions and sizes of components were labeled. The extraction of the components was performed on electronic devices so that characters could be easily decomposed. Zong and Zhu developed a character generation method using machine learning @cite_19 . They decomposed the given characters into components by analyzing the orientation of strokes; components were assigned to a reference font with a similarity function trained by a semi-supervised algorithm. Wang focused on the spatial relationships of the character components for decomposition and generation @cite_25 . An active contour model was used for decomposition in @cite_12 .
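As a rough sketch of component reuse, assuming components and their target positions are already known from annotation (real systems also scale and deform components):

```python
import numpy as np

def place_component(canvas, component, top_left):
    """Paste a binary component bitmap onto a character canvas.

    A component-reuse pipeline extracts `component` from a written sample
    and re-places it at the position recorded for the target character
    (only translation is shown here for brevity).
    """
    y, x = top_left
    h, w = component.shape
    canvas[y:y + h, x:x + w] = np.maximum(canvas[y:y + h, x:x + w], component)
    return canvas

# Hypothetical example: assemble a character from two labelled components.
canvas = np.zeros((64, 64))
radical = np.ones((30, 20))    # stands in for an extracted left radical
body = np.ones((40, 30))       # stands in for the right-hand component
canvas = place_component(canvas, radical, (10, 5))
canvas = place_component(canvas, body, (12, 30))
```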
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_7", "@cite_29", "@cite_19", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2292856805", "2153635757", "2077168696", "2047782836", "2293091492", "2127005171", "2134238040", "1983636646" ], "abstract": [ "Since a complete Chinese font has typically several thousand or more Chinese characters and symbols, and most of them are much more complicated than English alphabets, it takes a lot of time and efforts for even professional font engineers to create a Chinese font. Although several attempts had been made to synthesize Chinese characters from strokes and components, it is still not easy to synthesize so many Chinese characters at one time. In this paper, we present an easy and fast solution for an ordinary user to create a Chinese font of his or her handwriting style. We adopt the approach: to synthesize Chinese characters using components extracted from the user's handwritings. In the preprocessing phase, we built a Web interface for crowds to label the positions and sizes of components of every Chinese character in the target character set. The standard Kai font was selected as a reference. We also devised an algorithm to find a small subset of Chinese characters having all required components to synthesize other Chinese characters. To create a personal handwriting font, with commonly-used 3,914 traditional Chinese characters, a user only has to handwrite 400 or so Chinese characters on a pad. One character by one character, our system can track every stroke, recognize and extract components from the user's handwritings. Then, every target Chinese character is synthesized from the extracted components, by placing them properly according to their position and size information. The experiment results show that although manually fine-tune is still required for few synthesized Chinese characters, users can create a Chinese font of their personal handwriting styles more easily and quickly.", "Prior knowledge of Chinese calligraphy is modeled in this paper, and the hierarchical relationship of strokes and radicals is represented by a novel five layer framework. Calligraphist’s unique calligraphy skill is analyzed and his particular strokes, radicals and layout patterns provide raw element for the proposed five layers. The criteria of visual aesthetics based on Marr’s vision assumption are built for the proposed algorithm of automatic generation of Chinese character. The Bayesian statistics is introduced to characterize the character generation process as a Bayesian dynamic model, in which, parameters to translate, rotate and scale strokes, radicals are controlled by the state equation, as well as the proposed visual aesthetics is employed by the measurement equation. Experimental results show the automatically generated characters have almost the same visual acceptance compared to calligraphist’s artwork.", "This paper presents a novel algorithmic method for automatically generating personal handwriting styles of Chinese characters through an example-based approach. The method first splits a whole Chinese character into multiple constituent parts, such as strokes, radicals, and frequent character components. The algorithm then analyzes and learns the characteristics of character handwriting styles both defined in the Chinese national font standard and those exhibited in a person's own handwriting records. 
In such an analysis process, we adopt a parametric representation of character shapes and also examine the spatial relationships between multiple constituent components of a character. By imitating shapes of individual character components as well as the spatial relationships between them, the proposed method can automatically generate personalized handwritings following an example-based approach. To explore the quality of our automatic generation algorithm, we compare the computer generated results with the authentic human handwriting samples, which appear satisfying for entertainment or mobile applications as agreed by Chinese subjects in our user study.", "The existing method of contour-based font description is difficult to meet the personalized need for various style font generations because of the large size of Chinese character set. In this paper, we propose a novel glyph description method which treats the Chinese character as a constitution of the stable part called \"structure\" and the mutable part called \"style\". The structures of all characters are clustered by an improved K-Medoids method to guide the following generation of sample set which covers all kinds of style information of the whole character set. The result of cluster procedure indicates that radicals are bottlenecks for the reduction of sample set due to the low repetition rate in all characters. To address this problem, we present the radicals as a set of stroke-to-stroke layout structures, and render them from these substructures available in the sample set. Experiment results shows that the substitution enables us to learn the style information from a small set of sample characters (less than 10 of total amount) and generate the rest with the similar writing style.", "Machine learning techniques have been successfully applied to Chinese character recognition; nonetheless, automatic generation of stylized Chinese handwriting remains a challenge. In this paper, we propose Stroke-Bank, a novel approach to automating personalized Chinese handwriting generation. We use a semi-supervised algorithm to construct a dictionary of component mappings from a small seeding set. Unlike previous work, our approach does not require human supervision in stroke extraction or knowledge of the structure of Chinese characters. This dictionary is used to generate handwriting that preserves stylistic variations, including cursiveness and spatial layout of strokes. We demonstrate the effectiveness of our model by a survey-based evaluation. The results show that our generated characters are nearly indistinguishable from ground truth handwritings.", "with English, and they are not suitable for Chinese character synthesis. In this paper, we propose an unified approach for modeling and synthesizing Chinese characters. Using a three-level hierarchical representation, each character is decomposed into basic components, which forms the stroke database and radical database. In the synthesis process, we use a wavelet-based approach to select proper strokes and radicals, and some aesthetic constraints are defined based on the relationships between components, then genetic algorithm is employed to search for the optimal results which best match the aesthetic constraints. Experimental results demonstrates the effectiveness of our method.", "Creating personal Chinese handwritten font library is a very time-consuming job, with the majority of time spent on users manually writing a large number of Chinese characters. 
To dramatically cut down the time cost, we propose an efficient solution to generate Chinese handwritten fonts by effectively reusing the sample characters that users write. Our solution first builds a Chinese Character Radical Composition Model based on the images of standard printed characters. The use of contour curve based radical clustering approach facilitates the critical task of learning the model. We then use the model to decide a much smaller set of character that users need to write. The same model is also used to guide the automatic segmentation of user's hand input characters and construction of other characters. Our prototype only needs users to input around 20 characters as usual to create their own qualified handwritten fonts.", "In this paper we present novel algorithm for the cursive style Korean handwriting synthesis method based on the shape analysis with minimized input data. Hangul (Korean Character) has the different characteristics in structure, that is, character consists of consonants and vowels like English and also it is a combination character like Chinese character. This research aims at using minimal input character data, that is namely consonant and vowel, to synthesize all possible combination of Hangul character. First, we propose the method how to get an inter-strokes information for making position of the natural Korean character. Second, we propose how to synthesize all the Korean characters using small amount of input data and normalized one from some of representative input data. Finally we add a function that concatenates each consonants and vowels for cursive style. By the experiment, we show that the proposed method is effective for synthesis of Hangul by human evaluation." ] }
1701.05703
2582695981
This paper addresses the automatic generation of a typographic font from a subset of characters. Specifically, we use a subset of a typographic font to extrapolate additional characters. Consequently, we obtain a complete font containing a number of characters sufficient for daily use. The automated generation of Japanese fonts is in high demand because a Japanese font requires over 1,000 characters. Unfortunately, professional typographers create most fonts, resulting in significant financial and time investments for font generation. The proposed method can be a great aid for font creation because designers do not need to create the majority of the characters for a new font. The proposed method uses strokes from given samples for font generation. The strokes, from which we construct characters, are extracted by exploiting a character skeleton dataset. This study makes three main contributions: a novel method of extracting strokes from characters, which is applicable to both standard fonts and their variations; a fully automated approach for constructing characters; and a selection method for sample characters. We demonstrate our proposed method by generating 2,965 characters in 47 fonts. Objective and subjective evaluations verify that the generated characters are similar to handmade characters.
Character component decomposition is a crucial technique for the methods in this category. However, most rely on naive decomposition based on spatial relationships @cite_12 @cite_25 @cite_7 @cite_29 @cite_33 or on special devices @cite_4 @cite_11 . It is difficult to extract components when they are connected or decorated. To the best of our knowledge, the method in @cite_6 is the only one applicable to characters in fonts with decorations: Saito applied a patch transform @cite_30 to samples and generated alphabetical characters in a wide range of fonts @cite_6 . However, the generated results did not meet the criteria for practical use. In this paper, we propose an adaptive active contour model for component extraction. With the proposed method, we can obtain natural character strokes even in decorated characters.
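For reference, a minimal classic active contour (snake) update is sketched below; it stands in for, and is not, the adaptive variant proposed here. The semi-implicit matrix form follows the standard Kass-style formulation, with all parameters illustrative.

```python
import numpy as np

def snake_step(pts, ext_force, alpha=0.1, beta=0.05, gamma=1.0):
    """One semi-implicit update of a closed active contour (snake).

    pts:       (n, 2) contour points
    ext_force: function mapping (n, 2) points to (n, 2) external forces
               (e.g. an edge-map gradient pulling towards stroke borders)
    alpha / beta weight elasticity / bending of the internal energy.
    """
    n = len(pts)
    # Circulant internal-force matrix A = alpha*D2 - beta*D4 (closed curve).
    A = np.zeros((n, n))
    idx = np.arange(n)
    for off, val in [(-2, -beta), (-1, alpha + 4 * beta),
                     (0, -2 * alpha - 6 * beta),
                     (1, alpha + 4 * beta), (2, -beta)]:
        A[idx, (idx + off) % n] = val
    inv = np.linalg.inv(gamma * np.eye(n) - A)
    return inv @ (gamma * pts + ext_force(pts))

# Hypothetical usage: a circle contracting under a toy inward force.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
for _ in range(100):
    pts = snake_step(pts, lambda p: -0.1 * p)
```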
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_7", "@cite_29", "@cite_6", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2141155330", "2292856805", "2153635757", "2077168696", "2047782836", "2600703211", "2127005171", "2134238040", "1983636646" ], "abstract": [ "We introduce the patch transform, where an image is broken into non-overlapping patches, and modifications or constraints are applied in the ldquopatch domainrdquo. A modified image is then reconstructed from the patches, subject to those constraints. When no constraints are given, the reconstruction problem reduces to solving a jigsaw puzzle. Constraints the user may specify include the spatial locations of patches, the size of the output image, or the pool of patches from which an image is reconstructed. We define terms in a Markov network to specify a good image reconstruction from patches: neighboring patches must fit to form a plausible image, and each patch should be used only once. We find an approximate solution to the Markov network using loopy belief propagation, introducing an approximation to handle the combinatorially difficult patch exclusion constraint. The resulting image reconstructions show the original image, modified to respect the userpsilas changes. We apply the patch transform to various image editing tasks and show that the algorithm performs well on real world images.", "Since a complete Chinese font has typically several thousand or more Chinese characters and symbols, and most of them are much more complicated than English alphabets, it takes a lot of time and efforts for even professional font engineers to create a Chinese font. Although several attempts had been made to synthesize Chinese characters from strokes and components, it is still not easy to synthesize so many Chinese characters at one time. In this paper, we present an easy and fast solution for an ordinary user to create a Chinese font of his or her handwriting style. We adopt the approach: to synthesize Chinese characters using components extracted from the user's handwritings. In the preprocessing phase, we built a Web interface for crowds to label the positions and sizes of components of every Chinese character in the target character set. The standard Kai font was selected as a reference. We also devised an algorithm to find a small subset of Chinese characters having all required components to synthesize other Chinese characters. To create a personal handwriting font, with commonly-used 3,914 traditional Chinese characters, a user only has to handwrite 400 or so Chinese characters on a pad. One character by one character, our system can track every stroke, recognize and extract components from the user's handwritings. Then, every target Chinese character is synthesized from the extracted components, by placing them properly according to their position and size information. The experiment results show that although manually fine-tune is still required for few synthesized Chinese characters, users can create a Chinese font of their personal handwriting styles more easily and quickly.", "Prior knowledge of Chinese calligraphy is modeled in this paper, and the hierarchical relationship of strokes and radicals is represented by a novel five layer framework. Calligraphist’s unique calligraphy skill is analyzed and his particular strokes, radicals and layout patterns provide raw element for the proposed five layers. 
The criteria of visual aesthetics based on Marr’s vision assumption are built for the proposed algorithm of automatic generation of Chinese character. The Bayesian statistics is introduced to characterize the character generation process as a Bayesian dynamic model, in which, parameters to translate, rotate and scale strokes, radicals are controlled by the state equation, as well as the proposed visual aesthetics is employed by the measurement equation. Experimental results show the automatically generated characters have almost the same visual acceptance compared to calligraphist’s artwork.", "This paper presents a novel algorithmic method for automatically generating personal handwriting styles of Chinese characters through an example-based approach. The method first splits a whole Chinese character into multiple constituent parts, such as strokes, radicals, and frequent character components. The algorithm then analyzes and learns the characteristics of character handwriting styles both defined in the Chinese national font standard and those exhibited in a person's own handwriting records. In such an analysis process, we adopt a parametric representation of character shapes and also examine the spatial relationships between multiple constituent components of a character. By imitating shapes of individual character components as well as the spatial relationships between them, the proposed method can automatically generate personalized handwritings following an example-based approach. To explore the quality of our automatic generation algorithm, we compare the computer generated results with the authentic human handwriting samples, which appear satisfying for entertainment or mobile applications as agreed by Chinese subjects in our user study.", "The existing method of contour-based font description is difficult to meet the personalized need for various style font generations because of the large size of Chinese character set. In this paper, we propose a novel glyph description method which treats the Chinese character as a constitution of the stable part called \"structure\" and the mutable part called \"style\". The structures of all characters are clustered by an improved K-Medoids method to guide the following generation of sample set which covers all kinds of style information of the whole character set. The result of cluster procedure indicates that radicals are bottlenecks for the reduction of sample set due to the low repetition rate in all characters. To address this problem, we present the radicals as a set of stroke-to-stroke layout structures, and render them from these substructures available in the sample set. Experiment results shows that the substitution enables us to learn the style information from a small set of sample characters (less than 10 of total amount) and generate the rest with the similar writing style.", "", "with English, and they are not suitable for Chinese character synthesis. In this paper, we propose an unified approach for modeling and synthesizing Chinese characters. Using a three-level hierarchical representation, each character is decomposed into basic components, which forms the stroke database and radical database. In the synthesis process, we use a wavelet-based approach to select proper strokes and radicals, and some aesthetic constraints are defined based on the relationships between components, then genetic algorithm is employed to search for the optimal results which best match the aesthetic constraints. 
Experimental results demonstrates the effectiveness of our method.", "Creating personal Chinese handwritten font library is a very time-consuming job, with the majority of time spent on users manually writing a large number of Chinese characters. To dramatically cut down the time cost, we propose an efficient solution to generate Chinese handwritten fonts by effectively reusing the sample characters that users write. Our solution first builds a Chinese Character Radical Composition Model based on the images of standard printed characters. The use of contour curve based radical clustering approach facilitates the critical task of learning the model. We then use the model to decide a much smaller set of character that users need to write. The same model is also used to guide the automatic segmentation of user's hand input characters and construction of other characters. Our prototype only needs users to input around 20 characters as usual to create their own qualified handwritten fonts.", "In this paper we present novel algorithm for the cursive style Korean handwriting synthesis method based on the shape analysis with minimized input data. Hangul (Korean Character) has the different characteristics in structure, that is, character consists of consonants and vowels like English and also it is a combination character like Chinese character. This research aims at using minimal input character data, that is namely consonant and vowel, to synthesize all possible combination of Hangul character. First, we propose the method how to get an inter-strokes information for making position of the natural Korean character. Second, we propose how to synthesize all the Korean characters using small amount of input data and normalized one from some of representative input data. Finally we add a function that concatenates each consonants and vowels for cursive style. By the experiment, we show that the proposed method is effective for synthesis of Hangul by human evaluation." ] }
1701.05818
2950548252
In this work, we present a novel module to perform fusion of heterogeneous data using fully convolutional networks for semantic labeling. We introduce residual correction as a way to learn how to fuse predictions coming out of a dual-stream architecture. In particular, we perform fusion of DSM and IRRG optical data on the ISPRS Vaihingen dataset over an urban area and obtain new state-of-the-art results.
Most works related to deep learning for urban semantic labeling use 3-channel networks designed for RGB (and sometimes IRRG) data, fine-tuned from a model trained on the ImageNet dataset @cite_12 @cite_2 @cite_9 . Dual-stream neural networks for data fusion were introduced in @cite_14 in an unsupervised framework for joint audio-video representation learning, using a dual-stream auto-encoder. The same principles were transposed to supervised learning in @cite_3 for classification of RGB-D data. Data fusion using CNNs for classification of remote sensing images has also been explored in the Data Fusion Contest (DFC) 2015, where CNNs were used for multimodal and multi-scale feature extraction in combination with an SVM classifier @cite_13 . Semantic labeling on the ISPRS Vaihingen dataset was further improved by fusing CNN-based and expert-crafted features with random forests @cite_12 . In the DFC 2016, semantic labeling based on a high-resolution multispectral image and tracklet analysis on a spaceborne video were combined for traffic density and activity analysis @cite_10 .
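A minimal dual-stream late-fusion sketch in PyTorch is given below; channel widths, depths, and the 1x1-convolution fusion are illustrative assumptions, not those of any cited architecture.

```python
import torch
import torch.nn as nn

class DualStreamFusion(nn.Module):
    """Sketch of a dual-stream late-fusion network: one stream for
    3-channel optical data (e.g. IRRG), one for the 1-channel DSM,
    with feature maps concatenated before a per-pixel classifier.
    """
    def __init__(self, n_classes=6):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.optical = stream(3)
        self.dsm = stream(1)
        self.classifier = nn.Conv2d(128, n_classes, 1)  # fuse by 1x1 conv

    def forward(self, optical, dsm):
        fused = torch.cat([self.optical(optical), self.dsm(dsm)], dim=1)
        return self.classifier(fused)

# Hypothetical usage on a 256x256 tile:
net = DualStreamFusion()
out = net(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 6, 256, 256])
```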
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_9", "@cite_3", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "2184188583", "2547812480", "", "1012273433", "2469938794", "", "1909515874" ], "abstract": [ "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "Spaceborne remote sensing videos are becoming indispensable resources, opening up opportunities for new remote sensing applications. To exploit this new type of data, we need sophisticated algorithms for semantic scene interpretation. The main difficulties are: 1) Due to the relatively poor spatial resolution of the video acquired from space, moving objects, like cars, are very difficult to detect, not to mention track; 2) camera movement handicaps scene interpretation. To address these challenges, in this paper we propose a novel framework that fuses multispectral images and space videos for spatiotemporal analysis. Taking a multispectral image and a spaceborne video as input, an innovative deep neural network is proposed to fuse them in order to achieve a fine-resolution spatial scene labeling map. Moreover, a sophisticated approach is proposed to analyze activities and estimate traffic density from 150,000+ tracklets produced by a Kanade-Lucas-Tomasi keypoint tracker. The proposed framework is validated using data provided for the 2016 IEEE GRSS data fusion contest, including a video acquired from the International Space Station and a DEIMOS-2 multispectral image. Both visual and quantitative analysis of the experimental results demonstrates the effectiveness of our approach.", "", "Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. 
We present state-of-the-art results on the RGB-D object dataset and show recognition in challenging RGB-D real-world noisy settings.", "This paper describes a deep learning approach to semantic segmentation of very high resolution (aerial) images. Deep neural architectures hold the promise of end-to-end learning from raw images, making heuristic feature design obsolete. Over the last decade this idea has seen a revival, and in recent years deep convolutional neural networks (CNNs) have emerged as the method of choice for a range of image interpretation tasks like visual recognition and object detection. Still, standard CNNs do not lend themselves to per-pixel semantic segmentation, mainly because one of their fundamental principles is to gradually aggregate information over larger and larger image regions, making it hard to disentangle contributions from different pixels. Very recently two extensions of the CNN framework have made it possible to trace the semantic information back to a precise pixel position: deconvolutional network layers undo the spatial downsampling, and Fully Convolution Networks (FCNs) modify the fully connected classification layers of the network in such a way that the location of individual activations remains explicit. We design a FCN which takes as input intensity and range data and, with the help of aggressive deconvolution and recycling of early network layers, converts them into a pixelwise classification at full resolution. We discuss design choices and intricacies of such a network, and demonstrate that an ensemble of several networks achieves excellent results on challenging data such as the ISPRS semantic labeling benchmark, using only the raw data as input.", "", "Large amounts of available training data and increasing computing power have led to the recent success of deep convolutional neural networks (CNN) on a large number of applications. In this paper, we propose an effective semantic pixel labelling using CNN features, hand-crafted features and Conditional Random Fields (CRFs). Both CNN and hand-crafted features are applied to dense image patches to produce per-pixel class probabilities. The CRF infers a labelling that smooths regions while respecting the edges present in the imagery. The method is applied to the ISPRS 2D semantic labelling challenge dataset with competitive classification accuracy." ] }
1701.05818
2950548252
In this work, we present a novel module to perform fusion of heterogeneous data using fully convolutional networks for semantic labeling. We introduce residual correction as a way to learn how to fuse predictions coming out of a dual-stream architecture. In particular, we perform fusion of DSM and IRRG optical data on the ISPRS Vaihingen dataset over an urban area and obtain new state-of-the-art results.
Finally, residual learning @cite_1 was introduced from the observation that deep networks have trouble learning the identity function. With shortcut (bypass) connections, the network only has to learn a residual added to its input, which is easier to optimize.
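A basic residual block illustrating this idea (a hedged sketch; batch normalization and projection shortcuts from the original paper are omitted for brevity):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """The stacked layers learn only a residual F(x) that is added back
    to the identity shortcut, so representing the identity mapping
    amounts to driving F towards zero.
    """
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))  # shortcut + learned residual
```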
{ "cite_N": [ "@cite_1" ], "mid": [ "2949650786" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1701.05818
2950548252
In this work, we present a novel module that fuses heterogeneous data using fully convolutional networks for semantic labeling. We introduce residual correction as a way to learn how to fuse the predictions coming out of a dual-stream architecture. Specifically, we fuse DSM and IRRG optical data on the ISPRS Vaihingen dataset over an urban area and obtain new state-of-the-art results.
Building on these works, our residual correction is a generic module, fully integrated into the CNN pipeline, that can be added to any multi-stream architecture. Moreover, it leverages recent advances in deep learning by linking residual learning to the signal-processing view of error correction. In particular, we integrate the fusion with recent fully convolutional networks (FCNs) @cite_4 , which perform end-to-end dense semantic labeling.
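As an illustration of what such a fusion module might look like (the layer configuration below is our assumption, not the authors' implementation), two per-pixel prediction streams can be averaged and then corrected by a learned residual:

```python
import torch
import torch.nn as nn

class ResidualCorrectionFusion(nn.Module):
    """Fuse two prediction streams (e.g., optical and DSM) by learning
    a residual correction on top of their naive average."""
    def __init__(self, num_classes):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Conv2d(2 * num_classes, num_classes, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_classes, num_classes, kernel_size=3, padding=1),
        )

    def forward(self, scores_a, scores_b):
        avg = 0.5 * (scores_a + scores_b)                        # naive fusion
        corr = self.correction(torch.cat([scores_a, scores_b], dim=1))
        return avg + corr                                        # fused = average + learned residual
```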
{ "cite_N": [ "@cite_4" ], "mid": [ "2952632681" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image." ] }
1701.05724
2580521434
Logical inference, an integral feature of the Semantic Web, is the process of deriving new triples by applying entailment rules on knowledge bases. The entailment rules are determined by the model-theoretic semantics. Incorporating the context of an RDF triple (e.g., provenance, time, and location) into the inferencing process requires the formal semantics to be capable of describing the context of RDF triples also in the form of triples, or in other words, RDF contextual triples about triples. The formal semantics should also provide the rules that entail new contextual triples about triples. In this paper, we propose the first inferencing mechanism that allows the context of RDF triples, represented in the form of RDF triples about triples, to be first-class citizens in the model-theoretic semantics and in the logical rules. Our inference mechanism is well formalized, with all new concepts captured in the model-theoretic semantics. This formal semantics also allows us to derive a new set of entailment rules that can entail new contextual triples about triples. To demonstrate the feasibility and scalability of the proposed mechanism, we implement a new tool that transforms existing knowledge bases into our representation of RDF triples about triples and, optionally, computes the inferred triples for the proposed rules. We evaluate the computation of the proposed rules at large scale using various real-world knowledge bases such as Bio2RDF NCBI Genes and DBpedia. The results show that the computation of the inferred triples is highly scalable: on average, one billion inferred triples add 5-6 minutes to the overall transformation process. NCBI Genes, with 20 billion triples in total, took only 232 minutes to transform 12 billion triples, and inferring 8 billion triples added 42 minutes to the overall process.
Representing and querying contextual information about triples has received significant attention, and several approaches have been proposed. We can classify these approaches into three categories: triple (reification, singleton property), quadruple (named graph), and quintuple (RDF+ @cite_22 ). However, logical inference with contextual information about triples remains largely underdeveloped, due to the lack of a model-theoretic semantics that would determine the entailment rules. Without such a semantics, one could construct rules on top of the syntax of RDF reification to simulate our proposed rules; however, these syntactic rules are not logically valid, since they are neither derived from nor proven in a model-theoretic semantics. Therefore, we chose the singleton property approach over the alternatives to develop the proposed inferencing mechanism, mainly because it comes with a formal semantics. To the best of our knowledge, our proposal is the first to provide a model-theoretic semantics with entailment rules that enables the entailment of new contextual triples about triples.
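As a rough illustration of the singleton property pattern (the URIs and property names here are hypothetical, not the paper's vocabulary), a statement and its context can both be expressed as ordinary triples, e.g., with rdflib:

```python
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/")

g = Graph()
# A unique "singleton" instance of the generic property carries the statement,
# so further triples about that instance describe the statement's context.
won1 = URIRef("http://example.org/wonAward#1")
g.add((won1, EX.singletonPropertyOf, EX.wonAward))  # link instance to generic property
g.add((EX.BobDylan, won1, EX.GrammyAward))          # the contextualized statement
g.add((won1, EX.inYear, EX.Year1991))               # context: a triple about a triple
print(g.serialize(format="nt"))
```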
{ "cite_N": [ "@cite_22" ], "mid": [ "2134840053" ], "abstract": [ "The Semantic Web is based on accessing and reusing RDF data from many different sources, which one may assign different levels of authority and credibility. Existing Semantic Web query languages, like SPARQL, have targeted the retrieval, combination and reuse of facts, but have so far ignored all aspects of meta knowledge, such as origins, authorship, recency or certainty of data, to name but a few. In this paper, we present an original, generic, formalized and implemented approach for managing many dimensions of meta knowledge, like source, authorship, certainty and others. The approach re-uses existing RDF modeling possibilities in order to represent meta knowledge. Then, it extends SPARQL query processing in such a way that given a SPARQL query for data, one may request meta knowledge without modifying the original query. Thus, our approach achieves highly flexible and automatically coordinated querying for data and meta knowledge, while completely separating the two areas of concern." ] }
1701.05676
2580153890
We present a novel feature matching algorithm that systematically utilizes the geometric properties of features, such as position, scale, and orientation, in addition to the conventional descriptor vectors. In challenging scenes with repetitive patterns or large viewpoint changes, it is hard to find the correct correspondences using feature descriptors alone, since the descriptor distances of the correct matches may not be the smallest among the candidates due to appearance changes. Assuming that the layout of the nearby features does not change much, we propose a bidirectional transfer measure to gauge the geometric consistency of a pair of feature correspondences. The feature matching problem is formulated as a Markov random field (MRF) that uses descriptor distances and relative geometric similarities together. Unmatched features are explicitly modeled in the MRF to minimize their negative impact. For speed and stability, instead of solving the MRF on the entire feature set at once, we start with a small set of confident feature matches, and then progressively search the candidates among nearby features and expand the MRF with them. Experimental comparisons show that the proposed algorithm finds better feature correspondences, i.e. more matches with a higher inlier ratio, in many challenging scenes, at a much lower computational cost than state-of-the-art algorithms.
The proposed work differs from these algorithms in several ways. We directly measure the geometric dissimilarity between correspondences using the position, scale, and orientation provided by local features, instead of the distances between neighboring features and their corresponding features @cite_22 , the length and direction differences of correspondences @cite_15 , or a homography transformation estimated with an additional affine region detector @cite_7 @cite_39 @cite_10 . The proposed algorithm can therefore be used with most available local features, not just a few specific ones, as long as these geometric properties are provided. Also, unlike previous approaches @cite_33 @cite_31 @cite_37 @cite_1 @cite_17 that perform estimation on the entire feature set at once, our algorithm incrementally grows a set of geometrically consistent matches over the whole feature set from an initial set of confident matches. Because of this, it can handle a large number of features and produce more accurate matches at minimal computational cost.
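The following numpy sketch shows one direction of such a geometric consistency check: the similarity transform implied by one correspondence predicts where a nearby correspondence should land (a simplified reading of the idea, not the paper's exact bidirectional transfer measure):

```python
import numpy as np

def transfer_error(f1, f2, g1, g2):
    """One-directional geometric consistency between correspondences
    (f1 -> f2) and (g1 -> g2). Each feature is (x, y, scale, angle)."""
    x1, y1, s1, a1 = f1
    x2, y2, s2, a2 = f2
    gx1, gy1, _, _ = g1
    gx2, gy2, _, _ = g2
    # Similarity transform (rotation, scale, translation) implied by f1 -> f2.
    ds, da = s2 / s1, a2 - a1
    c, s = np.cos(da), np.sin(da)
    dx, dy = gx1 - x1, gy1 - y1
    px = x2 + ds * (c * dx - s * dy)   # predicted position of g1's match
    py = y2 + ds * (s * dx + c * dy)
    # Error between prediction and actual matched position, normalized
    # by scale so it is comparable across feature sizes.
    return np.hypot(px - gx2, py - gy2) / s2
```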
{ "cite_N": [ "@cite_31", "@cite_37", "@cite_22", "@cite_7", "@cite_33", "@cite_1", "@cite_39", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "", "", "2166820607", "2101639445", "2076080556", "", "1587878450", "1744214816", "35665245", "" ], "abstract": [ "", "", "We present an efficient spectral method for finding consistent correspondences between two sets of features. We build the adjacency matrix M of a graph whose nodes represent the potential correspondences and the weights on the links represent pairwise agreements between potential correspondences. Correct assignments are likely to establish links among each other and thus form a strongly connected cluster. Incorrect correspondences establish links with the other correspondences only accidentally, so they are unlikely to belong to strongly connected clusters. We recover the correct assignments based on how strongly they belong to the main cluster of M, by using the principal eigenvector of M and imposing the mapping constraints required by the overall correspondence mapping (one-to-one or one-to-many). The experimental evaluation shows that our method is robust to outliers, accurate in terms of matching rate, while being much faster than existing methods", "We present an efficient method for feature correspondence and object-based image matching, which exploits both photometric similarity and pairwise geometric consistency from local invariant features. We formulate object-based image matching as an unsupervised multi-class clustering problem on a set of candidate feature matches, and propose a novel pairwise dissimilarity measure and a robust linkage model in the framework of hierarchical agglomerative clustering. The algorithm handles significant amount of outliers and deformation as well as multiple clusters, thus enabling simultaneous feature matching and clustering from real-world image pairs with significant clutter and multiple deformable objects. The experimental evaluation on feature correspondence, object recognition, and object-based image matching demonstrates that our method is robust to both outliers and deformation, and applicable to a wide range of image matching problems.", "Graph matching plays a central role in solving correspondence problems in computer vision. Graph matching problems that incorporate pair-wise constraints can be cast as a quadratic assignment problem (QAP). Unfortunately, QAP is NP-hard and many algorithms have been proposed to solve different relaxations. This paper presents factorized graph matching (FGM), a novel framework for interpreting and optimizing graph matching problems. In this work we show that the affinity matrix can be factorized as a Kronecker product of smaller matrices. There are three main benefits of using this factorization in graph matching: (1) There is no need to compute the costly (in space and time) pair-wise affinity matrix; (2) The factorization provides a taxonomy for graph matching and reveals the connection among several methods; (3) Using the factorization we derive a new approximation of the original problem that improves state-of-the-art algorithms in graph matching. Experimental results in synthetic and real databases illustrate the benefits of FGM. The code is available at http: humansensing.cs.cmu.edu fgm.", "", "Graph matching is an essential problem in computer vision and machine learning. In this paper, we introduce a random walk view on the problem and propose a robust graph matching algorithm against outliers and deformation. 
Matching between two graphs is formulated as node selection on an association graph whose nodes represent candidate correspondences between the two graphs. The solution is obtained by simulating random walks with reweighting jumps enforcing the matching constraints on the association graph. Our algorithm achieves noise-robust graph matching by iteratively updating and exploiting the confidences of candidate correspondences. In a practical sense, our work is of particular importance since the real-world matching problem is made difficult by the presence of noise and outliers. Extensive and comparative experiments demonstrate that it outperforms the state-of-the-art graph matching algorithms especially in the presence of outliers and deformation.", "In this paper we present a new approach for establishing correspondences between sparse image features related by an unknown non-rigid mapping and corrupted by clutter and occlusion, such as points extracted from a pair of images containing a human figure in distinct poses. We formulate this matching task as an energy minimization problem by defining a complex objective function of the appearance and the spatial arrangement of the features. Optimization of this energy is an instance of graph matching, which is in general a NP-hard problem. We describe a novel graph matching optimization technique, which we refer to as dual decomposition (DD), and demonstrate on a variety of examples that this method outperforms existing graph matching algorithms. In the majority of our examples DD is able to find the global minimum within a minute. The ability to globally optimize the objective allows us to accurately learn the parameters of our matching model from training examples. We show on several matching tasks that our learned model yields results superior to those of state-of-the-art methods.", "Graph matching is a powerful tool for computer vision and machine learning. In this paper, a novel approach to graph matching is developed based on the sequential Monte Carlo framework. By constructing a sequence of intermediate target distributions, the proposed algorithm sequentially performs a sampling and importance resampling to maximize the graph matching objective. Through the sequential sampling procedure, the algorithm effectively collects potential matches under one-to-one matching constraints to avoid the adverse effect of outliers and deformation. Experimental evaluations on synthetic graphs and real images demonstrate its higher robustness to deformation and outliers.", "" ] }
1701.05308
2963926583
Graphics processing units (GPUs) support dynamic voltage and frequency scaling to balance computational performance and energy consumption. However, a simple and accurate performance estimate for a given GPU kernel under different frequency settings is still lacking for real hardware, even though it is important for deciding the best frequency configuration for energy saving. We present a fine-grained analytical model to estimate the execution time of GPU kernels with both core and memory frequency scaling. Over a 2x range of both core and memory frequencies across 20 GPU kernels, our model achieves accurate results (4.83% error on average) on real hardware. Unlike cycle-level simulators, our model only needs simple micro-benchmarks to extract a set of hardware parameters and kernel performance counters to reach this accuracy, without kernel source analysis.
@cite_15 @cite_1 proposed an analytical model that estimates the degrees of memory parallelism and computation parallelism using offline information about the kernel program. @cite_9 further improved this MWP-CWP model by considering cache effects, SFU characteristics, and instruction throughput. However, these methods ignore the effects of shared memory latency and DRAM latency divergence, which can introduce significant bias for some memory-bound applications. @cite_3 extended these models to better address the different types of memory access by collecting a few simple counters; however, that model averages cache effects over all warps and thus can still miss the memory latency divergence of asymmetric applications.
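As a deliberately simplified caricature of the MWP-CWP idea (the cited models are far more detailed; the roofline-style bound below is our own construction), kernel time can be sketched as whichever of computation or serialized memory traffic dominates:

```python
def toy_kernel_cycles(comp_cycles, mem_requests, mem_latency, mwp):
    """Toy estimate: with `mwp` memory requests in flight per SM,
    memory latency overlaps computation, and the kernel is bound by
    the slower of the two. All inputs are per-SM aggregates."""
    mem_cycles = mem_requests * mem_latency / max(mwp, 1)
    return max(comp_cycles, mem_cycles)
```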
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_1", "@cite_3" ], "mid": [ "2142769604", "2167334577", "", "2170059648" ], "abstract": [ "Tuning code for GPGPU and other emerging many-core platforms is a challenge because few models or tools can precisely pinpoint the root cause of performance bottlenecks. In this paper, we present a performance analysis framework that can help shed light on such bottlenecks for GPGPU applications. Although a handful of GPGPU profiling tools exist, most of the traditional tools, unfortunately, simply provide programmers with a variety of measurements and metrics obtained by running applications, and it is often difficult to map these metrics to understand the root causes of slowdowns, much less decide what next optimization step to take to alleviate the bottleneck. In our approach, we first develop an analytical performance model that can precisely predict performance and aims to provide programmer-interpretable metrics. Then, we apply static and dynamic profiling to instantiate our performance model for a particular input code and show how the model can predict the potential performance benefits. We demonstrate our framework on a suite of micro-benchmarks as well as a variety of computations extracted from real codes.", "GPU architectures are increasingly important in the multi-core era due to their high number of parallel processors. Programming thousands of massively parallel threads is a big challenge for software engineers, but understanding the performance bottlenecks of those parallel programs on GPU architectures to improve application performance is even more difficult. Current approaches rely on programmers to tune their applications by exploiting the design space exhaustively without fully understanding the performance characteristics of their applications. To provide insights into the performance bottlenecks of parallel applications on GPU architectures, we propose a simple analytical model that estimates the execution time of massively parallel programs. The key component of our model is estimating the number of parallel memory requests (we call this the memory warp parallelism) by considering the number of running threads and memory bandwidth. Based on the degree of memory warp parallelism, the model estimates the cost of memory requests, thereby estimating the overall execution time of a program. Comparisons between the outcome of the model and the actual execution time in several GPUs show that the geometric mean of absolute error of our model on micro-benchmarks is 5.4 and on GPU computing applications is 13.3 . All the applications are written in the CUDA programming language.", "", "Emergent heterogeneous systems must be optimized for both power and performance at exascale. Massive parallelism combined with complex memory hierarchies form a barrier to efficient application and architecture design. These challenges are exacerbated with GPUs as parallelism increases orders of magnitude and power consumption can easily double. Models have been proposed to isolate power and performance bottlenecks and identify their root causes. However, no current models combine simplicity, accuracy, and support for emergent GPU architectures (e.g. NVIDIA Fermi). We combine hardware performance counter data with machine learning and advanced analytics to model power-performance efficiency for modern GPU-based systems. 
Our performance counter based approach is simpler than previous approaches and does not require detailed understanding of the underlying architecture. The resulting model is accurate for predicting power (within 2.1%) and performance (within 6.7%) for application kernels on modern GPUs. Our model can identify power-performance bottlenecks and their root causes for various complex computation and memory access patterns (e.g. global, shared, texture). We measure the accuracy of our power and performance models on an NVIDIA Fermi C2075 GPU for more than a dozen CUDA applications. We show our power model is more accurate and robust than the best available GPU power models - multiple linear regression models MLR and MLR+. We demonstrate how to use our models to identify power-performance bottlenecks and suggest optimization strategies for high-performance codes such as GEM, a biomolecular electrostatic analysis application. We verify our power-performance model is accurate on clusters of NVIDIA Fermi M2090s and useful for suggesting optimal runtime configurations on the Keeneland supercomputer at Georgia Tech." ] }
1701.05308
2963926583
Graphics processing units (GPUs) support dynamic voltage and frequency scaling to balance computational performance and energy consumption. However, a simple and accurate performance estimate for a given GPU kernel under different frequency settings is still lacking for real hardware, even though it is important for deciding the best frequency configuration for energy saving. We present a fine-grained analytical model to estimate the execution time of GPU kernels with both core and memory frequency scaling. Over a 2x range of both core and memory frequencies across 20 GPU kernels, our model achieves accurate results (4.83% error on average) on real hardware. Unlike cycle-level simulators, our model only needs simple micro-benchmarks to extract a set of hardware parameters and kernel performance counters to reach this accuracy, without kernel source analysis.
@cite_14 presented the CRISP model, which analyzes performance under varying compute-core frequencies. They pointed out that DVFS on GPUs differs from DVFS on CPUs, since computation operations and memory transactions from different threads overlap most of the time. Based on the performance characteristics observed experimentally under varying frequencies, they classify the different execution stages of a kernel program and compute their durations at each frequency. However, CRISP only handles the case of either scaling down the core frequency or scaling up the memory frequency, and the model would become considerably more complicated if memory frequency scaling were fully included. @cite_33 proposed a GPU power estimation model with both core and memory frequency scaling. They designed carefully crafted microbenchmarks to extract the model parameters of each GPU component under the default frequency setting, and then predicted the power consumption of an application over a wide range of frequency settings.
{ "cite_N": [ "@cite_14", "@cite_33" ], "mid": [ "2233640613", "2794513473" ], "abstract": [ "This paper presents CRISP, the first runtime analytical model of performance in the face of changing frequency in a GPGPU. It shows that prior models not targeted at a GPGPU fail to account for important characteristics of GPGPU execution, including the high degree of overlap between memory access and computation and the frequency of store-related stalls. CRISP provides significantly greater accuracy than prior runtime performance models, being within 4 on average when scaling frequency by up to 7X. Using CRISP to drive a runtime energy efficiency controller yields a 10.7 improvement in energy-delay product, vs 6.2 attainable via the best prior performance model.", "Dynamic Voltage and Frequency Scaling (DVFS) on Graphics Processing Units (GPUs) components is one of the most promising power management strategies, due to its potential for significant power and energy savings. However, there is still a lack of simple and reliable models for the estimation of the GPU power consumption under a set of different voltage and frequency levels. Accordingly, a novel GPU power estimation model with both core and memory frequency scaling is herein proposed. This model combines information from both the GPU architecture and the executing GPU application and also takes into account the non-linear changes in the GPU voltage when the core and memory frequencies are scaled. The model parameters are estimated using a collection of 83 microbenchmarks carefully crafted to stress the main GPU components. Based on the hardware performance events gathered during the execution of GPU applications on a single frequency configuration, the proposed model allows to predict the power consumption of the application over a wide range of frequency configurations, as well as to decompose the contribution of different parts of the GPU pipeline to the overall power consumption. Validated on 3 GPU devices from the most recent NVIDIA microarchitectures (Pascal, Maxwell and Kepler), by using a collection of 26 standard benchmarks, the proposed model is able to achieve accurate results (7 , 6 and 12 mean absolute error) for the target GPUs (Titan Xp, GTX Titan X and Tesla K40c)." ] }
1701.05308
2963926583
Graphics processing units (GPUs) support dynamic voltage and frequency scaling to balance computational performance and energy consumption. However, a simple and accurate performance estimate for a given GPU kernel under different frequency settings is still lacking for real hardware, even though it is important for deciding the best frequency configuration for energy saving. We present a fine-grained analytical model to estimate the execution time of GPU kernels with both core and memory frequency scaling. Over a 2x range of both core and memory frequencies across 20 GPU kernels, our model achieves accurate results (4.83% error on average) on real hardware. Unlike cycle-level simulators, our model only needs simple micro-benchmarks to extract a set of hardware parameters and kernel performance counters to reach this accuracy, without kernel source analysis.
Recent state-of-the-art work demonstrates the advantages of machine learning methods for GPU performance and power modeling. Wu et al. @cite_20 built a performance model based on the different patterns with which kernels scale under varying core and memory frequencies. They first used K-means to cluster the scaling behaviors of 37 kernels, and then trained an artificial neural network (ANN) to map performance counters to these clusters. With a model trained on a large amount of data, the performance of a new kernel under any core and memory frequency setting can be predicted from its predicted scaling pattern. @cite_29 adopted a support vector regression (SVR) algorithm to estimate GPU power consumption under both core and memory frequency scaling.
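A minimal sketch of the SVR-based approach using scikit-learn, with synthetic stand-in data since the actual profiling counters are not reproduced here:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((931, 10))                               # counters + core/memory frequencies
y = 30.0 + 50.0 * X[:, 0] + rng.normal(0.0, 1.0, 931)   # synthetic power target (W)

# Standardize features, then fit an RBF-kernel SVR (hyperparameters illustrative).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print("predicted power (W):", model.predict(X[:3]))
```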
{ "cite_N": [ "@cite_29", "@cite_20" ], "mid": [ "2762077810", "2038666141" ], "abstract": [ "With the increasing installation of Graphics Processing Units (GPUs) in supercomputers and data centers, their huge electricity cost brings new environmental and economic concerns. Although Dynamic Voltage and Frequency Scaling (DVFS) techniques have been successfully applied on traditional CPUs to reserve energy, the impact of GPU DVFS on application performance and power consumption is not yet fully understood, mainly due to the complicated GPU memory system. This paper proposes a fast prediction model based on Support Vector Regression (SVR), which can estimate the average runtime power of a given GPU kernel using a set of profiling parameters under different GPU core and memory frequencies. Our experimental data set includes 931 samples obtained from 19 GPU kernels running on a real GPU platform with the core and memory frequencies ranging between 400MHz and 1000MHz. We evaluate the accuracy of the SVR-based prediction model by ten-fold cross validation. We achieve greater accuracy than prior models, being Mean Square Error (MSE) of 0.797 Watt and Mean Absolute Percentage Error (MAPE) of 3.08 on average. Combined with an existing performance prediction model, we can find the optimal GPU frequency settings that can save an average of 13.2 energy across those GPU kernels with no more than 10 performance penalty compared to applying the default setting.", "Graphics Processing Units (GPUs) have numerous configuration and design options, including core frequency, number of parallel compute units (CUs), and available memory bandwidth. At many stages of the design process, it is important to estimate how application performance and power are impacted by these options. This paper describes a GPU performance and power estimation model that uses machine learning techniques on measurements from real GPU hardware. The model is trained on a collection of applications that are run at numerous different hardware configurations. From the measured performance and power data, the model learns how applications scale as the GPU's configuration is changed. Hardware performance counter values are then gathered when running a new application on a single GPU configuration. These dynamic counter values are fed into a neural network that predicts which scaling curve from the training data best represents this kernel. This scaling curve is then used to estimate the performance and power of the new application at different GPU configurations. Over an 8× range of the number of CUs, a 3.3× range of core frequencies, and a 2.9× range of memory bandwidth, our model's performance and power estimates are accurate to within 15 and 10 of real hardware, respectively. This is comparable to the accuracy of cycle-level simulators. However, after an initial training phase, our model runs as fast as, or faster than the program running natively on real hardware." ] }
1701.05369
2952088488
We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation of Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator, and report the first experimental results with individual dropout rates per weight. Interestingly, this leads to extremely sparse solutions in both fully-connected and convolutional layers. The effect is similar to the automatic relevance determination effect in empirical Bayes, but has a number of advantages. We reduce the number of parameters by up to 280 times on LeNet architectures and by up to 68 times on VGG-like networks, with a negligible decrease in accuracy.
Deep neural networks are prone to overfitting, and regularization is used to address this problem. Several successful regularization techniques have been proposed for DNNs, among them Dropout @cite_41 , DropConnect @cite_11 , the max-norm constraint @cite_41 , and Batch Normalization @cite_42 .
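For concreteness, a minimal numpy sketch of (inverted) Dropout, the first of these techniques:

```python
import numpy as np

def dropout(h, p, rng, train=True):
    """Inverted dropout: zero each activation with probability p during
    training and rescale the survivors by 1/(1-p), so the expected
    activation is unchanged and no rescaling is needed at test time."""
    if not train or p == 0.0:
        return h
    mask = (rng.random(h.shape) >= p) / (1.0 - p)
    return h * mask

rng = np.random.default_rng(0)
print(dropout(np.ones((2, 4)), p=0.5, rng=rng))
```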
{ "cite_N": [ "@cite_41", "@cite_42", "@cite_11" ], "mid": [ "2095705004", "2949117887", "4919037" ], "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models." ] }
1701.05369
2952088488
We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation of Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator, and report the first experimental results with individual dropout rates per weight. Interestingly, this leads to extremely sparse solutions in both fully-connected and convolutional layers. The effect is similar to the automatic relevance determination effect in empirical Bayes, but has a number of advantages. We reduce the number of parameters by up to 280 times on LeNet architectures and by up to 68 times on VGG-like networks, with a negligible decrease in accuracy.
Another way to regularize a deep model is to reduce its number of parameters. One possible approach is to use tensor decompositions @cite_20 @cite_39 . Another is to induce sparsity in the weight matrices. Most recent work on sparse neural networks uses pruning @cite_4 , elastic-net regularization @cite_23 @cite_14 @cite_25 @cite_26 , or composite techniques @cite_22 @cite_43 @cite_32 .
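A minimal sketch of the simplest of these techniques, magnitude-based pruning in the spirit of @cite_4 (the thresholding rule here is our simplification):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out (at least) the given fraction of smallest-magnitude weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]  # k-th smallest magnitude
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

w = np.random.default_rng(0).normal(size=(4, 4))
print(magnitude_prune(w, 0.75))
```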
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_26", "@cite_22", "@cite_32", "@cite_39", "@cite_43", "@cite_23", "@cite_25", "@cite_20" ], "mid": [ "", "2963674932", "", "2119144962", "2952746978", "", "", "2949273893", "", "2952689122" ], "abstract": [ "", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "The success of deep learning in numerous application domains created the de- sire to run and train them on mobile devices. This however, conflicts with their computationally, memory and energy intense nature, leading to a growing interest in compression. Recent work by (2015a) propose a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of soft weight-sharing (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. 
This point of view also exposes the relation between compression and the minimum description length (MDL) principle.", "", "", "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. In the comparison on AlexNet, the method achieves very competitive performance.", "", "Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report the compression factor of the dense weight matrix of a fully-connected layer up to 200000 times leading to the compression factor of the whole network up to 7 times." ] }
1701.05343
2580972649
For argumentation mining, there are several sub-tasks, such as argumentation component type classification and relation classification. Existing research tends to solve these sub-tasks separately, ignoring the close relations between them. In this paper, we present a joint framework that incorporates the logical relations between sub-tasks to improve the performance of argumentation structure generation. We design an objective function that combines the predictions from individual models for each sub-task and solve the problem subject to constraints constructed from background knowledge. We evaluate our proposed model on two public corpora, and the experimental results show that our model significantly outperforms a baseline that uses a separate model for each sub-task. Our model also shows advantages on component-related sub-tasks compared to a state-of-the-art joint model based on the evidence graph.
Previous research on argumentation mining focuses on several sub-tasks, including (1) splitting text into discourse units (DUs) @cite_16 @cite_5 , (2) separating argumentative discourse units (ADUs) from non-argumentative ones @cite_10 @cite_1 , (3) identifying ADU types @cite_4 @cite_17 @cite_8 @cite_11 , and (4) identifying relations between ADUs @cite_14 @cite_6 @cite_15 @cite_13 . We concentrate on the latter two sub-tasks in this part and introduce some existing joint models.
{ "cite_N": [ "@cite_13", "@cite_11", "@cite_4", "@cite_14", "@cite_8", "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "", "2250653239", "2250287365", "", "2251663606", "2251314334", "2250309026", "", "2251661596", "2154976315", "1977155386", "2251478708" ], "abstract": [ "", "Analyzing arguments in user-generated Web discourse has recently gained attention in argumentation mining, an evolving field of NLP. Current approaches, which employ fully-supervised machine learning, are usually domain dependent and suffer from the lack of large and diverse annotated corpora. However, annotating arguments in discourse is costly, error-prone, and highly context-dependent. We asked whether leveraging unlabeled data in a semi-supervised manner can boost the performance of argument component identification and to which extent is the approach independent of domain and register. We propose novel features that exploit clustering of unlabeled data from debate portals based on a word embeddings representation. Using these features, we significantly outperform several baselines in the cross-validation, cross-domain, and cross-register evaluation scenarios.", "The ability to analyze the adequacy of supporting information is necessary for determining the strength of an argument.1 This is especially the case for online user comments, which often consist of arguments lacking proper substantiation and reasoning. Thus, we develop a framework for automatically classifying each proposition as UNVERIFIABLE, VERIFIABLE NONEXPERIENTIAL, or VERIFIABLE EXPERIENTIAL2, where the appropriate type of support is reason, evidence, and optional evidence, respectively3. Once the existing support for propositions are identified, this classification can provide an estimate of how adequately the arguments have been supported. We build a goldstandard dataset of 9,476 sentences and clauses from 1,047 comments submitted to an eRulemaking platform and find that Support Vector Machine (SVM) classifiers trained with n-grams and additional features capturing the verifiability and experientiality exhibit statistically significant improvement over the unigram baseline, achieving a macro-averaged F1 of 68.99 .", "", "This paper presents a study on the role of discourse markers in argumentative discourse. We annotated a German corpus with arguments according to the common claim-premise model of argumentation and performed various statistical analyses regarding the discriminative nature of discourse markers for claims and premises. Our experiments show that particular semantic groups of discourse markers are indicative of either claims or premises and constitute highly predictive features for discriminating between them.", "In this paper we describe an application of language technology to policy formulation, where it can support policy makers assess the acceptance of a yet-unpublished policy before the policy enters public consultation. One of the key concepts is that instead of relying on thematic similarity, we extract arguments expressed in support or opposition of positions that are general statements that are, themselves, consistent with the policy or not. 
The focus of this paper in this overall pipeline, is identifying arguments in text: we present and empirically evaluate the hypothesis that verbal tense and mood are good indicators of arguments that have not been explored in the relevant literature.", "In this paper, we present a novel approach for identifying argumentative discourse structures in persuasive essays. The structure of argumentation consists of several components (i.e. claims and premises) that are connected with argumentative relations. We consider this task in two consecutive steps. First, we identify the components of arguments using multiclass classification. Second, we classify a pair of argument components as either support or non-support for identifying the structure of argumentative discourse. For both tasks, we evaluate several classifiers and propose novel feature sets including structural, lexical, syntactic and contextual features. In our experiments, we obtain a macro F1-score of 0.726 for identifying argument components and 0.722 for argumentative relations.", "", "We introduce a new approach to argumentation mining that we applied to a parallel German English corpus of short texts annotated with argumentation structure. We focus on structure prediction, which we break into a number of subtasks: relation identification, central claim identification, role classification, and function classification. Our new model jointly predicts different aspects of the structure by combining the different subtask predictions in the edge weights of an evidence graph; we then apply a standard MST decoding algorithm. This model not only outperforms two reasonable baselines and two datadriven models of global argument structure for the difficult subtask of relation identification, but also improves the results for central claim identification and function classification and it compares favorably to a complex mstparser pipeline.", "Argumentative discourse contains not only language expressing claims and evidence, but also language used to organize these claims and pieces of evidence. Differentiating between the two may be useful for many applications, such as those that focus on the content (e.g., relation extraction) of arguments and those that focus on the structure of arguments (e.g., automated essay scoring). We propose an automated approach to detecting high-level organizational elements in argumentative discourse that combines a rule-based system and a probabilistic sequence model in a principled manner. We present quantitative results on a dataset of human-annotated persuasive essays, and qualitative analyses of performance on essays and on political debates.", "This paper provides the results of experiments on the detection of arguments in texts among which are legal texts. The detection is seen as a classification problem. A classifier is trained on a set of annotated arguments. Different feature sets are evaluated involving lexical, syntactic, semantic and discourse properties of the texts. The experiments are a first step in the context of automatically classifying arguments in legal texts according to their rhetorical type and their visualization for convenient access and search.", "Argument mining studies in natural language text often use lexical (e.g. n-grams) and syntactic (e.g. grammatical production rules) features with all possible values. 
In prior work on a corpus of academic essays, we demonstrated that such large and sparse feature spaces can cause difficulty for feature selection and proposed a method to design a more compact feature space. The proposed feature design is based on post-processing a topic model to extract argument and domain words. In this paper we investigate the generality of this approach, by applying our methodology to a new corpus of persuasive essays. Our experiments show that replacing n-grams and syntactic rules with features and constraints using extracted argument and domain words significantly improves argument mining performance for persuasive essays." ] }
1701.05159
2580872874
In this work we present a novel recurrent neural network architecture designed to model systems whose dynamics exhibit multiple characteristic timescales. The proposed network is composed of several recurrent groups of neurons that are trained to adapt separately to each timescale, in order to improve the system identification process. We test our framework on time series prediction tasks and show some promising, preliminary results achieved on synthetic data. To evaluate the capabilities of our network, we compare its performance with several state-of-the-art recurrent architectures.
An initial attempt to model multiple dynamics and timescales was based on the idea that temporal relationships are structured hierarchically, and hence the RNN should be organized accordingly @cite_13 . The resulting architecture improved the modeling of slowly changing contexts. An analogous hierarchical organization has been implemented by stacking multiple recurrent layers @cite_16 . In the same spirit, a more complex stacked architecture called the Gated Feedback Recurrent Neural Network was proposed in Ref. @cite_19 . In this architecture, the recurrent layers are connected by gated feedback connections that allow them to operate at different timescales. In the network unfolded in time, the states of consecutive time steps are fully connected, and the strength of the connections is trained by gradient descent. The main shortcoming of these layered architectures is the large number of parameters that must be learned for the network to adapt to the right timescales. This results in long training times and a risk of overfitting the training data.
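As a schematic illustration of recurrent groups operating at different timescales (a toy construction of ours, not the exact architecture of @cite_13 , @cite_16 , or @cite_19 ), a slow group can be updated only every k steps:

```python
import torch
import torch.nn as nn

class TwoTimescaleRNN(nn.Module):
    """Sketch of a hierarchical RNN: a fast group updates every step,
    a slow group only every `k` steps."""
    def __init__(self, in_dim, hid, k=4):
        super().__init__()
        self.k = k
        self.fast = nn.GRUCell(in_dim + hid, hid)  # sees input and slow context
        self.slow = nn.GRUCell(hid, hid)           # summarizes the fast group

    def forward(self, x):  # x: (seq_len, batch, in_dim)
        T, B, _ = x.shape
        hf = x.new_zeros(B, self.fast.hidden_size)
        hs = x.new_zeros(B, self.slow.hidden_size)
        outs = []
        for t in range(T):
            hf = self.fast(torch.cat([x[t], hs], dim=1), hf)
            if (t + 1) % self.k == 0:  # slow group: longer timescale
                hs = self.slow(hf, hs)
            outs.append(hf)
        return torch.stack(outs)
```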
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_13" ], "mid": [ "2953061907", "1810943226", "2099257174" ], "abstract": [ "In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory and gated recurrent units, on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units, revealed that in both tasks, the GF-RNN outperforms the conventional approaches to build deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones which are not usually present in a stacked RNN) by learning to gate these interactions.", "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.", "We have already shown that extracting long-term dependencies from sequential data is difficult, both for determimstic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs." ] }
1701.04739
2571710472
Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target's ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.
In an analysis of a neural network trained for image processing tasks, @cite_7 showed that an adversary can apply a perturbation to an image that is imperceptible to humans yet changes the network's prediction. @cite_30 presented a fast method for generating adversarial perturbations to fool an image classifier, and @cite_19 further explored the automated generation of such perturbations. Starting from a well-formed seed input, a mutational fuzzer iteratively manipulates the seed to maximize path coverage in the target program; this technique can isolate the particular inputs that drive the program into states of interest to an attacker.
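The fast method of @cite_30 is the fast gradient sign method (FGSM); a minimal PyTorch sketch:

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, eps):
    """One-step FGSM: move each input pixel by +/- eps in the direction
    that increases the loss, then clamp back to the valid image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```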
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_7" ], "mid": [ "1945616565", "", "1673923490" ], "abstract": [ "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input." ] }
1701.04739
2571710472
Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency. However, by cleverly corrupting a subset of data used as input to a target's ML algorithms, an adversary can perturb outcomes and compromise the effectiveness of ML technology. While prior work in the field of adversarial machine learning has studied the impact of input manipulation on correct ML algorithms, we consider the exploitation of bugs in ML implementations. In this paper, we characterize the attack surface of ML programs, and we show that malicious inputs exploiting implementation bugs enable strictly more powerful attacks than the classic adversarial machine learning techniques. We propose a semi-automated technique, called steered fuzzing, for exploring this attack surface and for discovering exploitable bugs in machine learning programs, in order to demonstrate the magnitude of this threat. As a result of our work, we responsibly disclosed five vulnerabilities, established three new CVE-IDs, and illuminated a common insecure practice across many machine learning systems. Finally, we outline several research directions for further understanding and mitigating this threat.
@cite_28 discuss the process and importance of ``casting out demons,'' i.e., sanitizing ML training datasets for anomaly detection (AD) sensors. AD systems inherently receive malicious input and anomalous events that may drastically impact the system's tuning and instrumentation. Accounting for data that may negatively impact the accuracy of the system's classifier can enhance its overall robustness.
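A minimal sketch of the micro-model voting idea from @cite_28, substituting scikit-learn's IsolationForest for the anomaly-detection sensors used in the cited work; the slice count and voting threshold are illustrative parameters, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def sanitize_training_data(X: np.ndarray, n_slices: int = 10,
                           vote_threshold: float = 0.5) -> np.ndarray:
    """Micro-model voting sanitization: train one detector per slice of the
    training data, then drop points flagged as anomalous by a majority."""
    slices = np.array_split(X, n_slices)                # small "micro-model" slices
    models = [IsolationForest(random_state=0).fit(s) for s in slices]
    votes = np.stack([m.predict(X) for m in models])    # -1 = anomalous, +1 = normal
    frac_anomalous = (votes == -1).mean(axis=0)
    return X[frac_anomalous < vote_threshold]           # keep points most models trust
```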
{ "cite_N": [ "@cite_28" ], "mid": [ "2103154003" ], "abstract": [ "The efficacy of anomaly detection (AD) sensors depends heavily on the quality of the data used to train them. Artificial or contrived training data may not provide a realistic view of the deployment environment. Most realistic data sets are dirty; that is, they contain a number of attacks or anomalous events. The size of these high-quality training data sets makes manual removal or labeling of attack data infeasible. As a result, sensors trained on this data can miss attacks and their variations. We propose extending the training phase of AD sensors (in a manner agnostic to the underlying AD algorithm) to include a sanitization phase. This phase generates multiple models conditioned on small slices of the training data. We use these \"micro- models\" to produce provisional labels for each training input, and we combine the micro-models in a voting scheme to determine which parts of the training data may represent attacks. Our results suggest that this phase automatically and significantly improves the quality of unlabeled training data by making it as \"attack-free\" and \"regular\" as possible in the absence of absolute ground truth. We also show how a collaborative approach that combines models from different networks or domains can further refine the sanitization process to thwart targeted training or mimicry attacks against a single site." ] }
1701.04743
2949854475
Finding the camera pose is an important step in many egocentric video applications. It has been widely reported that state-of-the-art SLAM algorithms fail on egocentric videos. In this paper, we propose a robust method for camera pose estimation, designed specifically for egocentric videos. In an egocentric video, the camera views the same scene point multiple times as the wearer's head sweeps back and forth. We use this specific motion profile to perform short loop closures aligned with the wearer's footsteps. For egocentric videos, depth estimation is usually noisy. In an important departure, we use 2D computations for rotation averaging which do not rely upon depth estimates. These two modifications result in a much more stable algorithm, as is evident from our experiments on various egocentric video datasets for different egocentric applications. The proposed algorithm resolves a long-standing problem in egocentric vision and unlocks new usage scenarios for future applications.
Dense methods use the entire image rather than just a few selected features @cite_15. The camera poses are estimated as the set of parameters that minimizes the image difference over all pixels. To increase the accuracy of estimation, semi-dense methods are usually adopted, which perform photometric error minimization only in regions of sufficient image gradient @cite_14 @cite_24.
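The following is a minimal NumPy sketch of the photometric objective these methods minimize, assuming a known reference depth map, camera intrinsics `K`, and a candidate pose `(R, t)`. Real systems use inverse-depth parameterizations, bilinear sampling, and robust weighting rather than the nearest-neighbour lookup used here.

```python
import numpy as np

def photometric_residuals(I_ref, I_cur, depth_ref, R, t, K, grad_thresh=15.0):
    """Residuals r_i = I_ref(p_i) - I_cur(w(p_i)) for direct image alignment.

    Semi-dense variants restrict the pixel set to regions of sufficient
    gradient (the mask below); dense variants use every pixel.
    """
    gy, gx = np.gradient(I_ref.astype(np.float64))
    mask = np.hypot(gx, gy) > grad_thresh            # semi-dense pixel selection
    v, u = np.nonzero(mask)

    z = depth_ref[v, u]                              # reference depth per pixel
    pts = np.linalg.inv(K) @ np.vstack([u * z, v * z, z])

    pts_cur = R @ pts + t[:, None]                   # rigid motion into current frame
    proj = K @ pts_cur
    u2 = np.round(proj[0] / proj[2]).astype(int)     # nearest-neighbour sampling
    v2 = np.round(proj[1] / proj[2]).astype(int)

    ok = (u2 >= 0) & (u2 < I_cur.shape[1]) & (v2 >= 0) & (v2 < I_cur.shape[0])
    return I_ref[v[ok], u[ok]].astype(np.float64) - I_cur[v2[ok], u2[ok]]
```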
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_14" ], "mid": [ "2140599684", "2108134361", "612478963" ], "abstract": [ "We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking - which does not depend on visual features - while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is of large practical value for robotics and augmented reality applications.", "DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU." ] }
1701.04743
2949854475
Finding the camera pose is an important step in many egocentric video applications. It has been widely reported that state-of-the-art SLAM algorithms fail on egocentric videos. In this paper, we propose a robust method for camera pose estimation, designed specifically for egocentric videos. In an egocentric video, the camera views the same scene point multiple times as the wearer's head sweeps back and forth. We use this specific motion profile to perform short loop closures aligned with the wearer's footsteps. For egocentric videos, depth estimation is usually noisy. In an important departure, we use 2D computations for rotation averaging which do not rely upon depth estimates. These two modifications result in a much more stable algorithm, as is evident from our experiments on various egocentric video datasets for different egocentric applications. The proposed algorithm resolves a long-standing problem in egocentric vision and unlocks new usage scenarios for future applications.
The work closest to ours is LSD-SLAM @cite_14, which performs dense tracking directly on @math to explicitly detect scale drift. LSD-SLAM builds upon @cite_24, continuously estimating a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. Given an inverse depth map, both methods estimate the camera motion by non-linear minimization in combination with a coarse-to-fine scheme, as originally proposed in @cite_2. The minimization is done using weighted Gauss-Newton optimization on Lie manifolds. Our method also uses a similar optimization technique for the initial camera pose estimation.
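A sketch of one such weighted Gauss-Newton solver with a coarse-to-fine schedule is shown below. The `res_jac` callback (residuals and their 6-DoF pose Jacobian) is a hypothetical placeholder, and the update is applied in the tangent space for brevity; a faithful Lie-manifold implementation would compose the increment onto SE(3) (or Sim(3)) via the exponential map.

```python
import numpy as np

def gauss_newton_pose(res_jac, xi0, n_iters=10, huber_delta=4.0):
    """Weighted Gauss-Newton on a 6-vector twist xi parameterizing the pose.

    res_jac(xi) -> (r, J): photometric residuals r (n,) and Jacobian J (n, 6)
    with respect to the pose parameters; a Huber weight down-weights outliers.
    """
    xi = xi0
    for _ in range(n_iters):
        r, J = res_jac(xi)
        absr = np.abs(r)
        w = np.where(absr <= huber_delta, 1.0,
                     huber_delta / np.maximum(absr, 1e-12))
        JW = J * w[:, None]
        H = JW.T @ J                  # 6x6 Gauss-Newton Hessian approximation
        g = JW.T @ r
        xi = xi - np.linalg.solve(H, g)
    return xi

def coarse_to_fine(res_jac_per_level, xi0):
    """Run the solver per pyramid level, coarsest first, warm-starting the pose."""
    xi = xi0
    for res_jac in res_jac_per_level:
        xi = gauss_newton_pose(res_jac, xi)
    return xi
```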
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_2" ], "mid": [ "2140599684", "612478963", "2021930164" ], "abstract": [ "We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking - which does not depend on visual features - while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is of large practical value for robotics and augmented reality applications.", "We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.", "The goal of our work is to provide a fast and accurate method to estimate the camera motion from RGB-D images. Our approach registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error. We estimate the camera motion using non-linear minimization in combination with a coarse-to-fine scheme. To allow for noise and outliers in the image data, we propose to use a robust error function that reduces the influence of large residuals. Furthermore, our formulation allows for the inclusion of a motion model which can be based on prior knowledge, temporal filtering, or additional sensors like an IMU. Our method is attractive for robots with limited computational resources as it runs in real-time on a single CPU core and has a small, constant memory footprint. In an extensive set of experiments carried out both on a benchmark dataset and synthetic data, we demonstrate that our approach is more accurate and robust than previous methods. We provide our software under an open source license." ] }
1701.04743
2949854475
Finding the camera pose is an important step in many egocentric video applications. It has been widely reported that state-of-the-art SLAM algorithms fail on egocentric videos. In this paper, we propose a robust method for camera pose estimation, designed specifically for egocentric videos. In an egocentric video, the camera views the same scene point multiple times as the wearer's head sweeps back and forth. We use this specific motion profile to perform short loop closures aligned with the wearer's footsteps. For egocentric videos, depth estimation is usually noisy. In an important departure, we use 2D computations for rotation averaging which do not rely upon depth estimates. These two modifications result in a much more stable algorithm, as is evident from our experiments on various egocentric video datasets for different egocentric applications. The proposed algorithm resolves a long-standing problem in egocentric vision and unlocks new usage scenarios for future applications.
As discussed in detail in @cite_6, loop closures are detected using three major approaches in the literature: map-to-map, image-to-image and image-to-map. L. Clemente et al. @cite_17 use a map-to-map approach, finding correspondences between common features in two sub-maps. M. Cummins and P. Newman @cite_5 use visual features for image-to-image loop closure; matching is performed based on the presence or absence of these features from a visual vocabulary. B. Williams et al. @cite_32 use an image-to-map approach, detecting loop closures by re-localizing the camera, i.e., estimating its pose relative to map correspondences. Under the assumption that aerial video views a roughly planar ground, so that homographies can register the frames, Leotta et al. @cite_25 proposed a homography-guided loop closure algorithm to address the long-term loop closure problem. In this paper, we use an image-to-image approach for detecting loop closures.
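A minimal sketch of the image-to-image scheme: each keyframe is summarized as a bag-of-visual-words histogram, and a loop closure is declared when an old keyframe's histogram is sufficiently similar to the current one. The brute-force quantization and L1 similarity here are simplifications; @cite_5, for instance, uses a learned probabilistic appearance model instead.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors against a visual vocabulary and return an
    L1-normalized word histogram for the image."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                        # nearest visual word per feature
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def detect_loop_closure(query, keyframes, min_score=0.8, min_gap=30):
    """Image-to-image loop detection: return the best old keyframe whose
    histogram similarity exceeds min_score; the most recent min_gap frames
    are skipped to avoid matching trivially against the immediate past."""
    best, best_score = None, min_score
    for idx, h in enumerate(keyframes[:-min_gap]):
        score = 1.0 - 0.5 * np.abs(query - h).sum()  # L1 similarity in [0, 1]
        if score > best_score:
            best, best_score = idx, score
    return best
```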
{ "cite_N": [ "@cite_32", "@cite_6", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "2107486774", "", "2144824356", "2400830868", "120566300" ], "abstract": [ "In this paper we present a loop closure method for a handheld single-camera SLAM system based on our previous work on relocalization. By finding correspondences between the current image and the map, our system is able to reliably detect loop closures. We compare our algorithm to existing techniques for loop closure in single-camera SLAM based on both image-to-image and map-to-map correspondences and discuss both the reliability and suitability of each algorithm in the context of monocular SLAM.", "", "This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization, but can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment—identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm complexity is linear in the number of places in the map, and is particularly suitable for online loop closure detection in mobile robotics.", "Structure-from-motion (SfM) is a well-studied problem in the computer vision field and is of particular interest for aerial imaging applications like mapping, terrain modeling, crop monitoring, etc. With the current rapid growth in the commercial UAV and small satellite markets, aerial SfM is becoming even more important. In recent years, free and open source software has enabled almost anyone to apply SfM to their data at no cost. Existing free packages are oriented toward processing unordered collections of photographs and are less efficient at processing ordered collections or video. They are also prone to failure with nearly planar scenes that often arise in aerial photography. While commercial solutions for aerial SfM exist, they are proprietary and expensive. This paper presents a new open source software toolkit named MAP-Tk that targets SfM for aerial video. It exploits temporal continuity and the aerial nature of the data to speed up and improve feature tracking, loop closure, and bundle adjustment. The system is highly configurable and modular. Dynamic plugins provide state-of-the art algorithms from other open source projects like OpenCV, VXL, and Ceres Solver. This paper presents the modular system design, user interface, novel algorithms exploiting aerial video, and results on various aerial data sets.", "This paper presents a method for Simultaneous Localization and Mapping (SLAM) relying on a monocular camera as the only sensor which is able to build outdoor, closedloop maps much larger than previously achieved with such input. Our system, based on the Hierarchical Map approach [1], builds independent local maps in real-time using the EKF-SLAM technique and the inverse depth representation proposed in [2]. The main novelty in the local mapping process is the use of a data association technique that greatly improves its robustness in dynamic and complex environments. 
A new visual map matching algorithm stitches these maps together and is able to detect large loops automatically, taking into account the unobservability of scale intrinsic to pure monocular SLAM. The loop closing constraint is applied at the upper level of the Hierarchical Map in near real-time. We present experimental results demonstrating monocular SLAM as a human carries a camera over long walked trajectories in outdoor areas with people and other clutter, even in the more difficult case of forward-looking camera, and show the closing of loops of several hundred meters." ] }
1701.04653
2573459536
In this paper, we investigate whether text from a Community Question Answering (QA) platform can be used to predict and describe real-world attributes. We experiment with predicting a wide range of 62 demographic attributes for neighbourhoods of London. We use text from the QA platform Yahoo! Answers and compare our results to the ones obtained from Twitter microblogs. Outcomes show that the correlation between the demographic attributes predicted using text from Yahoo! Answers discussions and the observed demographic attributes can reach an average Pearson correlation coefficient of ρ = 0.54, slightly higher than the predictions obtained using Twitter data. Our qualitative analysis indicates that there is semantic relatedness between the highest correlated terms extracted from both datasets and their relative demographic attributes. Furthermore, the correlations highlight the different natures of the information contained in Yahoo! Answers and Twitter. While the former seems to offer a more encyclopedic content, the latter provides information related to current sociocultural aspects or phenomena.
The availability of huge amounts of data from many social media platforms has inspired researchers to study the relation between the data on these platforms and many real-world attributes. Twitter data, in particular, has been widely used as a social media source for making predictions in many domains. For example, box-office revenues are predicted using text from Twitter microblogs @cite_3, and election results have been predicted by performing content analysis on tweets @cite_18. It has also been shown that the mood states of collective tweets correlate with the value of the Dow Jones Industrial Average (DJIA) @cite_6.
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_3" ], "mid": [ "1590495275", "2171468534", "2015186536" ], "abstract": [ "Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content-analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research.", "Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e. can societies experience mood states that affect their collective decision making? By extension is the public mood correlated or even predictive of economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds by two mood tracking tools, namely OpinionFinder that measures positive vs. negative mood and Google-Profile of Mood States (GPOMS) that measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6 in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error by more than 6 . Index Terms—stock market prediction — twitter — mood analysis.", "In recent years, social media has become ubiquitous and important for social networking and content sharing. And yet, the content that is generated from these websites remains largely untapped. In this paper, we demonstrate how social media content can be used to predict real-world outcomes. In particular, we use the chatter from Twitter.com to forecast box-office revenues for movies. We show that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors. We further demonstrate how sentiments extracted from Twitter can be utilized to improve the forecasting power of social media." ] }
1701.04653
2573459536
In this paper, we investigate whether text from a Community Question Answering (QA) platform can be used to predict and describe real-world attributes. We experiment with predicting a wide range of 62 demographic attributes for neighbourhoods of London. We use text from the QA platform Yahoo! Answers and compare our results to the ones obtained from Twitter microblogs. Outcomes show that the correlation between the demographic attributes predicted using text from Yahoo! Answers discussions and the observed demographic attributes can reach an average Pearson correlation coefficient of ρ = 0.54, slightly higher than the predictions obtained using Twitter data. Our qualitative analysis indicates that there is semantic relatedness between the highest correlated terms extracted from both datasets and their relative demographic attributes. Furthermore, the correlations highlight the different natures of the information contained in Yahoo! Answers and Twitter. While the former seems to offer a more encyclopedic content, the latter provides information related to current sociocultural aspects or phenomena.
Predicting the demographics of individual users from their language on social media platforms, especially Twitter, has been the focus of much research. Text from blogs, telephone conversations, and forum posts has been utilised to predict an author's age @cite_1, with a Pearson correlation of @math. Geo-tagged Twitter data has been used to predict demographic information about authors, such as first language, race, and ethnicity, with correlations up to @math @cite_8.
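The common methodology behind such studies can be sketched as a regularized regression from aggregate term frequencies to an attribute, evaluated by the Pearson correlation between predicted and observed values. The ridge model and 5-fold setup below are illustrative defaults, not the exact pipelines of the cited works.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def predict_attribute(term_freqs: np.ndarray, attribute: np.ndarray):
    """Regress one demographic attribute on per-area term frequencies and
    report the Pearson correlation between predictions and observed values.

    term_freqs: (n_areas, n_terms) normalized word frequencies per area.
    attribute:  (n_areas,) observed census value of the attribute.
    """
    preds = cross_val_predict(Ridge(alpha=1.0), term_freqs, attribute, cv=5)
    rho, _ = pearsonr(preds, attribute)
    return preds, rho
```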
{ "cite_N": [ "@cite_1", "@cite_8" ], "mid": [ "1914856875", "2144364794" ], "abstract": [ "While the study of the connection between discourse patterns and personal identification is decades old, the study of these patterns using language technologies is relatively recent. In that more recent tradition we frame author age prediction from text as a regression problem. We explore the same task using three very different genres of data simultaneously: blogs, telephone conversations, and online forum posts. We employ a technique from domain adaptation that allows us to train a joint model involving all three corpora together as well as separately and analyze differences in predictive features across joint and corpus-specific aspects of the model. Effective features include both stylistic ones (such as POS patterns) as well as content oriented ones. Using a linear regression model based on shallow text features, we obtain correlations up to 0.74 and mean absolute errors between 4.1 and 6.8 years.", "We present a method to discover robust and interpretable sociolinguistic associations from raw geotagged text data. Using aggregate demographic statistics about the authors' geographic communities, we solve a multi-output regression problem between demographics and lexical frequencies. By imposing a composite e1,∞ regularizer, we obtain structured sparsity, driving entire rows of coefficients to zero. We perform two regression studies. First, we use term frequencies to predict demographic attributes; our method identifies a compact set of words that are strongly associated with author demographics. Next, we conjoin demographic attributes into features, which we use to predict term frequencies. The composite regularizer identifies a small number of features, which correspond to communities of authors united by shared demographic and linguistic properties." ] }
1701.04653
2573459536
In this paper, we investigate whether text from a Community Question Answering (QA) platform can be used to predict and describe real-world attributes. We experiment with predicting a wide range of 62 demographic attributes for neighbourhoods of London. We use text from the QA platform Yahoo! Answers and compare our results to the ones obtained from Twitter microblogs. Outcomes show that the correlation between the demographic attributes predicted using text from Yahoo! Answers discussions and the observed demographic attributes can reach an average Pearson correlation coefficient of ρ = 0.54, slightly higher than the predictions obtained using Twitter data. Our qualitative analysis indicates that there is semantic relatedness between the highest correlated terms extracted from both datasets and their relative demographic attributes. Furthermore, the correlations highlight the different natures of the information contained in Yahoo! Answers and Twitter. While the former seems to offer a more encyclopedic content, the latter provides information related to current sociocultural aspects or phenomena.
One aspect of urban life that has been the focus of much research in urban data mining is finding correlations between different sources of data and the Index of Multiple Deprivation (IMD) of neighbourhoods across a city or a country @cite_15 @cite_14. Cellular data @cite_2 and the urban elements present in an area @cite_17 are among the non-textual data sources shown to correlate with deprivation. Public transport flow data has also been used to find correlations (with a correlation coefficient of @math) with the IMD of urban areas available in the UK census @cite_15. Research shows that correlations of @math exist between the sentiment expressed in the tweets of a community's users and the deprivation index of that community @cite_14.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_17", "@cite_2" ], "mid": [ "86330731", "2099352442", "2115195106", "2019597798" ], "abstract": [ "A key facet of urban design, planning, and monitoring is measuring communities' well-being. Historically, researchers have established a link between well-being and visibility of city neighbourhoods and have measured visibility via quantitative studies with willing participants, a process that is invariably manual and cumbersome. However, the influx of the world's population into urban centres now calls for methods that can easily be implemented, scaled, and analysed. We propose that one such method is offered by pervasive technology: we test whether urban mobility--as measured by public transport fare collection sensors--is a viable proxy for the visibility of a city's communities. We validate this hypothesis by examining the correlation between London urban flow of public transport and census-based indices of the well-being of London's census areas. We find that not only are the two correlated, but a number of insights into the flow between areas of varying social standing can be uncovered with readily available transport data. For example, we find that deprived areas tend to preferentially attract people living in other deprived areas, suggesting a segregation effect.", "Policy makers are calling for new socio-economic measures that reflect subjective well-being, to complement traditional measures of material welfare as the Gross Domestic Product (GDP). Self-reporting has been found to be reasonably accurate in measuring one's well-being and conveniently tallies with sentiment expressed on social media (e.g., those satisfied with life use more positive than negative words in their Facebook status updates). Social media content can thus be used to track well-being of individuals. A question left unexplored is whether such content can be used to track well-being of entire physical communities as well. To this end, we consider Twitter users based in a variety of London census communities, and study the relationship between sentiment expressed in tweets and community socio-economic well-being. We find that the two are highly correlated: the higher the normalized sentiment score of a community's tweets, the higher the community's socio-economic well-being. This suggests that monitoring tweets is an effective way of tracking community well-being too.", "Measuring socioeconomic deprivation of cities in an accurate and timely fashion has become a priority for governments around the world, as the massive urbanization process we are witnessing is causing high levels of inequalities which require intervention. Traditionally, deprivation indexes have been derived from census data, which is however very expensive to obtain, and thus acquired only every few years. Alternative computational methods have been proposed in recent years to automatically extract proxies of deprivation at a fine spatio-temporal level of granularity; however, they usually require access to datasets (e.g., call details records) that are not publicly available to governments and agencies. To remedy this, we propose a new method to automatically mine deprivation at a fine level of spatio-temporal granularity that only requires access to freely available user-generated content. More precisely, the method needs access to datasets describing what urban elements are present in the physical environment; examples of such datasets are Foursquare and OpenStreetMap. 
Using these datasets, we quantitatively describe neighborhoods by means of a metric, called Offering Advantage, that reflects which urban elements are distinctive features of each neighborhood. We then use that metric to (i) build accurate classifiers of urban deprivation and (ii) interpret the outcomes through thematic analysis. We apply the method to three UK urban areas of different scale and elaborate on the results in terms of precision and recall.", "Governments and other organisations often rely on data collected by household surveys and censuses to identify areas in most need of regeneration and development projects. However, due to the high cost associated with the data collection process, many developing countries conduct such surveys very infrequently and include only a rather small sample of the population, thus failing to accurately capture the current socio-economic status of the country's population. In this paper, we address this problem by means of a methodology that relies on an alternative source of data from which to derive up to date poverty indicators, at a very fine level of spatio-temporal granularity. Taking two developing countries as examples, we show how to analyse the aggregated call detail records of mobile phone subscribers and extract features that are strongly correlated with poverty indexes currently derived from census data." ] }
1701.04653
2573459536
In this paper, we investigate whether text from a Community Question Answering (QA) platform can be used to predict and describe real-world attributes. We experiment with predicting a wide range of 62 demographic attributes for neighbourhoods of London. We use text from the QA platform Yahoo! Answers and compare our results to the ones obtained from Twitter microblogs. Outcomes show that the correlation between the demographic attributes predicted using text from Yahoo! Answers discussions and the observed demographic attributes can reach an average Pearson correlation coefficient of ρ = 0.54, slightly higher than the predictions obtained using Twitter data. Our qualitative analysis indicates that there is semantic relatedness between the highest correlated terms extracted from both datasets and their relative demographic attributes. Furthermore, the correlations highlight the different natures of the information contained in Yahoo! Answers and Twitter. While the former seems to offer a more encyclopedic content, the latter provides information related to current sociocultural aspects or phenomena.
Social media data has been used in many domains to find links to real-world attributes. Data generated on QA platforms, however, has not previously been used for predicting such attributes. In this paper, we use discussions on a QA platform to predict demographic attributes of city neighbourhoods. Previous work in this domain has mainly focused on predicting the deprivation index of areas @cite_14; in this work, we look at a wider range of attributes and report prediction results on @math demographic attributes. Additionally, prior work in urban prediction uses geolocation-based platforms such as Twitter, whereas the QA data utilised in this paper does not include geolocation information. Utilising such data presents its own challenges.
{ "cite_N": [ "@cite_14" ], "mid": [ "2099352442" ], "abstract": [ "Policy makers are calling for new socio-economic measures that reflect subjective well-being, to complement traditional measures of material welfare as the Gross Domestic Product (GDP). Self-reporting has been found to be reasonably accurate in measuring one's well-being and conveniently tallies with sentiment expressed on social media (e.g., those satisfied with life use more positive than negative words in their Facebook status updates). Social media content can thus be used to track well-being of individuals. A question left unexplored is whether such content can be used to track well-being of entire physical communities as well. To this end, we consider Twitter users based in a variety of London census communities, and study the relationship between sentiment expressed in tweets and community socio-economic well-being. We find that the two are highly correlated: the higher the normalized sentiment score of a community's tweets, the higher the community's socio-economic well-being. This suggests that monitoring tweets is an effective way of tracking community well-being too." ] }
1701.04600
2951261017
There has been considerable work on improving the popular clustering algorithm K-means in terms of both mean squared error (MSE) and speed. However, most of the k-means variants tend to compute the distance of each data point to each cluster centroid in every iteration. We propose a fast heuristic to overcome this bottleneck with only a marginal increase in MSE. We observe that across all iterations of K-means, a data point changes its membership only among a small subset of clusters. Our heuristic predicts such clusters for each data point by looking at nearby clusters after the first iteration of k-means. We augment well-known variants of k-means with our heuristic to demonstrate its effectiveness. For various synthetic and real-world datasets, our heuristic achieves speed-ups of up to 3 times when compared to efficient variants of k-means.
In the last three decades, there has been significant work on improving Lloyd's algorithm @cite_0, both in terms of reducing MSE and running time. Follow-up work on Lloyd's algorithm can be broadly divided into three categories: better seed selection @cite_6 @cite_5, selecting an ideal value for the number of clusters @cite_2, and bounds on data-point-to-centroid distances @cite_3 @cite_1 @cite_7. Arthur et al. @cite_6 provided a better method for seed selection, based on a probability distribution over the closest-cluster-centroid distance of each data point. Likas et al. @cite_5 proposed the global k-means method, which selects one seed at a time to reduce the final mean squared error. Pham et al. @cite_2 designed a novel function to evaluate the goodness of clustering for various candidate numbers of clusters. Elkan @cite_3 uses the triangle inequality to avoid redundant computations of the distance between data points and cluster centroids. Pelleg and Moore @cite_1 and @cite_7 proposed similar algorithms based on k-d trees: both construct a k-d tree over the dataset to be clustered. Though these approaches have shown good results, k-d trees perform poorly for datasets in higher dimensions.
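As an illustration of the seeding line of work, the sketch below implements the D^2 seeding of Arthur et al. @cite_6: after a uniformly random first seed, each subsequent seed is sampled with probability proportional to its squared distance to the nearest seed chosen so far.

```python
import numpy as np

def kmeans_pp_seeds(X: np.ndarray, k: int, rng=None) -> np.ndarray:
    """k-means++ seeding: each new seed is drawn with probability
    proportional to its squared distance to the nearest existing seed."""
    rng = rng if rng is not None else np.random.default_rng(0)
    seeds = [X[rng.integers(len(X))]]            # first seed chosen uniformly
    for _ in range(k - 1):
        S = np.asarray(seeds)
        # squared distance of every point to its closest chosen seed
        d2 = ((X[:, None, :] - S[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        seeds.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.asarray(seeds)
```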
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5" ], "mid": [ "2161160262", "1999668761", "2171852577", "2073459066", "2150593711", "2006907251", "2140405352" ], "abstract": [ "In k-means clustering, we are given a set of n data points in d-dimensional space R sup d and an integer k and the problem is to determine a set of k points in Rd, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's (1982) algorithm. We present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation.", "Abstract : We present new algorithms for the k-means clustering problem. They use the kd-tree data structure to reduce the large number of nearest-neighbor queries issued by the traditional algorithm. Sufficient statistics are stored in the nodes of the kd-tree. Then an analysis of the geometry of the current cluster centers results in great reduction of the work needed to update the centers. Our algorithms behave exactly as the traditional k-means algorithm. Proofs of correctness are included. The kd-tree can also be used to initialize the k-means starting centers efficiently. Our algorithms can be easily extended to provide fast ways of computing the error of a given cluster assignment regardless of the method in which those clusters were obtained. We also show how to use them in a setting which allows approximate clustering results, with the benefit of running faster. We have implemented and tested our algorithms on both real and simulated data. Results show a speedup factor of up to 170 on real astrophysical data, and superiority over the naive algorithm on simulated data in up to 5 dimensions. Our algorithms scale well with respect to the number of points and number of centers allowing for clustering with tens of thousands of centers.", "The k-means algorithm is by far the most widely used method for discovering clusters in data. We show how to accelerate it dramatically, while still always computing exactly the same result as the standard algorithm. The accelerated algorithm avoids unnecessary distance calculations by applying the triangle inequality in two different ways, and by keeping track of lower and upper bounds for distances between points and centers. Experiments show that the new algorithm is effective for datasets with up to 1000 dimensions, and becomes more and more effective as the number k of clusters increases. For k ≥ 20 it is many times faster than the best previously known accelerated k-means method.", "The k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster. Although it offers no accuracy guarantees, its simplicity and speed are very appealing in practice. 
By augmenting k-means with a very simple, randomized seeding technique, we obtain an algorithm that is Θ(logk)-competitive with the optimal clustering. Preliminary experiments show that our augmentation improves both the speed and the accuracy of k-means, often quite dramatically.", "It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. The optimum quautization schemes for 2^ b quanta, b=1,2, , 7 , are given numerically for Gaussian and for Laplacian distribution of signal amplitudes.", "AbstractThe K-means algorithm is a popular data-clustering algorithm. However, one of its drawbacks is the requirement for the number of clusters, K, to be specified before the algorithm is applied. This paper first reviews existing methods for selecting the number of clusters for the algorithm. Factors that affect this selection are then discussed and a new measure to assist the selection is proposed. The paper concludes with an analysis of the results of using the proposed measure to determine the number of clusters for the K-means algorithm for different data sets.", "We present the global k-means algorithm which is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure consisting of N (with N being the size of the data set) executions of the k-means algorithm from suitable initial positions. We also propose modifications of the method to reduce the computational load without significantly affecting solution quality. The proposed clustering methods are tested on well-known data sets and they compare favorably to the k-means algorithm with random restarts." ] }
1701.04540
2949893671
Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite the recent advances in deep learning that attain impressive results in many domains, pain estimation risks not being able to benefit from this due to the difficulty in obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale, whilst simultaneously scoring a 67.3 Pearson correlation coefficient between our predicted pain level time series and the ground truth.
In terms of binary pain classification, Ashraf et al. @cite_20 predicted pain in patients with shoulder injuries using a combination of shape and Active Appearance Models (AAMs), both at frame and sequence level. Lucey et al. @cite_0 extended this further by using non-rigidly normalized 3D AAMs to tackle the spontaneous head movements associated with pain; head movements were represented by pitch, yaw and roll computed from the 3D parameters derived from the AAM. Similar to @cite_20, they found that fusing shape and appearance features yielded a significant performance improvement. Taking this further, researchers have attempted to distinguish between real and posed pain @cite_12 @cite_9.
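As a rough illustration of the fusion step, the sketch below concatenates per-frame shape and appearance feature vectors and trains a linear SVM; this is one simple realization of feature-level fusion, not the exact AAM pipelines of the cited works.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_pain_classifier(shape_feats, appearance_feats, labels):
    """Feature-level fusion of per-frame shape (landmark) and appearance
    features by concatenation, followed by a binary pain / no-pain SVM."""
    X = np.hstack([shape_feats, appearance_feats])   # fused feature vector
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    clf.fit(X, labels)
    return clf
```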
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_12", "@cite_20" ], "mid": [ "2104067190", "2005733474", "2062024333", "2090495691" ], "abstract": [ "In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they are: 1) subjective, and 2) give no specific timing information. Coding pain as a series of facial action units (AUs) can avoid these issues as it can be used to gain an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, in this paper, we describe an active appearance model (AAM)-based system that can automatically detect the frames in video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly that of expression and head movement due to the patient's reaction to pain. In this paper, we show that the AAM can deal with these movements and can achieve significant improvements in both the AU and pain detection performance compared to the current-state-of-the-art approaches which utilize similarity-normalized appearance features only.", "We present initial results from the application of an automated facial expression recognition system to spontaneous facial expressions of pain. In this study, 26 participants were videotaped under three experimental conditions: baseline, posed pain, and real pain. The real pain condition consisted of cold pressor pain induced by submerging the arm in ice water. Our goal was to (1) assess whether the automated measurements were consistent with expression measurements obtained by human experts, and (2) develop a classifier to automatically differentiate real from faked pain in a subject-independent manner from the automated measurements. We employed a machine learning approach in a two-stage system. In the first stage, a set of 20 detectors for facial actions from the Facial Action Coding System operated on the continuous video stream. These data were then passed to a second machine learning stage, in which a classifier was trained to detect the difference between expressions of real pain and fake pain. Naive human subjects tested on the same videos were at chance for differentiating faked from real pain, obtaining only 49 accuracy. The automated system was successfully able to differentiate faked from real pain. In an analysis of 26 subjects with faked pain before real pain, the system obtained 88 correct for subject independent discrimination of real versus fake pain on a 2-alternative forced choice. Moreover, the most discriminative facial actions in the automated system were consistent with findings using human expert FACS codes.", "Summary In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. 
Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55 . However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85 accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.", "Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?" ] }
1701.04540
2949893671
Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite the recent advances in deep learning that attain impressive results in many domains, pain estimation risks not being able to benefit from this due to the difficulty in obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale, whilst simultaneously scoring a 67.3 Pearson correlation coefficient between our predicted pain level time series and the ground truth.
Though action-unit-based pain recognition has witnessed significant advances, challenges remain that have inhibited its practical use in clinical settings. These include poor recognition rates due to out-of-plane head movements, illumination changes, inaccessible faces (as with newborns in ICUs), and the general aversion to being 'watched' by cameras. To mitigate these problems, a number of studies have explored pain recognition from other pain indicators, such as audible sounds or cries, body movement, and physiological signals. Contextual variables have also received attention in emotion recognition @cite_18, based on the idea that the same facial expression can have different connotations depending on the current scenario. However, not much has been done in this area due to the difficulty of capturing and measuring such data.
{ "cite_N": [ "@cite_18" ], "mid": [ "1989461254" ], "abstract": [ "The current paper presents an automatic and context sensitive system for the dynamic recognition of pain expression among the six basic facial expressions and neutral on acted and spontaneous sequences. A machine learning approach based on the Transferable Belief Model, successfully used previously to categorize the six basic facial expressions in static images [2,61], is extended in the current paper for the automatic and dynamic recognition of pain expression from video sequences in a hospital context application. The originality of the proposed method is the use of the dynamic information for the recognition of pain expression and the combination of different sensors, permanent facial features behavior, transient features behavior, and the context of the study, using the same fusion model. Experimental results, on 2-alternative forced choices and, for the first time, on 8-alternative forced choices (i.e. pain expression is classified among seven other facial expressions), show good classification rates even in the case of spontaneous pain sequences. The mean classification rates on acted and spontaneous data reach 81.2 and 84.5 for the 2-alternative and 8-alternative forced choices, respectively. Moreover, the system performances compare favorably to the human observer rates (76 ), and lead to the same doubt states in the case of blend expressions." ] }
1701.04540
2949893671
Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite the recent advances in deep learning that attain impressive results in many domains, pain estimation risks not being able to benefit from this due to the difficulty in obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining an RMSE of less than 1 point on a 16-level pain scale, whilst simultaneously scoring a 67.3 Pearson correlation coefficient between our predicted pain level time series and the ground truth.
Sound or cry analysis is most common in newborns, as crying is a major form of expression at that age. Acoustic characteristics have been analysed to discriminate between normal and pain-induced cries in newborns @cite_10. Compared to pain analysis from audio-visual signals, physiological signals have received less attention: they have been used for valence and arousal detection of other emotions (e.g. joy, sadness, anger and pleasure), but only a few studies have applied them to pain detection @cite_19 @cite_25 @cite_26. The use of bio-signals is still far from practical, because it remains difficult to map physiological patterns to specific emotions. Nonetheless, physiological signals are reputed to be robust to intentional emotion suppression, since they are directly controlled by the nervous system, and the information can be collected in real time using bio-sensors @cite_23. Commonly used physiological signals include galvanic skin response (GSR), photoplethysmogram (PPG), electrocardiogram (ECG), respiration changes and skin conductivity.
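As an example of the kind of feature used in such analyses, the sketch below computes sub-band spectral entropy for one signal frame. It splits the spectrum into equal-width linear bands for simplicity; the cry-analysis work cited below applies a Mel filter bank instead, which this sketch omits.

```python
import numpy as np

def subband_spectral_entropy(frame: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Per-band spectral entropy of one signal frame (e.g. a cry segment or a
    physiological-signal window): entropy of the normalized power spectrum
    inside each of n_bands equal-width frequency bands."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    entropies = []
    for band in np.array_split(power, n_bands):
        p = band / (band.sum() + 1e-12)              # normalize within the band
        entropies.append(-(p * np.log2(p + 1e-12)).sum())
    return np.asarray(entropies)
```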
{ "cite_N": [ "@cite_26", "@cite_19", "@cite_23", "@cite_10", "@cite_25" ], "mid": [ "1996089789", "2066658540", "2153771043", "2536532022", "" ], "abstract": [ "How much does it hurt? Accurate assessment of pain is very important for selecting the right treatment, however current methods are not sufficiently valid and reliable in many cases. Automatic pain monitoring may help by providing an objective and continuous assessment. In this paper we propose an automatic pain recognition system combining information from video and biomedical signals, namely facial expression, head movement, galvanic skin response, electromyography and electrocardiogram. Using the BioVid Heat Pain Database, the system is evaluated in the task of pain detection showing significant improvement over the current state of the art. Further, we discuss the relevance of the modalities and compare person-specific and generic classification models.", "Pain is what the patient says it is. But what about these who cannot utter? Automatic pain monitoring opens up prospects for better treatment, but accurate assessment of pain is challenging due to the subjective nature of pain. To facilitate advances, we contribute a new dataset, the BioVid Heat Pain Database which contains videos and physiological data of 90 persons subjected to well-defined pain stimuli of 4 intensities. We propose a fully automatic recognition system utilizing facial expression, head pose information and their dynamics. The approach is evaluated with the task of pain detection on the new dataset, also outlining open challenges for pain monitoring in general. Additionally, we analyze the relevance of head pose information for pain recognition and compare person-specific and general classification models.", "Little attention has been paid so far to physiological signals for emotion recognition compared to audio-visual emotion channels, such as facial expressions or speech. In this paper, we discuss the most important stages of a fully implemented emotion recognition system including data analysis and classification. For collecting physiological signals in different affective states, we used a music induction method which elicits natural emotional reactions from the subject. Four-channel biosensors are used to obtain electromyogram, electrocardiogram, skin conductivity and respiration changes. After calculating a sufficient amount of features from the raw signals, several feature selection reduction methods are tested to extract a new feature set consisting of the most significant features for improving classification performance. Three well-known classifiers, linear discriminant function, k-nearest neighbour and multilayer perceptron, are then used to perform supervised classification", "Infant cry is a multimodal behavior that contains a lot of information about the infant, particularly, information about the health of the infant. In this paper a new feature in infant cry analysis is presented for recognition two groups: infants with pain and normal infants, by Mel frequency multi-band entropy extraction from infant's cry. In signal processing stage we made pre-processing included silence elimination, filtering, pre-emphasizing. After taking Fourier transform, spectral entropy was computed as single feature of signal. In classifying stage, by training artificial neural network, correction rate of recognition was obtained 66.9 . In order to enhancement in results, we used Mel filter bank. Entropy of each sub-band constitutes elements of next feature vector. 
We used PCA analysis for reducing in dimension of the recent feature vector. After ANN training, correction rate improved to 88.5 . So multiband spectral entropy enhanced results in salient correction rate.", "" ] }
1701.04540
2949893671
Automatic continuous time, continuous value assessment of a patient's pain from face video is highly sought after by the medical profession. Despite the recent advances in deep learning that attain impressive results in many domains, pain estimation risks not being able to benefit from this due to the difficulty in obtaining data sets of considerable size. In this work we propose a combination of hand-crafted and deep-learned features that makes the most of deep learning techniques in small sample settings. Encoding shape, appearance, and dynamics, our method significantly outperforms the current state of the art, attaining a RMSE error of less than 1 point on a 16-level pain scale, whilst simultaneously scoring a 67.3 Pearson correlation coefficient between our predicted pain level time series and the ground truth.
Attempts have also been made to combine visual features with physiological signals in @cite_26 . Their findings show that using combined data sources yields better performance than individual sources. However, rather than pain intensity estimation, only pairwise pain classification was achieved for the four pain levels contained in the BioVid pain database.
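To make the feature-level fusion idea concrete, the following sketch concatenates visual and physiological feature vectors and trains one classifier on a single pairwise (baseline vs. pain) task, using synthetic data. The feature dimensions and classifier are illustrative assumptions, not the setup of @cite_26 .

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
video_feats = rng.normal(size=(n, 50))   # stand-in for facial-expression descriptors
physio_feats = rng.normal(size=(n, 12))  # stand-in for GSR/ECG/EMG statistics
y = rng.integers(0, 2, size=n)           # 0 = baseline, 1 = highest pain level

# Early (feature-level) fusion: concatenate modalities, then classify.
X = np.hstack([video_feats, physio_feats])
clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())

On real data one would expect the fused representation to outperform either modality alone, which is the central finding of @cite_26 .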
{ "cite_N": [ "@cite_26" ], "mid": [ "1996089789" ], "abstract": [ "How much does it hurt? Accurate assessment of pain is very important for selecting the right treatment, however current methods are not sufficiently valid and reliable in many cases. Automatic pain monitoring may help by providing an objective and continuous assessment. In this paper we propose an automatic pain recognition system combining information from video and biomedical signals, namely facial expression, head movement, galvanic skin response, electromyography and electrocardiogram. Using the BioVid Heat Pain Database, the system is evaluated in the task of pain detection showing significant improvement over the current state of the art. Further, we discuss the relevance of the modalities and compare person-specific and generic classification models." ] }
1701.04224
2573865693
The Aduio-visual Speech Recognition (AVSR) which employs both the video and audio information to do Automatic Speech Recognition (ASR) is one of the application of multimodal leaning making ASR system more robust and accuracy. The traditional models usually treated AVSR as inference or projection but strict prior limits its ability. As the revival of deep learning, Deep Neural Networks (DNN) becomes an important toolkit in many traditional classification tasks including ASR, image classification, natural language processing. Some DNN models were used in AVSR like Multimodal Deep Autoencoders (MDAEs), Multimodal Deep Belief Network (MDBN) and Multimodal Deep Boltzmann Machine (MDBM) that actually work better than traditional methods. However, such DNN models have several shortcomings: (1) They don't balance the modal fusion and temporal fusion, or even haven't temporal fusion; (2)The architecture of these models isn't end-to-end, the training and testing getting cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome such weakness. The am-LSTM could be trained and tested once, moreover easy to train and preventing overfitting automatically. The extensibility and flexibility are also take into consideration. The experiments show that am-LSTM is much better than traditional methods and other DNN models in three datasets.
The representative early work in this field used multi-stream hidden Markov models (mHMMs) @cite_12 , which were shown to offer adaptability and flexibility for modeling sequential and temporal data @cite_8 . But probabilistic models have some explicit limitations, especially their strong priors. Researchers are also interested in jointly mapping data from dissimilar spaces into a single common space.
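A deliberately simplified sketch of the multi-stream idea is given below, assuming the hmmlearn package is available: one HMM is trained per modality, and classification scores are combined as a weighted sum of per-stream log-likelihoods. This illustrates late score fusion only; it does not model the state asynchrony captured by the coupled and factorial HMMs of @cite_12 . All data, dimensions and the stream weight w are placeholders.

import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency: pip install hmmlearn

rng = np.random.default_rng(1)
audio = rng.normal(size=(300, 13))  # stand-in for MFCC frames
video = rng.normal(size=(300, 20))  # stand-in for lip-region features

# One HMM per stream (per word class, in a full recognizer).
hmm_a = GaussianHMM(n_components=5, random_state=0).fit(audio)
hmm_v = GaussianHMM(n_components=5, random_state=0).fit(video)

def fused_score(a, v, w=0.7):
    # Weighted multi-stream log-likelihood; w trades off the two modalities.
    return w * hmm_a.score(a) + (1.0 - w) * hmm_v.score(v)

print(fused_score(audio, video))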
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "1978380426", "2125838338" ], "abstract": [ "The use of visual features in audio-visual speech recognition (AVSR) is justified by both the speech generation mechanism, which is essentially bimodal in audio and visual representation, and by the need for features that are invariant to acoustic noise perturbation. As a result, current AVSR systems demonstrate significant accuracy improvements in environments affected by acoustic noise. In this paper, we describe the use of two statistical models for audio-visual integration, the coupled HMM (CHMM) and the factorial HMM (FHMM), and compare the performance of these models with the existing models used in speaker dependent audio-visual isolated word recognition. The statistical properties of both the CHMM and FHMM allow to model the state asynchrony of the audio and visual observation sequences while preserving their natural correlation over time. In our experiments, the CHMM performs best overall, outperforming all the existing models and the FHMM.", "This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. >" ] }
1701.04224
2573865693
The Aduio-visual Speech Recognition (AVSR) which employs both the video and audio information to do Automatic Speech Recognition (ASR) is one of the application of multimodal leaning making ASR system more robust and accuracy. The traditional models usually treated AVSR as inference or projection but strict prior limits its ability. As the revival of deep learning, Deep Neural Networks (DNN) becomes an important toolkit in many traditional classification tasks including ASR, image classification, natural language processing. Some DNN models were used in AVSR like Multimodal Deep Autoencoders (MDAEs), Multimodal Deep Belief Network (MDBN) and Multimodal Deep Boltzmann Machine (MDBM) that actually work better than traditional methods. However, such DNN models have several shortcomings: (1) They don't balance the modal fusion and temporal fusion, or even haven't temporal fusion; (2)The architecture of these models isn't end-to-end, the training and testing getting cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome such weakness. The am-LSTM could be trained and tested once, moreover easy to train and preventing overfitting automatically. The extensibility and flexibility are also take into consideration. The experiments show that am-LSTM is much better than traditional methods and other DNN models in three datasets.
Deep learning provides a powerful toolkit for machine learning. In AVSR, DNNs likewise yield a clear increase in accuracy. @cite_22 uses a pre-trained CNN to extract visual features and denoising autoencoders to improve aural features, and then mHMMs for fusion and classification. As mentioned in , this approach mainly exploits DNNs for feature extraction.
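The denoising-autoencoder component can be sketched in a few lines of PyTorch: a small network is trained to reconstruct clean acoustic features from artificially corrupted ones, and its outputs (or hidden codes) then serve as noise-robust features. The dimensions, noise level and architecture below are illustrative assumptions, not the configuration of @cite_22 .

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    # Small denoising autoencoder for acoustic feature vectors.
    def __init__(self, dim=39, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.randn(256, 39)                # stand-in for clean MFCC features
noisy = clean + 0.3 * torch.randn_like(clean)
for _ in range(100):                        # learn the mapping noisy -> clean
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()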
{ "cite_N": [ "@cite_22" ], "mid": [ "2076462394" ], "abstract": [ "Audio-visual speech recognition (AVSR) system is thought to be one of the most promising solutions for reliable speech recognition, particularly when the audio is corrupted by noise. However, cautious selection of sensory features is crucial for attaining high recognition performance. In the machine-learning community, deep learning approaches have recently attracted increasing attention because deep neural networks can effectively extract robust latent features that enable various recognition algorithms to demonstrate revolutionary generalization capabilities under diverse application conditions. This study introduces a connectionist-hidden Markov model (HMM) system for noise-robust AVSR. First, a deep denoising autoencoder is utilized for acquiring noise-robust audio features. By preparing the training data for the network with pairs of consecutive multiple steps of deteriorated audio features and the corresponding clean features, the network is trained to output denoised audio features from the corresponding features deteriorated by noise. Second, a convolutional neural network (CNN) is utilized to extract visual features from raw mouth area images. By preparing the training data for the CNN as pairs of raw images and the corresponding phoneme label outputs, the network is trained to predict phoneme labels from the corresponding mouth area input images. Finally, a multi-stream HMM (MSHMM) is applied for integrating the acquired audio and visual HMMs independently trained with the respective features. By comparing the cases when normal and denoised mel-frequency cepstral coefficients (MFCCs) are utilized as audio features to the HMM, our unimodal isolated word recognition results demonstrate that approximately 65 word recognition rate gain is attained with denoised MFCCs under 10 dB signal-to-noise-ratio (SNR) for the audio signal input. Moreover, our multimodal isolated word recognition results utilizing MSHMM with denoised MFCCs and acquired visual features demonstrate that an additional word recognition rate gain is attained for the SNR conditions below 10 dB." ] }
1701.04066
2952694556
In an Ultra-dense network (UDN) where there are more base stations (BSs) than active users, it is possible that many BSs are instantaneously left idle. Thus, how to utilize these dormant BSs by means of cooperative transmission is an interesting question. In this paper, we investigate the performance of a UDN with two types of cooperation schemes: non-coherent joint transmission (JT) without channel state information (CSI) and coherent JT with full CSI knowledge. We consider a bounded dual-slope path loss model to describe UDN environments where a user has several BSs in the near-field and the rest in the far-field. Numerical results show that non-coherent JT cannot improve the user spectral efficiency (SE) due to the simultaneous increment in signal and interference powers. For coherent JT, the achievable SE gain depends on the range of near-field, the relative densities of BSs and users, and the CSI accuracy. Finally, we assess the energy efficiency (EE) of cooperation in UDN. Despite costing extra energy consumption, cooperation can still improve EE under certain conditions.
Joint transmission (JT) is a potential solution which allows multiple BSs to jointly serve one user. In traditional fully loaded cellular networks, JT can turn dominant interferers into useful signals, as shown in Fig. (a), while the other interferers remain unchanged. Thus, the desired signal strength increases and the interference decreases simultaneously, at the cost of reduced scheduling probability. It is known that JT enhances the performance of cell-edge users in macro cellular networks @cite_2 . However, the nature of interference is completely different in a UDN, because turning on dormant BSs is a double-edged sword: it improves the desired received signal strength but generates extra interference and energy consumption. In Fig. (b), if all the users get assistance from nearby sleeping BSs, the interference grows rapidly along with the desired signal power. Therefore, how to design cooperation schemes in UDN that overcome the concurrent interference becomes a big challenge. A cooperative UDN architecture is proposed in @cite_15 , but without further discussion of cooperation schemes or performance evaluation.
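The double-edged nature of JT can be sketched numerically. In the toy snippet below, the user sums the received powers of its k strongest BSs (non-coherent JT) while all remaining BSs interfere; the BS layout, the single-slope exponent and the cooperation rule are simplifying assumptions for illustration only, not the system model of this paper.

import numpy as np

rng = np.random.default_rng(2)
n_bs, alpha = 200, 4.0
bs = rng.uniform(-500, 500, size=(n_bs, 2))  # BS locations; user at the origin
d = np.sort(np.linalg.norm(bs, axis=1))
p = d ** (-alpha)                            # received powers (single-slope loss)

def sir_noncoherent_jt(p, k):
    # Non-coherent JT: the k strongest BSs add up in power at the user;
    # every other BS contributes interference.
    return p[:k].sum() / p[k:].sum()

for k in (1, 2, 3):
    print(k, sir_noncoherent_jt(p, k))

Note that this fully loaded baseline hides the UDN effect discussed above: when cooperation wakes otherwise dormant BSs, the interference seen by other users grows as well.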
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "2362238774", "2124122450" ], "abstract": [ "Ultra-dense networking (UDN) is considered as a promising technology for 5G. In this article, we define user-centric UDN (UUDN) by introducing the philosophy of the network serving user and the \"de-cellular\" method. Based on the analysis of challenges and requirements of UUDN, a new architecture is presented that breaks through the traditional cellular architecture of the network controlling user. Dynamic AP grouping is proposed as the core function of UUDN, through which a user could enjoy satisfactory and secure service following her movement. Furthermore, we provide methods for mobility management, resource management, interference management, and security issues. We point out that these functions should be co-designed and jointly optimized in order to improve the system throughput with higher resource utilization, better user experience, and increased energy efficiency. Finally, future works in UUDN are discussed.", "Coordinated multi-point (CoMP) has been selected as a key technology feature of LTE-Advanced, as it enables the exploitation of inter-cell interference in order to significantly increase spectral efficiency, especially at the cell-edge. While first field trials on CoMP schemes have delivered the proof-of-concept and shown that a moderate extent of theoretically predicated CoMP gains can indeed be achieved in practical systems, the implementation of these schemes has revealed many practical challenges. One central question is, for example, how small cooperation clusters can be extracted from large cellular systems, such that major portions of potential CoMP gains can be obtained at minimum signaling overhead. This paper deals with static clustering concepts, and shows that both in a hexagonal cell layout and under a realistic deployment and signal propagation scenario, static clustering concepts can perform close to optimal UE-specific clustering, while being easy to use and requiring negligible signaling overhead." ] }
1701.04066
2952694556
In an Ultra-dense network (UDN) where there are more base stations (BSs) than active users, it is possible that many BSs are instantaneously left idle. Thus, how to utilize these dormant BSs by means of cooperative transmission is an interesting question. In this paper, we investigate the performance of a UDN with two types of cooperation schemes: non-coherent joint transmission (JT) without channel state information (CSI) and coherent JT with full CSI knowledge. We consider a bounded dual-slope path loss model to describe UDN environments where a user has several BSs in the near-field and the rest in the far-field. Numerical results show that non-coherent JT cannot improve the user spectral efficiency (SE) due to the simultaneous increment in signal and interference powers. For coherent JT, the achievable SE gain depends on the range of near-field, the relative densities of BSs and users, and the CSI accuracy. Finally, we assess the energy efficiency (EE) of cooperation in UDN. Despite costing extra energy consumption, cooperation can still improve EE under certain conditions.
To examine the impact of JT on UDN, it is important to incorporate the propagation characteristics of UDN properly. In a UDN environment where cell sizes become much smaller, the widely accepted unbounded single-slope path loss model, i.e., @math , becomes dubious. Radio signals in the near-field may experience much less absorption and diffraction loss than those in the far-field, resulting in dissimilar path loss exponents. Besides, the probability of a link being within a reference distance, @math , becomes high, so this phenomenon cannot be neglected in the analysis. Hence, a bounded path loss model with multiple slopes becomes necessary for modeling the UDN scenario. The impacts of bounded and multi-slope path loss models in fully loaded networks are studied separately in @cite_16 and in @cite_9 @cite_7 . However, the combination of the two effects remains to be explored. Moreover, the full-load assumption becomes implausible in the UDN environment, since the BS density exceeds the user density @cite_12 @cite_10 .
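One possible parameterization of a bounded dual-slope model is sketched below: unit gain within a 1 m bound, exponent a1 in the near-field up to a critical radius rc, and exponent a2 beyond it, matched for continuity at rc. The constants are illustrative assumptions rather than the exact model of this paper; a function like this could replace the single-slope law in the JT sketch above.

import numpy as np

def dual_slope_bounded(r, rc=50.0, a1=2.0, a2=4.0):
    # Bounded dual-slope path loss: g(r) = 1 for r <= 1, r^(-a1) up to rc,
    # and rc^(a2 - a1) * r^(-a2) beyond rc (continuous at rc).
    r = np.asarray(r, dtype=float)
    near = np.where(r <= 1.0, 1.0, r ** (-a1))
    far = rc ** (a2 - a1) * r ** (-a2)
    return np.where(r <= rc, near, far)

print(dual_slope_bounded([0.5, 10.0, 50.0, 200.0]))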
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "", "2134686390", "2110532959", "1480189262", "2952796263" ], "abstract": [ "", "Existing cellular network analyses, and even simulations, typically use the standard path loss model where received power decays like @math over a distance @math . This standard path loss model is quite idealized, and in most scenarios the path loss exponent @math is itself a function of @math , typically an increasing one. Enforcing a single path loss exponent can lead to orders of magnitude differences in average received and interference powers versus the true values. In this paper, we study multi-slope path loss models, where different distance ranges are subject to different path loss exponents. We focus on the dual-slope path loss function, which is a piece-wise power law and continuous and accurately approximates many practical scenarios. We derive the distributions of SIR, SNR, and finally SINR before finding the potential throughput scaling, which provides insight on the observed cell-splitting rate gain. The exact mathematical results show that the SIR monotonically decreases with network density, while the converse is true for SNR, and thus the network coverage probability in terms of SINR is maximized at some finite density. With ultra-densification (network density goes to infinity), there exists a phase transition in the near-field path loss exponent @math : if @math unbounded potential throughput can be achieved asymptotically; if $ 0 , ultra-densification leads in the extreme case to zero throughput.", "This paper addresses the following question: how reliable is it to use the unbounded path-loss model G(d) = d-alpha, where alpha is the path-loss exponent, to model the decay of transmitted signal power in wireless networks? G(d) is a good approximation for the path-loss in wireless communications for large values of d but is not valid for small values of d due to the singularity at 0. This model is often used along with a random uniform node distribution, even though in a group of uniformly distributed nodes some may be arbitrarily close to one another. The unbounded path-loss model is compared to a more realistic bounded path-loss model, and it is shown that the effect of the singularity on the total network interference level is significant and cannot be disregarded when nodes are uniformly distributed. A phase transition phenomenon occurring in the interference behavior is analyzed in detail. Several performance metrics are also examined by using the computed interference distributions. In particular, the effects of the singularity at 0 on bit error rate, packet success probability and wireless channel capacity are analyzed.", "Dense deployment which brings small base stations (BS) closer to mobile devices is considered as a promising solution to the booming traffic demand. Meanwhile, the utilization of new frequency bands and spectrum aggregation techniques provide more options for spectrum choice.Whether to increase BS density or to acquire more spectrum is a key strategic question for mobile operators. In this paper, we investigate the relationship between BS density and spectrum with regard to individual user throughput target. Our work takes into account load-dependent interference model and various traffic demands. Numerical results show that densification is more effective in sparse networks than in already dense networks. 
In sparse networks, doubling BS density results in almost twofold throughput increase. However, in dense networks where BSs outnumber users, more than 10 times of BS density is needed to double user throughput. Meanwhile, spectrum has a linear relationship with user throughput for a given BS density. The impact of traffic types is also discussed. Even with the same area throughput requirement, different combination of user density and individual traffic amount leads to different needs for BS density and spectrum.", "There have been a bulk of analytic results about the performance of cellular networks where base stations are regularly located on a hexagonal or square lattice. This regular model cannot reflect the reality, and tends to overestimate the network performance. Moreover, tractable analysis can be performed only for a fixed location user (e.g., cell center or edge user). In this paper, we use the stochastic geometry approach, where base stations can be modeled as a homogeneous Poisson point process. We also consider the user density, and derive the user outage probability that an arbitrary user is under outage owing to low signal-to-interference-plus-noise ratio or high congestion by multiple users. Using the result, we calculate the density of success transmissions in the downlink cellular network. An interesting observation is that the success transmission density increases with the base station density, but the increasing rate diminishes. This means that the number of base stations installed should be more than @math -times to increase the network capacity by a factor of @math . Our results will provide a framework for performance analysis of the wireless infrastructure with a high density of access points, which will significantly reduce the burden of network-level simulations." ] }
1701.04238
2952317013
We present a novel extension of Thompson Sampling for stochastic sequential decision problems with graph feedback, even when the graph structure itself is unknown and or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, linking its performance to the underlying properties of the graph. Thompson Sampling has the advantage of being applicable without the need to construct complicated upper confidence bounds for different problems. We illustrate its performance through extensive experimental results on real and simulated networks with graph feedback. More specifically, we tested our algorithms on power law, planted partitions and Erdo's-Renyi graphs, as well as on graphs derived from Facebook and Flixster data. These all show that our algorithms clearly outperform related methods that employ upper confidence bounds, even if the latter use more information about the graph.
Optimal policies for the stochastic multi-armed bandit problem were first characterised by @cite_14 , while index-based optimal policies for general non-parametric problems were given by @cite_24 . Later @cite_8 proved finite-time regret bounds for a number of UCB (Upper Confidence Bound) index policies, while @cite_13 proved finite-time bounds for index policies similar to those of @cite_24 , with problem-dependent bounds @math . Recently, a number of policies based on sampling from the posterior distribution (i.e. Thompson sampling @cite_4 ) were analysed in both the frequentist @cite_9 and Bayesian setting @cite_11 and shown to obtain the same order of regret bound for the stochastic case. For the bandit problem the bounds are of order @math . The analysis for the full information case generally results in @math bounds on the regret @cite_0 , i.e. with a much lower dependence on the number of arms.
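For concreteness, the basic Thompson sampling loop for Bernoulli bandits takes only a few lines: maintain a Beta posterior per arm, sample one mean from each posterior, and play the arm with the largest sample. The arm means and horizon below are arbitrary toy values.

import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([0.3, 0.5, 0.7])  # unknown to the learner
K, T = len(true_means), 5000
alpha = np.ones(K)  # Beta(1, 1) prior per arm
beta = np.ones(K)

regret = 0.0
for t in range(T):
    theta = rng.beta(alpha, beta)      # one posterior sample per arm
    a = int(np.argmax(theta))          # play the arm with the largest sample
    r = rng.random() < true_means[a]   # Bernoulli reward
    alpha[a] += r
    beta[a] += 1 - r
    regret += true_means.max() - true_means[a]
print(regret)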
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_24", "@cite_0", "@cite_13", "@cite_11" ], "mid": [ "", "", "", "2949186496", "1998376807", "1570963478", "1501823362", "2962901934" ], "abstract": [ "", "", "", "The multi-armed bandit problem is a popular model for studying exploration exploitation trade-off in sequential decision problems. Many algorithms are now available for this well-studied problem. One of the earliest algorithms, given by W. R. Thompson, dates back to 1933. This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. Thompson Sampling algorithm has experimentally been shown to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties such as small regret for delayed feedback. However, theoretical understanding of this algorithm was quite limited. In this paper, for the first time, we show that Thompson Sampling algorithm achieves logarithmic expected regret for the multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time @math is @math . And, for the @math -armed bandit problem, the expected regret in time @math is @math . Our bounds are optimal but for the dependence on @math and the constant factors in big-Oh.", "In this paper we consider the problem of adaptive control for Markov Decision Processes. We give the explicit form for a class of adaptive policies that possess optimal increase rate properties for the total expected finite horizon reward, under sufficient assumptions of finite state-action spaces and irreducibility of the transition law. A main feature of the proposed policies is that the choice of actions, at each state and time period, is based on indices that are inflations of the right-hand side of the estimated average reward optimality equations.", "1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix.", "This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB or UCB2; second, in the special case of Bernoulli rewards, it reaches the lower bound of Lai and Robbins. Furthermore, we show that simple adaptations of the KL-UCB algorithm are also optimal for specific classes of (possibly unbounded) rewards, including those generated from exponential families of distributions. A large-scale numerical study comparing KL-UCB with its main competitors (UCB, UCB2, UCB-Tuned, UCB-V, DMED) shows that KL-UCB is remarkably efficient and stable, including for short time horizons. KL-UCB is also the only method that always performs better than the basic UCB policy. Our regret bounds rely on deviations results of independent interest which are stated and proved in the Appendix. 
As a by-product, we also obtain an improved regret bound for the standard UCB algorithm.", "We provide an information-theoretic analysis of Thompson sampling that applies across a broad range of online optimization problems in which a decision-maker must learn from partial feedback. This analysis inherits the simplicity and elegance of information theory and leads to regret bounds that scale with the entropy of the optimal-action distribution. This strengthens preexisting results and yields new insight into how information improves performance." ] }
1701.04238
2952317013
We present a novel extension of Thompson Sampling for stochastic sequential decision problems with graph feedback, even when the graph structure itself is unknown and or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, linking its performance to the underlying properties of the graph. Thompson Sampling has the advantage of being applicable without the need to construct complicated upper confidence bounds for different problems. We illustrate its performance through extensive experimental results on real and simulated networks with graph feedback. More specifically, we tested our algorithms on power law, planted partitions and Erdo's-Renyi graphs, as well as on graphs derived from Facebook and Flixster data. These all show that our algorithms clearly outperform related methods that employ upper confidence bounds, even if the latter use more information about the graph.
Intermediate cases between full information and bandit feedback can be obtained through graph feedback, introduced in @cite_2 , which is the focus of this paper. In particular, @cite_20 and @cite_16 analysed graph feedback problems with stochastic and adversarial reward sequences, respectively. Specifically, @cite_20 analysed variants of Upper Confidence Bound policies, for which they obtained @math problem-dependent bounds. In more recent work, @cite_17 also introduced algorithms for graphs where the structure is never fully revealed, showing that (unlike in the bandit setting) there is a large gap in regret between the adversarial and stochastic cases. In particular, they show that in the adversarial setting one cannot do better than ignore all additional feedback, while they provide an action-elimination algorithm for the stochastic setting. Finally, @cite_5 obtain a problem-dependent bound of the form @math , where @math is the linear programming relaxation of @math and @math is the minimum degree of @math .
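A sketch of how UCB-style policies exploit graph feedback is given below: playing an arm updates the statistics of that arm and of all its neighbours in the feedback graph. This is a simplified rendering of the side-observation idea of @cite_20 , not their exact policy; the graph and means are toy values.

import numpy as np

def ucb_with_side_obs(means, adj, T, rng):
    # adj: boolean adjacency matrix with self-loops; playing arm a reveals
    # the rewards of every arm j with adj[a, j] == True.
    K = len(means)
    n = np.zeros(K)  # observation counts
    s = np.zeros(K)  # reward sums
    regret = 0.0
    for t in range(T):
        bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(n, 1))
        ucb = np.where(n > 0, s / np.maximum(n, 1) + bonus, np.inf)
        a = int(np.argmax(ucb))
        regret += means.max() - means[a]
        obs = np.flatnonzero(adj[a])           # the played arm and its neighbours
        rewards = rng.random(obs.size) < means[obs]
        n[obs] += 1
        s[obs] += rewards
    return regret

rng = np.random.default_rng(4)
means = np.array([0.2, 0.4, 0.6, 0.8])
adj = np.eye(4, dtype=bool)
adj[0, 1] = adj[1, 0] = True                   # one side-observation edge
print(ucb_with_side_obs(means, adj, T=2000, rng=rng))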
{ "cite_N": [ "@cite_2", "@cite_5", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "2143140272", "1970930040", "1794753160", "1822035899", "2399101391" ], "abstract": [ "We consider an adversarial online learning setting where a decision maker can choose an action in every stage of the game. In addition to observing the reward of the chosen action, the decision maker gets side observations on the reward he would have obtained had he chosen some of the other actions. The observation structure is encoded as a graph, where node i is linked to node j if sampling i provides information on the reward of j. This setting naturally interpolates between the well-known \"experts\" setting, where the decision maker can view all rewards, and the multi-armed bandits setting, where the decision maker can only view the reward of the chosen action. We develop practical algorithms with provable regret guarantees, which depend on non-trivial graph-theoretic properties of the information feedback structure. We also provide partially-matching lower bounds.", "We study the stochastic multi-armed bandit (MAB) problem in the presence of side-observations across actions. In our model, choosing an action provides additional side observations for a subset of the remaining actions. One example of this model occurs in the problem of targeting users in online social networks where users respond to their friends's activity, thus providing information about each other's preferences. Our contributions are as follows: 1) We derive an asymptotic (with respect to time) lower bound (as a function of the network structure) on the regret (loss) of any uniformly good policy that achieves the maximum long term average reward. 2) We propose two policies - a randomized policy and a policy based on the well-known upper confidence bound (UCB) policies, both of which explore each action at a rate that is a function of its network position. We show that these policies achieve the asymptotic lower bound on the regret up to a multiplicative factor independent of network structure. The upper bound guarantees on the regret of these policies are better than those of existing policies. Finally, we use numerical examples on a real-world social network to demonstrate the significant benefits obtained by our policies against other existing policies.", "We study a general class of online learning problems where the feedback is specified by a graph. This class includes online prediction with expert advice and the multi-armed bandit problem, but also several learning problems where the online player does not necessarily observe his own loss. We analyze how the structure of the feedback graph controls the inherent difficulty of the induced @math -round learning problem. Specifically, we show that any feedback graph belongs to one of three classes: strongly observable graphs, weakly observable graphs, and unobservable graphs. We prove that the first class induces learning problems with @math minimax regret, where @math is the independence number of the underlying graph; the second class induces problems with @math minimax regret, where @math is the domination number of a certain portion of the graph; and the third class induces problems with linear minimax regret. Our results subsume much of the previous work on learning with feedback graphs and reveal new connections to partial monitoring games. 
We also show how the regret is affected if the graphs are allowed to vary with time.", "This paper considers stochastic bandits with side observations, a model that accounts for both the exploration exploitation dilemma and relationships between arms. In this setting, after pulling an arm i, the decision maker also observes the rewards for some other actions related to i. We will see that this model is suited to content recommendation in social networks, where users' reactions may be endorsed or not by their friends. We provide efficient algorithms based on upper confidence bounds (UCBs) to leverage this additional information and derive new bounds improving on standard regret guarantees. We also evaluate these policies in the context of movie recommendation in social networks: experiments on real datasets show substantial learning rate speedups ranging from 2.2x to 14x on dense networks.", "We study an online learning framework introduced by Mannor and Shamir (2011) in which the feedback is specified by a graph, in a setting where the graph may vary from round to round and is to the learner. We show a large gap between the adversarial and the stochastic cases. In the adversarial case, we prove that even for dense feedback graphs, the learner cannot improve upon a trivial regret bound obtained by ignoring any additional feedback besides her own loss. In contrast, in the stochastic case we give an algorithm that achieves @math regret over @math rounds, provided that the independence numbers of the hidden feedback graphs are at most @math . We also extend our results to a more general feedback model, in which the learner does not necessarily observe her own loss, and show that, even in simple cases, concealing the feedback graphs might render a learnable problem unlearnable." ] }
1701.04238
2952317013
We present a novel extension of Thompson Sampling for stochastic sequential decision problems with graph feedback, even when the graph structure itself is unknown and or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, linking its performance to the underlying properties of the graph. Thompson Sampling has the advantage of being applicable without the need to construct complicated upper confidence bounds for different problems. We illustrate its performance through extensive experimental results on real and simulated networks with graph feedback. More specifically, we tested our algorithms on power law, planted partitions and Erdo's-Renyi graphs, as well as on graphs derived from Facebook and Flixster data. These all show that our algorithms clearly outperform related methods that employ upper confidence bounds, even if the latter use more information about the graph.
In this paper, we provide much simpler strategies based on Thompson sampling, with a matching regret bound. Unlike previous work, these are also applicable to graphs whose structure is unknown or changing over time. More specifically: We extend @cite_11 to graph-structured feedback, and obtain a problem-independent bound of @math . Using planted partition models, we verify the bound's dependence on the clique cover. We provide experiments on data drawn from two types of random graphs, Erdős-Rényi graphs and power law graphs, showing that our algorithms clearly outperform UCB and its variations @cite_20 . Finally, we measure performance on graphs estimated from the data used in @cite_20 . Once again, Thompson sampling clearly outperforms UCB and its variants.
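Thompson sampling adapts to graph feedback in the same way, and, because it only updates whatever rewards happen to be revealed, it requires no prior knowledge of the graph, in line with the unknown and changing-graph setting above. The sketch below is an illustration of this idea, not the exact algorithm analysed in this paper; it reuses the toy setup from the UCB sketch.

import numpy as np

def ts_with_graph_feedback(means, adj, T, rng):
    # Beta posteriors are updated for every arm whose reward is revealed
    # by the neighbourhood of the played arm.
    K = len(means)
    a_, b_ = np.ones(K), np.ones(K)
    regret = 0.0
    for _ in range(T):
        i = int(np.argmax(rng.beta(a_, b_)))   # posterior sampling step
        regret += means.max() - means[i]
        obs = np.flatnonzero(adj[i])
        r = rng.random(obs.size) < means[obs]
        a_[obs] += r
        b_[obs] += 1 - r
    return regret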
{ "cite_N": [ "@cite_20", "@cite_11" ], "mid": [ "1822035899", "2962901934" ], "abstract": [ "This paper considers stochastic bandits with side observations, a model that accounts for both the exploration exploitation dilemma and relationships between arms. In this setting, after pulling an arm i, the decision maker also observes the rewards for some other actions related to i. We will see that this model is suited to content recommendation in social networks, where users' reactions may be endorsed or not by their friends. We provide efficient algorithms based on upper confidence bounds (UCBs) to leverage this additional information and derive new bounds improving on standard regret guarantees. We also evaluate these policies in the context of movie recommendation in social networks: experiments on real datasets show substantial learning rate speedups ranging from 2.2x to 14x on dense networks.", "We provide an information-theoretic analysis of Thompson sampling that applies across a broad range of online optimization problems in which a decision-maker must learn from partial feedback. This analysis inherits the simplicity and elegance of information theory and leads to regret bounds that scale with the entropy of the optimal-action distribution. This strengthens preexisting results and yields new insight into how information improves performance." ] }
1701.04451
2576893748
Deduplication finds and removes long-range data duplicates. It is commonly used in cloud and enterprise server settings and has been successfully applied to primary, backup, and archival storage. Despite its practical importance as a source-coding technique, its analysis from the point of view of information theory is missing. This paper provides such an information-theoretic analysis of data deduplication. It introduces a new source model adapted to the deduplication setting. It formalizes the two standard fixed-length and variable-length deduplication schemes, and it introduces a novel multi-chunk deduplication scheme. It then provides an analysis of these three deduplication variants, emphasizing the importance of boundary synchronization between source blocks and deduplication chunks. In particular, under fairly mild assumptions, the proposed multi-chunk deduplication scheme is shown to be order optimal.
The largest gains of data deduplication are achieved when storing different versions of the same data such as in archival storage @cite_13 @cite_19 and backup systems @cite_10 @cite_0 @cite_1 . However, data deduplication has also been successfully applied to primary storage systems @cite_27 @cite_21 . A further area of application is virtual machine hosting centers, where data deduplication is used for virtual machine migration @cite_3 and for virtual machine disk image storage @cite_25 .
{ "cite_N": [ "@cite_10", "@cite_21", "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_27", "@cite_13", "@cite_25" ], "mid": [ "1975868314", "1474119323", "", "2094224023", "200233886", "2156719566", "1490390347", "85380564", "1971212200" ], "abstract": [ "Backup is cumbersome and expensive. Individual users almost never back up their data, and backup is a significant cost in large organizations. This paper presents Pastiche, a simple and inexpensive backup system. Pastiche exploits excess disk capacity to perform peer-to-peer backup with no administrative costs. Each node minimizes storage overhead by selecting peers that share a significant amount of data. It is easy for common installations to find suitable peers, and peers with high overlap can be identified with only hundreds of bytes. Pastiche provides mechanisms for confidentiality, integrity, and detection of failed or malicious peers. A Pastiche prototype suffers only 7.4 overhead for a modified Andrew Benchmark, and restore performance is comparable to cross-machine copy.", "We present a large scale study of primary data deduplication and use the findings to drive the design of a new primary data deduplication system implemented in the Windows Server 2012 operating system. File data was analyzed from 15 globally distributed file servers hosting data for over 2000 users in a large multinational corporation. The findings are used to arrive at a chunking and compression approach which maximizes deduplication savings while minimizing the generated metadata and producing a uniform chunk size distribution. Scaling of deduplication processing with data size is achieved using a RAM frugal chunk hash index and data partitioning - so that memory, CPU, and disk seek resources remain available to fulfill the primary workload of serving IO. We present the architecture of a new primary data deduplication system and evaluate the deduplication performance and chunking aspects of the system.", "", "This paper shows how to quickly move the state of a running computer across a network, including the state in its disks, memory, CPU registers, and I O devices. We call this state a capsule. Capsule state is hardware state, so it includes the entire operating system as well as applications and running processes.We have chosen to move x86 computer states because x86 computers are common, cheap, run the software we use, and have tools for migration. Unfortunately, x86 capsules can be large, containing hundreds of megabytes of memory and gigabytes of disk data. We have developed techniques to reduce the amount of data sent over the network: copy-on-write disks track just the updates to capsule disks, \"ballooning\" zeros unused memory, demand paging fetches only needed blocks, and hashing avoids sending blocks that already exist at the remote end. We demonstrate these optimizations in a prototype system that uses VMware GSX Server virtual, machine monitor to create and run x86 capsules. The system targets networks as slow as 384 kbps.Our experimental results suggest that efficient capsule migration can improve user mobility and system management. Software updates or installations on a set of machines can be accomplished simply by distributing a capsule with the new changes. Assuming the presence of a prior capsule, the amount of traffic incurred is commensurate with the size of the update or installation package itself. 
Capsule migration makes it possible for machines to start running an application within 20 minutes on a 384 kbps link, without having to first install the application or even the underlying operating system. Furthermore, users' capsules can be migrated during a commute between home and work in even less time.", "Disk-based deduplication storage has emerged as the new-generation storage system for enterprise data protection to replace tape libraries. Deduplication removes redundant data segments to compress data into a highly compact form and makes it economical to store backups on disk instead of tape. A crucial requirement for enterprise data protection is high throughput, typically over 100 MB sec, which enables backups to complete quickly. A significant challenge is to identify and eliminate duplicate data segments at this rate on a low-cost system that cannot afford enough RAM to store an index of the stored segments and may be forced to access an on-disk index for every input segment. This paper describes three techniques employed in the production Data Domain deduplication file system to relieve the disk bottleneck. These techniques include: (1) the Summary Vector, a compact in-memory data structure for identifying new segments; (2) Stream-Informed Segment Layout, a data layout method to improve on-disk locality for sequentially accessed segments; and (3) Locality Preserved Caching, which maintains the locality of the fingerprints of duplicate segments to achieve high cache hit ratios. Together, they can remove 99 of the disk accesses for deduplication of real world workloads. These techniques enable a modern two-socket dual-core system to run at 90 CPU utilization with only one shelf of 15 disks and achieve 100 MB sec for single-stream throughput and 210 MB sec for multi-stream throughput.", "We present the Deep Store archival storage architecture, a large-scale storage system that stores immutable data efficiently and reliably for long periods of time. Archived data is stored across a cluster of nodes and recorded to hard disk. The design differentiates itself from traditional file systems by eliminating redundancy within and across files, distributing content for scalability, associating rich metadata with content, and using variable levels of replication based on the importance or degree of dependency of each piece of stored data. We evaluate the foundations of our design, including PRESIDIO, a virtual content-addressable storage framework with multiple methods for interfile and intra-file compression that effectively addresses the data-dependent variability of data compression. We measure content and metadata storage efficiency, demonstrate the need for a variable-degree replication model, and provide preliminary results for storage performance.", "Storage systems frequently maintain identical copies of data. Identifying such data can assist in the design of solutions in which data storage, transmission, and management are optimised. In this paper we evaluate three methods used to discover identical portions of data: whole file content hashing, fixed size blocking, and a chunking strategy that uses Rabin fingerprints to delimit content-defined data chunks. We assess how effective each of these strategies is in finding identical sections of data. 
In our experiments, we analysed diverse data sets from a variety of different types of storage systems including a mirrored section of sunsite.org.uk, different data profiles in the file system infrastructure of the Cambridge University Computer Laboratory, source code distribution trees, compressed data, and packed files. We report our experimental results and present a comparative analysis of these techniques. This study also shows how levels of similarity differ between data sets and file types. Finally, we discuss the advantages and disadvantages in the application of these methods in the light of our experimental results.", "", "Virtualization is becoming widely deployed in servers to efficiently provide many logically separate execution environments while reducing the need for physical servers. While this approach saves physical CPU resources, it still consumes large amounts of storage because each virtual machine (VM) instance requires its own multi-gigabyte disk image. Moreover, existing systems do not support ad hoc block sharing between disk images, instead relying on techniques such as overlays to build multiple VMs from a single \"base\" image. Instead, we propose the use of deduplication to both reduce the total storage required for VM disk images and increase the ability of VMs to share disk blocks. To test the effectiveness of deduplication, we conducted extensive evaluations on different sets of virtual machine disk images with different chunking strategies. Our experiments found that the amount of stored data grows very slowly after the first few virtual disk images if only the locale or software configuration is changed, with the rate of compression suffering when different versions of an operating system or different operating systems are included. We also show that fixed-length chunks work well, achieving nearly the same compression rate as variable-length chunks. Finally, we show that simply identifying zero-filled blocks, even in ready-to-use virtual machine disk images available online, can provide significant savings in storage." ] }
1701.04451
2576893748
Deduplication finds and removes long-range data duplicates. It is commonly used in cloud and enterprise server settings and has been successfully applied to primary, backup, and archival storage. Despite its practical importance as a source-coding technique, its analysis from the point of view of information theory is missing. This paper provides such an information-theoretic analysis of data deduplication. It introduces a new source model adapted to the deduplication setting. It formalizes the two standard fixed-length and variable-length deduplication schemes, and it introduces a novel multi-chunk deduplication scheme. It then provides an analysis of these three deduplication variants, emphasizing the importance of boundary synchronization between source blocks and deduplication chunks. In particular, under fairly mild assumptions, the proposed multi-chunk deduplication scheme is shown to be order optimal.
As already mentioned, data deduplication has not yet been investigated from an information-theoretic point of view. The closest problems in the information theory literature are compression with unknown alphabets @cite_26 , also known as multi-alphabet source coding @cite_23 @cite_16 , or the zero-frequency problem @cite_7 . In fact, the large repeated blocks in the source data can be interpreted as being part of an unknown alphabet that has to be learned and described by the encoder.
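To fix intuition, the fixed-length scheme can be sketched in a few lines: split the stream into fixed-size chunks, fingerprint each chunk, and store each distinct chunk only once; the stored chunks play the role of the learned alphabet and the fingerprint sequence describes the source over it. The chunk size and hash below are arbitrary choices. A variable-length scheme would instead derive chunk boundaries from the content itself (e.g. via Rabin fingerprints) so that boundaries stay synchronized after insertions, whereas a single inserted byte shifts every subsequent fixed-length boundary.

import hashlib

def dedup_fixed(data: bytes, chunk=4096):
    # Fixed-length deduplication: store each distinct chunk once and
    # represent the stream as a sequence of fingerprints.
    store, recipe = {}, []
    for i in range(0, len(data), chunk):
        c = data[i:i + chunk]
        h = hashlib.sha256(c).hexdigest()
        store.setdefault(h, c)
        recipe.append(h)
    return store, recipe

blob = b"A" * (4096 * 3) + b"B" * 4096  # three duplicate chunks + one distinct
store, recipe = dedup_fixed(blob)
print(len(recipe), len(store))          # 4 chunk references, 2 stored chunks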
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_23", "@cite_16" ], "mid": [ "2096798913", "2113641473", "", "2106981684" ], "abstract": [ "It has long been known that the compression redundancy of independent and identically distributed (i.i.d.) strings increases to infinity as the alphabet size grows. It is also apparent that any string can be described by separately conveying its symbols, and its pattern-the order in which the symbols appear. Concentrating on the latter, we show that the patterns of i.i.d. strings over all, including infinite and even unknown, alphabets, can be compressed with diminishing redundancy, both in block and sequentially, and that the compression can be performed in linear time. To establish these results, we show that the number of patterns is the Bell number, that the number of patterns with a given number of symbols is the Stirling number of the second kind, and that the redundancy of patterns can be bounded using results of Hardy and Ramanujan on the number of integer partitions. The results also imply an asymptotically optimal solution for the Good-Turing probability-estimation problem.", "Approaches to the zero-frequency problem in adaptive text compression are discussed. This problem relates to the estimation of the likelihood of a novel event occurring. Although several methods have been used, their suitability has been on empirical evaluation rather than a well-founded model. The authors propose the application of a Poisson process model of novelty. Its ability to predict novel tokens is evaluated, and it consistently outperforms existing methods. It is applied to a practical statistical coding scheme, where a slight modification is required to avoid divergence. The result is a well-founded zero-frequency model that explains observed differences in the performance of existing methods, and offers a small improvement in the coding efficiency of text compression over the best method previously known. >", "", "For lossless universal source coding of memoryless sequences with an a priori unknown alphabet size (multialphabet coding), the alphabet of the sequence must be described as well as the sequence itself. Usually an efficient description of the alphabet can be made only by taking into account some additional information. We show that these descriptions can be separated in such a way that the encoding of the actual sequence can be performed independently of the alphabet description, and present sequential coding methods for such sequences. Such methods have applications in coding methods where the alphabet description is made available sequentially, such as PPM." ] }
1701.03616
2952567239
We consider programmable matter that consists of computationally limited devices (which we call particles) that are able to self-organize in order to achieve some collective goal without the need for central control or external intervention. We use the geometric amoebot model to describe such self-organizing particle systems, which defines how particles can actively move and establish or release bonds with one another. Under this model, we investigate the feasibility of solving fundamental problems relevant to programmable matter. In this paper, we present an efficient local-control algorithm which solves the leader election problem in O(n) asynchronous rounds with high probability, where n is the number of particles in the system. Our algorithm relies only on local information (e.g., particles do not have unique identifiers, any knowledge of n, or any sort of global coordinate system), and requires only constant memory per particle.
A variety of work related to programmable matter has recently been proposed and investigated. One can distinguish between active and passive systems. In passive systems, the computational units either have no intelligence (moving and bonding is based only on their structural properties or interactions with their environment), or have limited computational capabilities but cannot control their movements. Examples of research on passive systems are DNA computing @cite_25 @cite_29 @cite_5 @cite_2 , tile self-assembly systems (e.g., the surveys in @cite_20 @cite_3 @cite_31 ), and population protocols @cite_11 . We will not describe these models in detail as they are of little relevance to our approach. Active systems, on the other hand, are composed of computational units which can control the way they act and move in order to solve a specific task. We discuss prominent examples of active systems here, as they are more comparable to our work.
{ "cite_N": [ "@cite_29", "@cite_3", "@cite_2", "@cite_5", "@cite_31", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2054579370", "2079425200", "1567090568", "2148219472", "2050994027", "1977312644", "2044709436", "1565089780" ], "abstract": [ "Abstract We show how DNA-based computers can be used to solve the satisfiability problem for boolean circuits. Furthermore, we show how DNA computers can solve optimization problems directly without first solving several decision problems. Our methods also enable random sampling of satisfying assignments.", "We first give an introduction to the field of tile-based self-assembly, focusing primarily on theoretical models and their algorithmic nature. We start with a description of Winfree’s abstract Tile Assembly Model (aTAM) and survey a series of results in that model, discussing topics such as the shapes which can be built and the computations which can be performed, among many others. Next, we introduce the more experimentally realistic kinetic Tile Assembly Model (kTAM) and provide an overview of kTAM results, focusing especially on the kTAM’s ability to model errors and several results targeted at preventing and correcting errors. We then describe the 2-Handed Assembly Model (2HAM), which allows entire assemblies to combine with each other in pairs (as opposed to the restriction of single-tile addition in the aTAM and kTAM) and doesn’t require a specified seed. We give overviews of a series of 2HAM results, which tend to make use of geometric techniques not applicable in the aTAM. Finally, we discuss and define a wide array of more recently developed models and discuss their various tradeoffs in comparison to the previous models and to each other.", "Molecular self-assembly presents a bottom-up' approach to the fabrication of objects specified with nanometre precision. DNA molecular structures and intermolecular interactions are particularly amenable to the design and synthesis of complex molecular objects. We report the design and observation of two-dimensional crystalline forms of DNA that self-assemble from synthetic DNA double-crossover molecules. Intermolecular interactions between the structural units are programmed by the design of sticky ends' that associate according to Watson-Crick complementarity, enabling us to create specific periodic patterns on the nanometre scale. The patterned crystals have been visualized by atomic force microscopy.", "Understanding how linear strings fold into 2-D and 3-D shapes has been a long sought goal in many fields of both academia and industry. This paper presents a technique to design self-assembling and self-reconfigurable systems that are composed of strings of very simple robotic modules. We show that physical strings that are composed of a small set of discrete polygonal or polyhedral modules can be used to programmatically generate any continuous area or volumetric shape. These modules can have one or two degrees of freedom (DOFs) and simple actuators with only two or three states. We describe a subdivision algorithm to produce universal polygonal and polyhedral string folding schemas, and we prove the existence of a continuous motion to reach any such folding. This technique is validated with dynamics simulations as well as experiments with chains of modules that pack on a regular cubic lattice. 
We call robotic programmable universally foldable strings “moteins” as motorized proteins.", "This short survey of recent work in tile self-assembly discusses the use of simulation to classify and separate the computational and expressive power of self-assembly models. The journey begins with the result that there is a single universal tile set that, with proper initialization and scaling, simulates any tile assembly system. This universal tile set exhibits something stronger than Turing universality: it captures the geometry and dynamics of any simulated system. From there we find that there is no such tile set in the noncooperative, or temperature 1, model, proving it weaker than the full tile assembly model. In the two-handed or hierarchal model, where large assemblies can bind together on one step, we encounter an infinite set, of infinite hierarchies, each with strictly increasing simulation power. Towards the end of our trip, we find one tile to rule them all: a single rotatable flipable polygonal tile that can simulate any tile assembly system. It seems this could be the beginning of a much longer journey, so directions for future work are suggested.", "The tools of molecular biology were used to solve an instance of the directed Hamiltonian path problem. A small graph was encoded in molecules of DNA, and the \"operations\" of the computation were performed with standard protocols and enzymes. This experiment demonstrates the feasibility of carrying out computations at the molecular level.", "Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell's membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly. How could such a process be described as \"algorithmic?\" The key word in the first sentence is automatically. Algorithms automate a series of simple computational tasks. Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth.", "Many biological systems are composed of unreliable components which self-organize efficiently into systems that can tackle complex problems. One such example is the true slimemold Physarum polycephalum which is an amoeba-like organism that seeks food sources and efficiently distributes nutrients throughout its cell body. The distribution of nutrients is accomplished by a self-assembled resource distribution network of small tubes with varying diameter which can evolve with changing environmental conditions without any global control. In this paper, we use a phenomenological model for the tube evolution in slime mold and map it to a path formation protocol for wireless sensor networks. By selecting certain evolution parameters in the protocol, the network may evolve toward single paths connecting data sources to a data sink. In other parameter regimes, the protocol may evolve toward multiple redundant paths. We present detailed analysis of a small model network. A thorough understanding of the simple network leads to design insights into appropriate parameter selection. We also validate the design via simulation of large-scale realistic wireless sensor networks using the QualNet network simulator." ] }
1701.03616
2952567239
We consider programmable matter that consists of computationally limited devices (which we call particles) that are able to self-organize in order to achieve some collective goal without the need for central control or external intervention. We use the geometric amoebot model to describe such self-organizing particle systems, which defines how particles can actively move and establish or release bonds with one another. Under this model, we investigate the feasibility of solving fundamental problems relevant to programmable matter. In this paper, we present an efficient local-control algorithm which solves the leader election problem in O(n) asynchronous rounds with high probability, where n is the number of particles in the system. Our algorithm relies only on local information (e.g., particles do not have unique identifiers, any knowledge of n, or any sort of global coordinate system), and requires only constant memory per particle.
In the area of swarm robotics, it is usually assumed that there is a collection of autonomous robots that can move freely in a given area and that have limited sensing, vision, and communication ranges. Such robots are used in a variety of contexts, including graph exploration (e.g., @cite_4 ), gathering problems (e.g., @cite_33 @cite_1 ), and shape formation problems (e.g., @cite_35 @cite_12 ). Surveys of recent results in swarm robotics can be found in @cite_14 @cite_36 ; other samples of representative work can be found in @cite_30 @cite_28 @cite_27 @cite_9 @cite_37 @cite_22 @cite_26 . While the analytic techniques developed in swarm robotics and natural swarms are of some relevance to this work, the individual units in those systems have more powerful communication and processing capabilities than in the systems we consider.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_14", "@cite_4", "@cite_33", "@cite_22", "@cite_26", "@cite_36", "@cite_28", "@cite_9", "@cite_1", "@cite_27", "@cite_12" ], "mid": [ "2085992503", "1975531816", "1965243636", "", "2020796742", "2066466105", "1658640257", "", "2468687866", "2091969952", "1983586118", "2087073465", "1967871771", "2030775133" ], "abstract": [ "Self-assembly of active, robotic agents, rather than of passive agents such as molecules, is an emerging research field that is attracting increasing attention. Active self-assembly techniques are especially attractive at very small spatial scales, where alternative construction methods are unavailable or have severe limitations. Building nanostructures by using swarms of very simple nanorobots is a promising approach for manufacturing nanoscale devices and systems. The method described in this paper allows a group of simple, physically identical, identically programmed and reactive (i.e., stateless) agents to construct and repair polygonal approximations to arbitrary structures in the plane. The distributed algorithms presented here are tolerant of robot failures and of externally-induced disturbances. The structures are self-healing, and self-replicating in a weak sense. Their components can be re-used once the structures are no longer needed. A specification of vertices at relative positions, and the edges between them, is translated by a compiler into reactive rules for assembly agents. These rules lead to the construction and repair of the specified shape. Simulation results are presented, which validate the proposed algorithms.", "From an engineering point of view, the problem of coordinating a set of autonomous, mobile robots for the purpose of cooperatively performing a task has been studied extensively over the past decade. In contrast, in this paper we aim to understand the fundamental algorithmic limitations on what a set of autonomous mobile robots can or cannot achieve. We therefore study a hard task for a set of weak robots. The task is for the robots in the plane to form any arbitrary pattern that is given in advance. This task is fundamental in the sense that if the robots can form any pattern, they can agree on their respective roles in a subsequent, coordinated action. The robots are weak in several aspects. They are anonymous; they cannot explicitly communicate with each other, but only observe the positions of the others; they cannot remember the past; they operate in a very strong form of asynchronicity. We show that the tasks that such a system of robots can perform depend strongly on their common agreement about their environment, i.e. the readings of their environment sensors. If the robots have no common agreement about their environment, they cannot form an arbitrary pattern. If each robot has a compass needle that indicates North (the robot world is a flat surface, and compass needles are parallel), then any odd number of robots can form an arbitrary pattern, but an even number cannot (in the worst case). If each robot has two independent compass needles, say North and East, then any set of robots can form any pattern.", "This paper presents a distributed algorithm whereby a group of mobile robots self-organize and position themselves into forming a circle in a loosely synchronized environment. In spite of its apparent simplicity, the difficulty of the problem comes from the weak assumptions made on the system. 
In particular, robots are anonymous, oblivious (i.e., stateless), unable to communicate directly, and disoriented in the sense that they share no knowledge of a common coordinate system. Furthermore, robots' activations are not synchronized. More specifically, the proposed algorithm ensures that robots deterministically form a non-uniform circle in a finite number of steps and converges to a situation in which all robots are located evenly on the boundary of the circle.", "", "We consider the problem of exploring an anonymous unoriented ring by a team of k identical, oblivious, asynchronous mobile robots that can view the environment but cannot communicate. This weak scenario is standard when the spatial universe in which the robots operate is the two-dimensional plane, but (with one exception) has not been investigated before for networks. Our results imply that, although these weak capabilities of robots render the problem considerably more difficult, ring exploration by a small team of robots is still possible. We first show that, when k and n are not co-prime, the problem is not solvable in general, e.g., if k divides n there are initial placements of the robots for which gathering is impossible. We then prove that the problem is always solvable provided that n and k are co-prime, for kź17, by giving an exploration algorithm that always terminates, starting from arbitrary initial configurations. Finally, we consider the minimum number ź(n) of robots that can explore a ring of size n. As a consequence of our positive result we show that ź(n) is O(logn). We additionally prove that Ω(logn) robots are necessary for infinitely many n.", "We revisit the problem of gathering autonomous robots in the plane. In particular, we consider non-transparent unit-disc robots (i.e., fat) in an asynchronous setting with vision as the only means of coordination and robots only make local decisions. We use a state-machine representation to formulate the gathering problem and develop a distributed algorithm that solves the problem for any number of fat robots. The main idea behind the algorithm is to enforce the robots to reach a configuration in which all the following hold: (i) The robots' centers form a convex hull in which all robots are on the convex hull's boundary; (ii) Each robot can see all other robots; (iii) The configuration is connected: every robot touches another robot and all robots form together a connected formation. We show that starting from any initial configuration, the fat robots eventually reach such a configuration and terminate yielding a solution to the gathering problem.", "We develop and analyze algorithms for dispersing a swarm of primitive robots in an unknown environment, R. The primary objective is to minimize the makespan, that is, the time to fill the entire region. An environment is composed of pixels that form a connected subset of the integer grid. There is at most one robot per pixel and robots move horizontally or vertically at unit speed. Robots enter R by means of k ≥ 1 door pixels. Robots are primitive finite automata, only having local communication, local sensors, and a constant-sized memory.", "", "Distributed algorithms for multi-robot systems rely on network communications to share information. However, the motion of the robots changes the network topology, which affects the information presented to the algorithm. 
For an algorithm to produce accurate output, robots need to communicate rapidly enough to keep the network topology correlated to their physical configuration. Infrequent communications will cause most multi-robot distributed algorithms to produce less accurate results, and cause some algorithms to stop working altogether. The central theme of this work is that algorithm accuracy, communications bandwidth, and physical robot speed are related. This thesis has three main contributions: First, I develop a prototypical multi-robot application and computational model, propose a set of complexity metrics to evaluate distributed algorithm performance on multi-robot systems, and introduce the idea of the robot speed ratio, a dimensionless measure of robot speed relative to message speed in networks that rely on multi-hop communication. The robot speed ratio captures key relationships between communications bandwidth, mobility, and algorithm accuracy, and can be used at design time to trade off between them. I use this speed ratio to evaluate the performance of existing distributed algorithms for multi-hop communication and navigation. Second, I present a definition of boundaries in multi-robot systems, and develop new distributed algorithms to detect and characterize them. Finally, I define the problem of dynamic task assignment, and present four distributed algorithms that solve this problem, each representing a different trade-off between accuracy, running time, and communication resources. All the algorithms presented in this work are provably correct under ideal conditions and produce verifiable real-world performance. They are self-stabilizing and robust to communications failures, population changes, and other errors. All the algorithms were tested on a swarm of 112 robots. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)", "We consider the uniform scattering problem for a set of autonomous mobile robots deployed in a grid network: starting from an arbitrary placement in the grid, using purely localized computations, the robots must move so to reach in finite time a state of static equilibrium in which they cover uniformly the grid. The theoretical quest is on determining the minimal capabilities needed by the robots to solve the problem. We prove that uniform scattering is indeed possible even for very weak robots. The proof is constructive. We present a provably correct protocol for uniform self-deployment in a grid. The protocol is fully localized, collision-free, and it makes minimal assumptions; in particular: (1) it does not require any direct or explicit communication between robots; (2) it makes no assumption on robots synchronization or timing, hence the robots can be fully asynchronous in all their actions; (3) it requires only a limited visibility range; (4) it uses at each robot only a constant size memory, hence computationally the robots can be simple Finite-State Machines; (5) it does not need a global localization system but only orientation in the grid (e.g., a compass); (6) it does not require identifiers, hence the robots can be anonymous and totally identical.", "We study the computational power of a distributed system consisting of simple autonomous robots moving on the plane. The robots are endowed with visual perception but do not have any means of explicit communication with each other, and have no memory of the past. 
In the extensive literature it has been shown how such simple robots can form a single geometric pattern (e.g., a line, a circle, etc), however arbitrary, in spite of their obliviousness. This brings to the front the natural research question: what are the real computational limits imposed by the robots being oblivious? In particular, since obliviousness limits what can be remembered, under what conditions can oblivious robots form a series of geometric patterns? Notice that a series of patterns would create some form of memory in an otherwise memory-less system. In this paper we examine and answer this question showing that, under particular conditions, oblivious robot systems can indeed form series of geometric patterns starting from any arbitrary configuration. More precisely, we study the series of patterns that can be formed by robot systems under various restrictions such as anonymity, asynchrony and lack of common orientation. These results are the first strong indication that oblivious solutions may be obtained also for tasks that intuitively seem to require memory.", "Consider a set of @math identical mobile computational entities in the plane, called robots, operating in Look-Compute-Move cycles, without any means of direct communication. The Gathering Problem is the primitive task of all entities gathering in finite time at a point not fixed in advance, without any external control. The problem has been extensively studied in the literature under a variety of strong assumptions (e.g., synchronicity of the cycles, instantaneous movements, complete memory of the past, common coordinate system, etc.). In this paper we consider the setting without those assumptions, that is, when the entities are oblivious (i.e., they do not remember results and observations from previous cycles), disoriented (i.e., have no common coordinate system), and fully asynchronous (i.e., no assumptions exist on timing of cycles and activities within a cycle). The existing algorithmic contributions for such robots are limited to solutions for @math or for restricted sets of initial configura...", "This paper studies local algorithms for autonomous robot systems, namely, algorithms that use only information of the positions of a bounded number of their nearest neighbors. The paper focuses on the spreading problem. It defines measures for the quality of spreading, presents a local algorithm for the one-dimensional spreading problem, proves its convergence to the equally spaced configuration and discusses its convergence rate in the synchronous and semi-synchronous settings. It then presents a local algorithm achieving the exact equally spaced configuration in finite time in the synchronous setting, and proves it is time optimal for local algorithms. Finally, the paper also proposes a possible algorithm for the two-dimensional case and presents partial simulation results of its effectiveness.", "When individuals swarm, they must somehow communicate to direct collective motion. Swarms of robots need to deal with outliers, such as robots that move more slowly than the rest. The authors created a large swarm of programmed robots that can form collaborations using only local information. The robots could communicate only with nearby members, within about three times their diameter. They were able to assemble into complex preprogrammed shapes. If the robots' formation hit snags when they bumped into one another or because of an outlier, additional algorithms guided them to rectify their collective movements.
Science, this issue p. 795." ] }
1701.03616
2952567239
We consider programmable matter that consists of computationally limited devices (which we call particles) that are able to self-organize in order to achieve some collective goal without the need for central control or external intervention. We use the geometric amoebot model to describe such self-organizing particle systems, which defines how particles can actively move and establish or release bonds with one another. Under this model, we investigate the feasibility of solving fundamental problems relevant to programmable matter. In this paper, we present an efficient local-control algorithm which solves the leader election problem in O(n) asynchronous rounds with high probability, where n is the number of particles in the system. Our algorithm relies only on local information (e.g., particles do not have unique identifiers, any knowledge of n, or any sort of global coordinate system), and requires only constant memory per particle.
The nubot model @cite_32 @cite_34 @cite_18 aims to provide a theoretical framework that allows for more rigorous algorithmic studies of biomolecular-inspired systems, specifically of self-assembly systems with active molecular components. While there are similarities between such systems and our self-organizing particle systems, key differences prohibit the translation of algorithms and other results under the nubot model to our systems; e.g., there is always an arbitrarily large supply of "extra" particles that can be added to a nubot system as needed, and the model includes a (non-local) notion of rigid-body movement.
{ "cite_N": [ "@cite_18", "@cite_34", "@cite_32" ], "mid": [ "2949409400", "1487420811", "2951913554" ], "abstract": [ "We describe a computational model for studying the complexity of self-assembled structures with active molecular components. Our model captures notions of growth and movement ubiquitous in biological systems. The model is inspired by biology's fantastic ability to assemble biomolecules that form systems with complicated structure and dynamics, from molecular motors that walk on rigid tracks and proteins that dynamically alter the structure of the cell during mitosis, to embryonic development where large-scale complicated organisms efficiently grow from a single cell. Using this active self-assembly model, we show how to efficiently self-assemble shapes and patterns from simple monomers. For example, we show how to grow a line of monomers in time and number of monomer states that is merely logarithmic in the length of the line. Our main results show how to grow arbitrary connected two-dimensional geometric shapes and patterns in expected time that is polylogarithmic in the size of the shape, plus roughly the time required to run a Turing machine deciding whether or not a given pixel is in the shape. We do this while keeping the number of monomer types logarithmic in shape size, plus those monomers required by the Kolmogorov complexity of the shape or pattern. This work thus highlights the efficiency advantages of active self-assembly over passive self-assembly and motivates experimental effort to construct general-purpose active molecular self-assembly systems.", "We study the computational complexity of the recently proposed nubots model of molecular-scale self-assembly. The model generalizes asynchronous cellular automaton to have non-local movement where large assemblies of molecules can be moved around, analogous to millions of molecular motors in animal muscle effecting the rapid movement of large arms and legs. We show that nubots is capable of simulating Boolean circuits of polylogarithmic depth and polynomial size, in only polylogarithmic expected time. In computational complexity terms, any problem from the complexity class NC is solved in polylogarithmic expected time on nubots that use a polynomial amount of workspace. Along the way, we give fast parallel algorithms for a number of problems including line growth, sorting, Boolean matrix multiplication and space-bounded Turing machine simulation, all using a constant number of nubot states monomer types. Circuit depth is a well-studied notion of parallel time, and our result implies that nubots is a highly parallel model of computation in a formal sense. Thus, adding a movement primitive to an asynchronous non-deterministic cellular automation, as in nubots, drastically increases its parallel processing abilities.", "We study the power of uncontrolled random molecular movement in the nubot model of self-assembly. The nubot model is an asynchronous nondeterministic cellular automaton augmented with rigid-body movement rules (push pull, deterministically and programmatically applied to specific monomers) and random agitations (nondeterministically applied to every monomer and direction with equal probability all of the time). Previous work on the nubot model showed how to build simple shapes such as lines and squares quickly---in expected time that is merely logarithmic of their size. 
These results crucially make use of the programmable rigid-body movement rule: the ability for a single monomer to control the movement of a large objects quickly, and only at a time and place of the programmers' choosing. However, in engineered molecular systems, molecular motion is largely uncontrolled and fundamentally random. This raises the question of whether similar results can be achieved in a more restrictive, and perhaps easier to justify, model where uncontrolled random movements, or agitations, are happening throughout the self-assembly process and are the only form of rigid-body movement. We show that this is indeed the case: we give a polylogarithmic expected time construction for squares using agitation, and a sublinear expected time construction to build a line. Such results are impossible in an agitation-free (and movement-free) setting and thus show the benefits of exploiting uncontrolled random movement." ] }
1701.03616
2952567239
We consider programmable matter that consists of computationally limited devices (which we call particles) that are able to self-organize in order to achieve some collective goal without the need for central control or external intervention. We use the geometric amoebot model to describe such self-organizing particle systems, which defines how particles can actively move and establish or release bonds with one another. Under this model, we investigate the feasibility of solving fundamental problems relevant to programmable matter. In this paper, we present an efficient local-control algorithm which solves the leader election problem in O(n) asynchronous rounds with high probability, where n is the number of particles in the system. Our algorithm relies only on local information (e.g., particles do not have unique identifiers, any knowledge of n, or any sort of global coordinate system), and requires only constant memory per particle.
The amoebot model @cite_10 is a model for self-organizing programmable matter that aims to provide a framework for rigorous algorithmic research on nano-scale systems. In @cite_8 , the authors describe a leader election algorithm for an abstract (synchronous) version of the amoebot model that decides the problem in expected linear time. Recently, a universal shape formation algorithm @cite_0 , a universal coating algorithm @cite_21 , and a Markov chain algorithm for the compression problem @cite_19 were introduced, showing that there is potential to investigate a wide variety of problems under this model.
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_0", "@cite_19", "@cite_10" ], "mid": [ "1567090568", "2436246434", "2474103057", "2304365451", "2027032283" ], "abstract": [ "Molecular self-assembly presents a bottom-up' approach to the fabrication of objects specified with nanometre precision. DNA molecular structures and intermolecular interactions are particularly amenable to the design and synthesis of complex molecular objects. We report the design and observation of two-dimensional crystalline forms of DNA that self-assemble from synthetic DNA double-crossover molecules. Intermolecular interactions between the structural units are programmed by the design of sticky ends' that associate according to Watson-Crick complementarity, enabling us to create specific periodic patterns on the nanometre scale. The patterned crystals have been visualized by atomic force microscopy.", "Imagine coating buildings and bridges with smart particles (also coined smart paint) that monitor structural integrity and sense and report on traffic and wind loads, leading to technology that could do such inspection jobs faster and cheaper and increase safety at the same time. In this paper, we study the problem of uniformly coating objects of arbitrary shape in the context of self-organizing programmable matter, i.e., programmable matter which consists of simple computational elements called particles that can establish and release bonds and can actively move in a self-organized way. Particles are anonymous, have constant-size memory, and utilize only local interactions in order to coat an object. We continue the study of our universal coating algorithm by focusing on its runtime analysis, showing that our algorithm terminates within a linear number of rounds with high probability. We also present a matching linear lower bound that holds with high probability. We use this lower bound to show a linear lower bound on the competitive gap between fully local coating algorithms and coating algorithms that rely on global information, which implies that our algorithm is also optimal in a competitive sense. Simulation results show that the competitive ratio of our algorithm may be better than linear in practice.", "We envision programmable matter consisting of systems of computationally limited devices (which we call particles) that are able to self-organize in order to achieve a desired collective goal without the need for central control or external intervention. Central problems for these particle systems are shape formation and coating problems. In this paper, we present a universal shape formation algorithm which takes an arbitrary shape composed of a constant number of equilateral triangles of unit size and lets the particles build that shape at a scale depending on the number of particles in the system. Our algorithm runs in O(√n) asynchronous execution rounds, where @math is the number of particles in the system, provided we start from a well-initialized configuration of the particles. This is optimal in a sense that for any shape deviating from the initial configuration, any movement strategy would require Ω(√n) rounds in the worst case (over all asynchronous activations of the particles). 
Our algorithm relies only on local information (e.g., particles do not have ids, nor do they know n, or have any sort of global coordinate system), and requires only a constant-size memory per particle.", "We consider programmable matter as a collection of simple computational elements (or particles) with limited (constant-size) memory that self-organize to solve system-wide problems of movement, configuration, and coordination. Here, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. More specifically, we seek fully distributed, local, and asynchronous algorithms that lead the system to converge to a configuration with small perimeter. We present a Markov chain based algorithm that solves the compression problem under the geometric amoebot model, for particle systems that begin in a connected configuration with no holes. The algorithm takes as input a bias parameter λ, where λ > 1 corresponds to particles favoring inducing more lattice triangles within the particle system. We show that for all λ > 5, there is a constant α > 1 such that at stationarity with all but exponentially small probability the particles are α-compressed, meaning the perimeter of the system configuration is at most α ⋅ pmin, where pmin is the minimum possible perimeter of the particle system. We additionally prove that the same algorithm can be used for expansion for small values of λ in particular, for all 0", "The term programmable matter refers to matter which has the ability to change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) in a programmable fashion, based upon user input or autonomous sensing. This has many applications like smart materials, autonomous monitoring and repair, and minimal invasive surgery, so there is a high relevance of this topic to industry and society in general. While programmable matter has just been science fiction more than two decades ago, a large amount of research activities can now be seen in this field in the recent years. Often programmable matter is envisioned, as a very large number of small locally interacting computational . We propose the Amoebot model, a new model which builds upon this vision of programmable matter. Inspired by the behavior of amoeba, the Amoebot model offers a versatile framework to model self-organizing particles and facilitates rigorous algorithmic research in the area of programmable matter." ] }
1701.03849
2949748033
This paper is focused on automatic multi-label document classification of Czech text documents. The current approaches usually use some pre-processing which can have negative impact (loss of information, additional implementation work, etc). Therefore, we would like to omit it and use deep neural networks that learn from simple features. This choice was motivated by their successful usage in many other machine learning fields. Two different networks are compared: the first one is a standard multi-layer perceptron, while the second one is a popular convolutional network. The experiments on a Czech newspaper corpus show that both networks significantly outperform baseline method which uses a rich set of features with maximum entropy classifier. We have also shown that convolutional network gives the best results.
Recently, "deep" Neural Nets (NN) have shown superior performance in many natural language processing tasks, including POS tagging, chunking, named entity recognition, and semantic role labelling @cite_23 , without any task-specific engineering. Several different topologies and learning algorithms have been proposed.
{ "cite_N": [ "@cite_23" ], "mid": [ "2158899491" ], "abstract": [ "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements." ] }
1701.03849
2949748033
This paper is focused on automatic multi-label document classification of Czech text documents. The current approaches usually use some pre-processing which can have negative impact (loss of information, additional implementation work, etc). Therefore, we would like to omit it and use deep neural networks that learn from simple features. This choice was motivated by their successful usage in many other machine learning fields. Two different networks are compared: the first one is a standard multi-layer perceptron, while the second one is a popular convolutional network. The experiments on a Czech newspaper corpus show that both networks significantly outperform baseline method which uses a rich set of features with maximum entropy classifier. We have also shown that convolutional network gives the best results.
For instance, the authors of @cite_15 propose two Convolutional Neural Nets (CNN) for ontology classification, sentiment analysis, and single-label document classification. Their networks are composed of 9 layers, out of which 6 are convolutional layers and 3 are fully-connected layers, with different numbers of hidden units and frame sizes. They show that the proposed method significantly outperforms the bag-of-words baselines on English and Chinese corpora. Another interesting work @cite_13 uses pre-trained word2vec vectors @cite_21 in the first (lookup table) layer. The authors show that the proposed models outperform the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_13" ], "mid": [ "1775434803", "", "2949541494" ], "abstract": [ "This article demontrates that we can apply deep learning to text understanding from character-level inputs all the way up to abstract text concepts, using temporal convolutional networks (ConvNets). We apply ConvNets to various large-scale datasets, including ontology classification, sentiment analysis, and text categorization. We show that temporal ConvNets can achieve astonishing performance without the knowledge of words, phrases, sentences and any other syntactic or semantic structures with regards to a human language. Evidence shows that our models can work for both English and Chinese.", "", "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification." ] }
1701.03849
2949748033
This paper is focused on automatic multi-label document classification of Czech text documents. The current approaches usually use some pre-processing which can have negative impact (loss of information, additional implementation work, etc). Therefore, we would like to omit it and use deep neural networks that learn from simple features. This choice was motivated by their successful usage in many other machine learning fields. Two different networks are compared: the first one is a standard multi-layer perceptron, while the second one is a popular convolutional network. The experiments on a Czech newspaper corpus show that both networks significantly outperform baseline method which uses a rich set of features with maximum entropy classifier. We have also shown that convolutional network gives the best results.
For additional information about the architectures, algorithms, and applications of deep learning, please refer to the survey @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2123585936" ], "abstract": [ "In this invited paper, my overview material on the same topic as presented in the plenary overview session of APSIPA-2011 and the tutorial material presented in the same conference [1] are expanded and updated to include more recent developments in deep learning. The previous and the updated materials cover both theory and applications, and analyze its future directions. The goal of this tutorial survey is to introduce the emerging area of deep learning or hierarchical learning to the APSIPA community. Deep learning refers to a class of machine learning techniques, developed largely since 2006, where many stages of non-linear information processing in hierarchical architectures are exploited for pattern classification and for feature learning. In the more recent literature, it is also connected to representation learning, which involves a hierarchy of features or concepts where higher-level concepts are defined from lower-level ones and where the same lower-level concepts help to define higher-level ones. In this tutorial survey, a brief history of deep learning research is discussed first. Then, a classificatory scheme is developed to analyze and summarize major work reported in the recent deep learning literature. Using this scheme, I provide a taxonomy-oriented survey on the existing deep architectures and algorithms in the literature, and categorize them into three classes: generative, discriminative, and hybrid. Three representative deep architectures – deep autoencoders, deep stacking networks with their generalization to the temporal domain (recurrent networks), and deep neural networks (pretrained with deep belief networks) – one in each of the three classes, are presented in more detail. Next, selected applications of deep learning are reviewed in broad areas of signal and information processing including audio speech, image vision, multimodality, language modeling, natural language processing, and information retrieval. Finally, future directions of deep learning are discussed and analyzed." ] }
1701.03849
2949748033
This paper is focused on automatic multi-label document classification of Czech text documents. The current approaches usually use some pre-processing which can have negative impact (loss of information, additional implementation work, etc). Therefore, we would like to omit it and use deep neural networks that learn from simple features. This choice was motivated by their successful usage in many other machine learning fields. Two different networks are compared: the first one is a standard multi-layer perceptron, while the second one is a popular convolutional network. The experiments on a Czech newspaper corpus show that both networks significantly outperform baseline method which uses a rich set of features with maximum entropy classifier. We have also shown that convolutional network gives the best results.
Traditional multi-layer neural networks have also been used for multi-label document classification in @cite_3 . The authors modified the standard backpropagation algorithm for multi-label learning, employing a novel error function. The approach is evaluated on functional genomics and text categorization tasks.
{ "cite_N": [ "@cite_3" ], "mid": [ "2119466907" ], "abstract": [ "In multilabel learning, each instance in the training set is associated with a set of labels and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed in the way that a neural network algorithm named BP-MLL, i.e., backpropagation for multilabel learning, is proposed. It is derived from the popular backpropagation algorithm through employing a novel error function capturing the characteristics of multilabel learning, i.e., the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms" ] }