aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1411.5057 | 1903168855 | In this paper, we propose a novel algorithm for analysis-based sparsity reconstruction. It can solve the generalized problem by structured sparsity regularization with an orthogonal basis and total variation regularization. The proposed algorithm is based on the iterative reweighted least squares (IRLS) model, which is further accelerated by the preconditioned conjugate gradient method. The convergence rate of the proposed algorithm is almost the same as that of the traditional IRLS algorithms, that is, exponentially fast. Moreover, with the specifically devised preconditioner, the computational cost for each iteration is significantly less than that of traditional IRLS algorithms, which enables our approach to handle large-scale problems. In addition to the fast convergence, it is straightforward to apply our method to standard sparsity, group sparsity, overlapping group sparsity and TV-based problems. Experiments are conducted on a practical application: compressive sensing magnetic resonance imaging. Extensive results demonstrate that the proposed algorithm achieves superior performance over 14 state-of-the-art algorithms in terms of both accuracy and computational cost. | The conventional IRLS algorithms solve the standard sparse problem in this constrained form: In practice, the @math norm is replaced by a reweighted @math norm @cite_25 : The diagonal weight matrix @math in the @math -th iteration is computed from the solution of the current iteration @math , in particular, the diagonal elements @math . With the current weights @math , we can derive the closed-form solution for @math : The algorithm is summarized in Algorithm . It has been proven that the IRLS algorithm converges exponentially fast under mild conditions @cite_30 : where @math is a fixed constant with @math . However, this algorithm is rarely used in compressive sensing applications, especially for large-scale problems. 
That is because computing the inverse of @math takes @math if @math is a @math sampling matrix. Even with a higher convergence rate, traditional IRLS still cannot compete with the fastest first-order algorithms such as FISTA @cite_41 (some results have been shown in @cite_18 ). Moreover, none of the existing IRLS methods @cite_25 @cite_30 @cite_27 can solve overlapping group sparsity problems, which significantly limits their usage. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_41",
"@cite_27",
"@cite_25"
],
"mid": [
"2119883478",
"2952139899",
"2100556411",
"2122315118",
"2168745297"
],
"abstract": [
"Under certain conditions (known as the restricted isometry property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ R^N that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from y := Φx even though Φ^{-1}(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element in Φ^{-1}(y) of minimal ℓ_1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ^{-1}(y) with smallest ℓ_2(w)-norm. If x^{(n)} is the solution at iteration step n, then the new weight w^{(n)} is defined by w_i^{(n)} := [|x_i^{(n)}|^2 + ε_n^2]^{-1/2}, i = 1, ..., N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^{(n+1)} and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^{(n)} converges for all y, regardless of whether Φ^{-1}(y) contains a sparse vector. If there is a sparse vector in Φ^{-1}(y), then the limit is this sparse vector, and when x^{(n)} is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the \"heavier\" weight w^{(n)} = [|x^{(n)}",
"Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present from a general perspective optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted @math -penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.",
"We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.",
"The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that using ℓ_p minimization with p < 1 can do so with fewer measurements than with p = 1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a particular regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS, [2]). Improvements are also observed for the reweighted-ℓ_1 approach of [3]."
]
} |
1411.4246 | 2949543617 | Nuclear Magnetic Resonance (NMR) Spectroscopy is a widely used technique to predict the native structure of proteins. However, NMR machines are only able to report approximate and partial distances between pairs of atoms. To build the protein structure, one has to solve the Euclidean distance geometry problem given the incomplete interval distance data produced by NMR machines. In this paper, we propose a new genetic algorithm for solving the Euclidean distance geometry problem for protein structure prediction given sparse NMR data. Our genetic algorithm uses a greedy mutation operator to intensify the search, a twin removal technique for diversification in the population and a random restart method to recover from stagnation. On a standard set of benchmark datasets, our algorithm significantly outperforms standard genetic algorithms. | The Euclidean distance geometry problem and its variants are applied to many problems in various fields, including wireless sensor network localization @cite_8 , the inverse kinematics problem @cite_16 , multidimensional scaling @cite_9 , protein structure determination @cite_2 , etc. The variant of the MDGP in which the distances between all pairs @math and @math are known has a polynomial-time algorithm that produces an exact solution @cite_10 . Even when some of the pairwise distances are unknown, the problem is solvable by a linear-time algorithm @cite_17 . However, the variant of the MDGP with sparse and inaccurate data was shown to be NP-hard by Moré and Wu @cite_5 . A recent survey of computational methods applied to solve this variant of the MDGP can be found in @cite_6 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"1872902815",
"2001141328",
"2003250847",
"1856811566",
"2095172966",
"2139795879",
"1582520000",
"2110543129"
],
"abstract": [
"Evolving networks of ad-hoc wireless sensing nodes rely heavily on the ability to establish position information. The algorithms presented herein rely on range measurements between pairs of nodes and the a priori coordinates of sparsely located anchor nodes. Clusters of nodes surrounding anchor nodes cooperatively establish confident position estimates through assumptions, checks, and iterative refinements. Once established, these positions are propagated to more distant nodes, allowing the entire network to create an accurate map of itself. Major obstacles include overcoming inaccuracies in range measurements as great as ±50%, as well as the development of initial guesses for node locations in clusters with few or no anchor nodes. Solutions to these problems are presented and discussed, using position error as the primary metric. Algorithms are compared according to position error, scalability, and communication and computational requirements. Early simulations yield average position errors of 5% in the presence of both range and initial position inaccuracies.",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 10^6 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"Euclidean distance geometry is the study of Euclidean geometry based on the concept of distance. This is useful in several applications where the input data consist of an incomplete set of distances and the output is a set of points in Euclidean space realizing those given distances. We survey the theory of Euclidean distance geometry and its most important applications, with special emphasis on molecular conformation problems.",
"We study the performance of the code for the solution of distance geometry problems with lower and upper bounds on distance constraints. The code uses only a sparse set of distance constraints, while other algorithms tend to work with a dense set of constraints either by imposing additional bounds or by deducing bounds from the given bounds. Our computational results show that protein structures can be determined by solving a distance geometry problem with and that the approach based on is significantly more reliable and efficient than multi-starts with an optimization code.",
"Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry problem as a global minimization problem with special structure. We show that global smoothing techniques and a continuation approach for global optimization can be used to determine global solutions of this problem reliably and efficiently. The global continuation approach determines a global solution with less computational effort than is required by a standard multistart algorithm. Moreover, the continuation approach usually finds the global solution from any given starting point, while the multistart algorithm tends to fail.",
"In this paper we develop a set of inverse kinematics algorithms suitable for an anthropomorphic arm or leg. We use a combination of analytical and numerical methods to solve generalized inverse kinematics problems including position, orientation, and aiming constraints. Our combination of analytical and numerical methods results in faster and more reliable algorithms than conventional inverse Jacobian and optimization-based techniques. Additionally, unlike conventional numerical algorithms, our methods allow the user to interactively explore all possible solutions using an intuitive set of parameters that define the redundancy of the system.",
"",
"Nuclear magnetic resonance (NMR) structure modeling usually produces a sparse set of inter-atomic distances in protein. In order to calculate the three-dimensional structure of protein, current approaches need to estimate all other \"missing\" distances to build a full set of distances. However, the estimation step is costly and prone to introducing errors. In this report, we describe a geometric build-up algorithm for solving protein structure by using only a sparse set of inter-atomic distances. Such a sparse set of distances can be obtained by combining NMR data with our knowledge on certain bond lengths and bond angles. It can also include confident estimations on some \"missing\" distances. Our algorithm utilizes a simple geometric relationship between coordinates and distances. The coordinates for each atom are calculated by using the coordinates of previously determined atoms and their distances. We have implemented the algorithm and tested it on several proteins. Our results showed that our algorithm successfully determined the protein structures with sparse sets of distances. Therefore, our algorithm reduces the need of estimating the \"missing\" distances and promises a more efficient approach to NMR structure modeling."
]
} |
1411.4246 | 2949543617 | Nuclear Magnetic Resonance (NMR) Spectroscopy is a widely used technique to predict the native structure of proteins. However, NMR machines are only able to report approximate and partial distances between pairs of atoms. To build the protein structure, one has to solve the Euclidean distance geometry problem given the incomplete interval distance data produced by NMR machines. In this paper, we propose a new genetic algorithm for solving the Euclidean distance geometry problem for protein structure prediction given sparse NMR data. Our genetic algorithm uses a greedy mutation operator to intensify the search, a twin removal technique for diversification in the population and a random restart method to recover from stagnation. On a standard set of benchmark datasets, our algorithm significantly outperforms standard genetic algorithms. | Among the general-purpose methods, spatial branch-and-bound @cite_0 and variable neighborhood search (VNS) @cite_15 methods are not scalable @cite_3 . Smoothing-based methods like DGSOL @cite_2 @cite_5 also fail on large instances of the problem. In @cite_1 , VNS was combined with DGSOL approaches, which provided better results for larger instances but resulted in a slow algorithm. A combinatorial build-up algorithm was proposed in @cite_11 . It is important to note that all of these methods were tested on dense instances only. Among the other methods applied to this problem, graph decomposition methods @cite_14 and NLP formulations @cite_18 are notable. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"1975519273",
"2000284933",
"2034307786",
"",
"1978947328",
"1856811566",
"2095172966",
"96018061",
""
],
"abstract": [
"The molecule problem is that of determining the relative locations of a set of objects in Euclidean space relying only upon a sparse set of pairwise distance measurements. This NP-hard problem has applications in the determination of molecular conformation. The molecule problem can be naturally expressed as a continuous, global optimization problem, but it also has a rich combinatorial structure. This paper investigates how that structure can be exploited to simplify the optimization problem. In particular, we present a novel divide-and-conquer algorithm in which a large global optimization problem is replaced by a sequence of smaller ones. Since the cost of the optimization can grow exponentially with problem size, this approach holds the promise of a substantial improvement in performance. Our algorithmic development relies upon some recently published results in graph theory. We describe an implementation of this algorithm and report some results of its performance on a sample molecule.",
"We present a new iterative algorithm for the molecular distance geometry problem with inaccurate and sparse data, which is based on the solution of linear systems, maximum cliques, and a minimization of nonlinear least-squares function. Computational results with real protein structures are presented in order to validate our approach.",
"We discuss the geometrical interpretation of a well-known smoothing operator applied to the Molecular Distance Geometry Problem (MDGP), and we then describe a heuristic approach based on Variable Neighbourhood Search on the smoothed and original problem. This algorithm often manages to find solutions having higher accuracy than other methods. This is important as small differences in the objective function value may point to completely different 3D molecular structures.",
"",
"In this paper, we compare two different approaches to nonconvex global optimization. The first one is a deterministic spatial Branch-and-Bound algorithm, whereas the second approach is a Quasi Monte Carlo (QMC) variant of a stochastic multi level single linkage (MLSL) algorithm. Both algorithms apply to problems in a very general form and are not dependent on problem structure. The test suite we chose is fairly extensive in scope, in that it includes constrained and unconstrained problems, continuous and mixed-integer problems. The conclusion of the tests is that in general the QMC variant of the MLSL algorithm is generally faster, although in some instances the Branch-and-Bound algorithm outperforms it.",
"We study the performance of the code for the solution of distance geometry problems with lower and upper bounds on distance constraints. The code uses only a sparse set of distance constraints, while other algorithms tend to work with a dense set of constraints either by imposing additional bounds or by deducing bounds from the given bounds. Our computational results show that protein structures can be determined by solving a distance geometry problem with and that the approach based on is significantly more reliable and efficient than multi-starts with an optimization code.",
"Distance geometry problems arise in the determination of protein structure. We consider the case where only a subset of the distances between atoms is given and formulate this distance geometry problem as a global minimization problem with special structure. We show that global smoothing techniques and a continuation approach for global optimization can be used to determine global solutions of this problem reliably and efficiently. The global continuation approach determines a global solution with less computational effort than is required by a standard multistart algorithm. Moreover, the continuation approach usually finds the global solution from any given starting point, while the multistart algorithm tends to fail.",
"We report on the theory and implementation of a global optimization solver for general constrained nonlinear programming problems based on Variable Neighbourhood Search, and we give comparative computational results on several instances of continuous nonconvex problems. Compared to an efficient multi-start global optimization solver, the VNS solver proposed appears to be significantly faster.",
""
]
} |
1411.4006 | 2950076437 | In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset. This work is the core part of the winning solution of our CMU-Informedia team in TRECVID MED 2014 competition. | Recent research efforts have shown that combining multiple features, including static appearance features @cite_15 @cite_57 @cite_13 , motion features @cite_32 @cite_42 @cite_5 @cite_33 @cite_35 and acoustic features @cite_36 , yields good performance in event detection, as evidenced by the reports of the top-ranked teams in the TRECVID Multimedia Event Detection (MED) competition @cite_17 @cite_29 @cite_25 @cite_7 and research papers @cite_28 @cite_43 @cite_46 that have tackled this problem. 
By utilizing additional data to assist complex event detection, researchers have proposed using "video attributes" derived from other sources to facilitate event detection @cite_49 , or utilizing related exemplars when the training exemplars are very few @cite_2 . As we focus on improving the video representation in this paper, our new method can be readily fed into those frameworks to further improve their performance. | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_28",
"@cite_29",
"@cite_42",
"@cite_46",
"@cite_32",
"@cite_57",
"@cite_43",
"@cite_49",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"",
"",
"2097508275",
"2141939040",
"",
"",
"",
"",
"",
"",
"2067936711",
"2150979491",
"",
"2161969291",
"",
"",
""
],
"abstract": [
"",
"",
"",
"In this paper, we present recent experiments on using Artificial Neural Networks (ANNs), a new “delayed” approach to speech vs. non-speech segmentation, and extraction of large-scale pooling features (LSPF) for detecting “events” within consumer videos, using the audio channel only. An “event” is defined to be a sequence of observations in a video that can be directly observed or inferred. Ground truth is given by a semantic description of the event, and by a number of example videos. We describe and compare several algorithmic approaches, and report results on the 2013 TRECVID Multimedia Event Detection (MED) task, using arguably the largest such research set currently available. The presented system achieved the best results in most audio-only conditions. While the overall finding is that MFCC features perform best, we find that ANN as well as LSP features provide complementary information at various levels of temporal resolution. This paper provides analysis of both low-level and high-level features, investigating their relative contributions to overall system performance.",
"Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ∼45000 videos, our system showed the best performance among the 19 international teams.",
"",
"",
"",
"",
"",
"",
"Complex events essentially include human, scenes, objects and actions that can be summarized by visual attributes, so leveraging relevant attributes properly could be helpful for event detection. Many works have exploited attributes at image level for various applications. However, attributes at image level are possibly insufficient for complex event detection in videos due to their limited capability in characterizing the dynamic properties of video data. Hence, we propose to leverage attributes at video level (named as video attributes in this work), i.e., the semantic labels of external videos are used as attributes. Compared to complex event videos, these external videos contain simple contents such as objects, scenes and actions which are the basic elements of complex events. Specifically, building upon a correlation vector which correlates the attributes and the complex event, we incorporate video attributes latently as extra informative cues into the event detector learnt from complex event videos. Extensive experiments on a real-world large-scale dataset validate the efficacy of the proposed approach.",
"Compared to visual concepts such as actions, scenes and objects, complex event is a higher level abstraction of longer video sequences. For example, a \"marriage proposal\" event is described by multiple objects (e.g., ring, faces), scenes (e.g., in a restaurant, outdoor) and actions (e.g., kneeling down). The positive exemplars which exactly convey the precise semantic of an event are hard to obtain. It would be beneficial to utilize the related exemplars for complex event detection. However, the semantic correlations between related exemplars and the target event vary substantially as relatedness assessment is subjective. Two related exemplars can be about completely different events, e.g., in the TRECVID MED dataset, both bicycle riding and equestrianism are labeled as related to \"attempting a bike trick\" event. To tackle the subjectiveness of human assessment, our algorithm automatically evaluates how positive the related exemplars are for the detection of an event and uses them on an exemplar-specific basis. Experiments demonstrate that our algorithm is able to utilize related exemplars adaptively, and the algorithm gains good performance for complex event detection.",
"",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"",
"",
""
]
} |
1411.4006 | 2950076437 | In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset. This work is the core part of the winning solution of our CMU-Informedia team in TRECVID MED 2014 competition. | Secondly, when dealing with a domain-specific task with a small amount of training data, fine-tuning @cite_37 is an effective technique for adapting ImageNet pre-trained models to new tasks. However, the video-level event labels are rather coarse at the frame level, i.e., not all frames necessarily contain the semantic information of the event. If we use the coarse video-level label for each frame, performance is barely improved; this was verified by our preliminary experiments. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2102605133"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1411.4331 | 2950615031 | As a fundamental technique that concerns several vision tasks such as image parsing, action recognition and clothing retrieval, human pose estimation (HPE) has been extensively investigated in recent years. To achieve accurate and reliable estimation of the human pose, it is well-recognized that the clothing attributes are useful and should be utilized properly. Most previous approaches, however, require to manually annotate the clothing attributes and are therefore very costly. In this paper, we shall propose and explore a clothing attribute approach for HPE. Unlike previous approaches, our approach models the clothing attributes as latent variables and thus requires no explicit labeling for the clothing attributes. The inference of the latent variables are accomplished by utilizing the framework of latent structured support vector machines (LSSVM). We employ the strategy of to train the LSSVM model: In each iteration, one kind of variables (e.g., human pose or clothing attribute) are fixed and the others are optimized. Our extensive experiments on two real-world benchmarks show the state-of-the-art performance of our proposed approach. | As aforementioned, HPE is a difficult problem, especially in unconstrained scenes. Some researchers have studied the problem in the context of 3D scenes @cite_23 @cite_9 . The work of @cite_23 extended the popular 2D pictorial structure @cite_0 @cite_13 to 3D images and employed the new framework to model viewpoint, joint angles, etc. @cite_12 proposed a real-time algorithm for estimating the 3D human pose, striving to make the technique practical in real-world applications.
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_23",
"@cite_13",
"@cite_12"
],
"mid": [
"2052747804",
"2045798786",
"2171125807",
"2030536784",
"2060280062"
],
"abstract": [
"Recently, the emergence of Kinect systems has demonstrated the benefits of predicting an intermediate body part labeling for 3D human pose estimation, in conjunction with RGB-D imagery. The availability of depth information plays a critical role, so an important question is whether a similar representation can be developed with sufficient robustness in order to estimate 3D pose from RGB images. This paper provides evidence for a positive answer, by leveraging (a) 2D human body part labeling in images, (b) second-order label-sensitive pooling over dynamically computed regions resulting from a hierarchical decomposition of the body, and (c) iterative structured-output modeling to contextualize the process based on 3D pose estimates. For robustness and generalization, we take advantage of a recent large-scale 3D human motion capture dataset, Human3.6M[18] that also has human body part labeling annotations available with images. We provide extensive experimental studies where alternative intermediate representations are compared and report a substantial 33 error reduction over competitive discriminative baselines that regress 3D human pose against global HOG features.",
"The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of \"goodness\" of matching or detection.",
"We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching."
]
} |
1411.4331 | 2950615031 | As a fundamental technique that concerns several vision tasks such as image parsing, action recognition and clothing retrieval, human pose estimation (HPE) has been extensively investigated in recent years. To achieve accurate and reliable estimation of the human pose, it is well-recognized that the clothing attributes are useful and should be utilized properly. Most previous approaches, however, require to manually annotate the clothing attributes and are therefore very costly. In this paper, we shall propose and explore a clothing attribute approach for HPE. Unlike previous approaches, our approach models the clothing attributes as latent variables and thus requires no explicit labeling for the clothing attributes. The inference of the latent variables are accomplished by utilizing the framework of latent structured support vector machines (LSSVM). We employ the strategy of to train the LSSVM model: In each iteration, one kind of variables (e.g., human pose or clothing attribute) are fixed and the others are optimized. Our extensive experiments on two real-world benchmarks show the state-of-the-art performance of our proposed approach. | Most studies (including this work) on HPE focus on 2D static images. In the early works, human parts were often modeled by oriented templates. Although straightforward, oriented templates may not properly handle the fore-shortening of objects @cite_29 @cite_15 @cite_14 . In @cite_22 , an advanced representation scheme was proposed to model the oriented human parts. The new model is formulated as a mixture of non-oriented components, each of which is attributed with a "type". Interestingly, the new model can approximate fore-shortening by tuning the adjacent components in a spring structure.
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_22",
"@cite_15"
],
"mid": [
"2105990640",
"2112324405",
"",
"1997691213"
],
"abstract": [
"We consider the machine vision task of pose estimation from static images, specifically for the case of articulated objects. This problem is hard because of the large number of degrees of freedom to be estimated. Following a established line of research, pose estimation is framed as inference in a probabilistic model. In our experience however, the success of many approaches often lie in the power of the features. Our primary contribution is a novel casting of visual inference as an iterative parsing process, where one sequentially learns better and better features tuned to a particular image. We show quantitative results for human pose estimation on a database of over 300 images that suggest our algorithm is competitive with or surpasses the state-of-the-art. Since our procedure is quite general (it does not rely on face or skin detection), we also use it to estimate the poses of horses in the Weizmann database.",
"We analyze the use of kinematic constraints for articulated object tracking. Conditions for the occurrence of singularities in 3-D models are presented and their effects on tracking are characterized We describe a novel 2-D Scaled Prismatic Model (SPM) for figure registration. In contrast to 3-D kinematic models, the SPM has fewer singularity problems and does not require detailed knowledge of the 3-D kinematics. We fully characterize the singularities in the SPM and illustrate tracking through singularities using synthetic and real examples with 3-D and 2-D models. Our results demonstrate the significant benefits of the SPM in tracking with a single source of video.",
"",
"Pictorial structure (PS) models are extensively used for part-based recognition of scenes, people, animals and multi-part objects. To achieve tractability, the structure and parameterization of the model is often restricted, for example, by assuming tree dependency structure and unimodal, data-independent pairwise interactions. These expressivity restrictions fail to capture important patterns in the data. On the other hand, local methods such as nearest-neighbor classification and kernel density estimation provide non-parametric flexibility but require large amounts of data to generalize well. We propose a simple semi-parametric approach that combines the tractability of pictorial structure inference with the flexibility of non-parametric methods by expressing a subset of model parameters as kernel regression estimates from a learned sparse set of exemplars. This yields query-specific, image-dependent pose priors. We develop an effective shape-based kernel for upper-body pose similarity and propose a leave-one-out loss function for learning a sparse subset of exemplars for kernel regression. We apply our techniques to two challenging datasets of human figure parsing and advance the state-of-the-art (from 80 to 86 on the Buffy dataset [8]), while using only 15 of the training data as exemplars."
]
} |
1411.4331 | 2950615031 | As a fundamental technique that concerns several vision tasks such as image parsing, action recognition and clothing retrieval, human pose estimation (HPE) has been extensively investigated in recent years. To achieve accurate and reliable estimation of the human pose, it is well-recognized that the clothing attributes are useful and should be utilized properly. Most previous approaches, however, require to manually annotate the clothing attributes and are therefore very costly. In this paper, we shall propose and explore a clothing attribute approach for HPE. Unlike previous approaches, our approach models the clothing attributes as latent variables and thus requires no explicit labeling for the clothing attributes. The inference of the latent variables are accomplished by utilizing the framework of latent structured support vector machines (LSSVM). We employ the strategy of to train the LSSVM model: In each iteration, one kind of variables (e.g., human pose or clothing attribute) are fixed and the others are optimized. Our extensive experiments on two real-world benchmarks show the state-of-the-art performance of our proposed approach. | Some work has tried to incorporate "side" techniques, e.g., image segmentation, to enhance HPE. In @cite_28 , a variety of image features, e.g., boundary responses and region segmentations, were utilized to produce more reliable HPE results. In @cite_10 , the background was modeled as a Gaussian distribution. In @cite_5 , the authors presented a two-stage approximate scheme to improve the accuracy of estimating lower arms in videos. The algorithm was constrained to output candidates with high contrast to their surroundings.
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_10"
],
"mid": [
"1540144755",
"2157982431",
"2054990548"
],
"abstract": [
"We address the problem of articulated human pose estimation by learning a coarse-to-fine cascade of pictorial structure models. While the fine-level state-space of poses of individual parts is too large to permit the use of rich appearance models, most possibilities can be ruled out by efficient structured models at a coarser scale. We propose to learn a sequence of structured models at different pose resolutions, where coarse models filter the pose space for the next level via their max-marginals. The cascade is trained to prune as much as possible while preserving true poses for the final level pictorial structure model. The final level uses much more expensive segmentation, contour and shape features in the model for the remaining filtered set of candidates. We evaluate our framework on the challenging Buffy and PASCAL human pose datasets, improving the state-of-the-art.",
"In this paper, we present a method for estimating articulated human poses in videos. We cast this as an optimization problem defined on body parts with spatio-temporal links between them. The resulting formulation is unfortunately intractable and previous approaches only provide approximate solutions. Although such methods perform well on certain body parts, e.g., head, their performance on lower arms, i.e., elbows and wrists, remains poor. We present a new approximate scheme with two steps dedicated to pose estimation. First, our approach takes into account temporal links with subsequent frames for the less-certain parts, namely elbows and wrists. Second, our method decomposes poses into limbs, generates limb sequences across time, and recomposes poses by mixing these body part sequences. We introduce a new dataset \"Poses in the Wild\", which is more challenging than the existing ones, with sequences containing background clutter, occlusions, and severe camera motion. We experimentally compare our method with recent approaches on this new dataset as well as on two other benchmark datasets, and show significant improvement.",
"In this paper we present a compositional and-or graph grammar model for human pose estimation. Our model has three distinguishing features: (i) large appearance differences between people are handled compositionally by allowing parts or collections of parts to be substituted with alternative variants, (ii) each variant is a sub-model that can define its own articulated geometry and context-sensitive compatibility with neighboring part variants, and (iii) background region segmentation is incorporated into the part appearance models to better estimate the contrast of a part region from its surroundings, and improve resilience to background clutter. The resulting integrated framework is trained discriminatively in a max-margin framework using an efficient and exact inference algorithm. We present experimental evaluation of our model on two popular datasets, and show performance improvements over the state-of-art on both benchmarks."
]
} |
1411.4166 | 2952828476 | Vector space word representations are learned from distributional information of words in large corpora. Although such statistics are semantically informative, they disregard the valuable information that is contained in semantic lexicons such as WordNet, FrameNet, and the Paraphrase Database. This paper proposes a method for refining vector space representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations, and it makes no assumptions about how the input vectors were constructed. Evaluated on a battery of standard lexical semantic evaluation tasks in several languages, we obtain substantial improvements starting with a variety of word vector models. Our refinement method outperforms prior techniques for incorporating semantic lexicons into the word vector training algorithms. | The use of lexical semantic information in training word vectors has been limited. Recently, word similarity knowledge @cite_0 @cite_26 and word relational knowledge @cite_4 @cite_15 have been used to improve the word2vec embeddings in a joint training model similar to our regularization approach. In latent semantic analysis, the word cooccurrence matrix can be constructed to incorporate relational information like antonym specific polarity induction @cite_25 and multi-relational latent semantic analysis @cite_44 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_0",
"@cite_44",
"@cite_15",
"@cite_25"
],
"mid": [
"1953971037",
"2125076245",
"2250930514",
"2150159277",
"68293321",
"2101251342"
],
"abstract": [
"We investigate the hypothesis that word representations ought to incorporate both distributional and relational semantics. To this end, we employ the Alternating Direction Method of Multipliers (ADMM), which flexibly optimizes a distributional objective on raw text and a relational objective on WordNet. Preliminary results on knowledge base completion, analogy tests, and parsing show that word representations trained on both objectives can give improvements in some cases.",
"Representing words into vectors in continuous space can form up a potentially powerful basis to generate high-quality textual features for many text mining and natural language processing tasks. Some recent efforts, such as the skip-gram model, have attempted to learn word representations that can capture both syntactic and semantic information among text corpus. However, they still lack the capability of encoding the properties of words and the complex relationships among words very well, since text itself often contains incomplete and ambiguous information. Fortunately, knowledge graphs provide a golden mine for enhancing the quality of learned word representations. In particular, a knowledge graph, usually composed by entities (words, phrases, etc.), relations between entities, and some corresponding meta information, can supply invaluable relational knowledge that encodes the relationship between entities as well as categorical knowledge that encodes the attributes or properties of entities. Hence, in this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality. Specifically, we build the relational knowledge and the categorical knowledge into two separate regularization functions, and combine both of them with the original objective function of the skip-gram model. By solving this combined optimization problem using back propagation neural networks, we can obtain word representations enhanced by the knowledge graph. Experiments on popular text mining and natural language processing tasks, including analogical reasoning, word similarity, and topic prediction, have all demonstrated that our model can significantly improve the quality of word representations.",
"Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements.",
"We present Multi-Relational Latent Semantic Analysis (MRLSA) which generalizes Latent Semantic Analysis (LSA). MRLSA provides an elegant approach to combining multiple relations between words by constructing a 3-way tensor. Similar to LSA, a lowrank approximation of the tensor is derived using a tensor decomposition. Each word in the vocabulary is thus represented by a vector in the latent semantic space and each relation is captured by a latent square matrix. The degree of two words having a specific relation can then be measured through simple linear algebraic operations. We demonstrate that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves stateof-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.",
"The basis of applying deep learning to solve natural language processing tasks is to obtain high-quality distributed representations of words, i.e., word embeddings, from large amounts of text data. However, text itself usually contains incomplete and ambiguous information, which makes necessity to leverage extra knowledge to understand it. Fortunately, text itself already contains well-defined morphological and syntactic knowledge; moreover, the large amount of texts on the Web enable the extraction of plenty of semantic knowledge. Therefore, it makes sense to design novel deep learning algorithms and systems in order to leverage the above knowledge to compute more effective word embeddings. In this paper, we conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings. Our study explores these types of knowledge to define new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning, respectively. Experiments on an analogical reasoning task, a word similarity task, and a word completion task have all demonstrated that knowledge-powered deep learning can enhance the effectiveness of word embedding.",
"Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus -- a word sense along with its synonyms and antonyms -- is treated as a \"document,\" and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. We evaluate this procedure with the Graduate Record Examination questions of (, 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure."
]
} |
1411.4166 | 2952828476 | Vector space word representations are learned from distributional information of words in large corpora. Although such statistics are semantically informative, they disregard the valuable information that is contained in semantic lexicons such as WordNet, FrameNet, and the Paraphrase Database. This paper proposes a method for refining vector space representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations, and it makes no assumptions about how the input vectors were constructed. Evaluated on a battery of standard lexical semantic evaluation tasks in several languages, we obtain substantial improvements starting with a variety of word vector models. Our refinement method outperforms prior techniques for incorporating semantic lexicons into the word vector training algorithms. | The approach we propose is conceptually similar to previous work that uses graph structures to propagate information among semantic concepts @cite_9 @cite_18 . Graph-based belief propagation has also been used to induce POS tags @cite_34 @cite_31 and semantic frame associations @cite_24 . In those efforts, labels for unknown words were inferred using a method similar to ours. Broadly, graph-based semi-supervised learning @cite_9 @cite_38 has been applied to machine translation @cite_19 , unsupervised semantic role induction @cite_27 , semantic document modeling @cite_36 , language generation @cite_22 and sentiment analysis @cite_13 . | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_22",
"@cite_36",
"@cite_9",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"2138288827",
"2135725275",
"2096646663",
"2145769341",
"1497443639",
"2117391079",
"2151075664",
"2111024941",
"2142523187",
"1709989312",
"2071085454"
],
"abstract": [
"Graph-based semi-supervised learning (SSL) algorithms have been successfully used to extract class-instance pairs from large unstructured and structured text collections. However, a careful comparison of different graph-based SSL algorithms on that task has been lacking. We compare three graph-based SSL algorithms for class-instance acquisition on a variety of graphs constructed from different domains. We find that the recently proposed MAD algorithm is the most effective. We also show that class-instance extraction can be significantly improved by adding semantic information in the form of instance-attribute edges derived from an independently developed knowledge base. All of our code and data will be made publicly available to encourage reproducible research in this area.",
"Graph-based learning provides a useful approach for modeling data in classification problems. In this modeling scenario, the relationship between labeled and unlabeled data impacts the construction and performance of classifiers and, therefore, a semisupervised learning framework is adopted. We propose a graph classifier based on kernel smoothing. A regularization framework is also introduced and it is shown that the proposed classifier optimizes certain loss functions. Its performance is assessed on several synthetic and real benchmark data sets with good results, especially in settings where only a small fraction of the data are labeled.",
"This article describes a new approach to the generation of referring expressions. We propose to formalize a scene (consisting of a set of objects with various properties and relations) as a labeled directed graph and describe content selection (which properties to include in a referring expression) as a subgraph construction problem. Cost functions are used to guide the search process and to give preference to some solutions over others. The current approach has four main advantages: (1) Graph structures have been studied extensively, and by moving to a graph perspective we get direct access to the many theories and algorithms for dealing with graphs; (2) many existing generation algorithms can be reformulated in terms of graphs, and this enhances comparison and integration of the various approaches; (3) the graph perspective allows us to solve a number of problems that have plagued earlier algorithms for the generation of referring expressions; and (4) the combined use of graphs and cost functions paves the way for an integration of rule-based generation techniques with more recent stochastic approaches.",
"We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.",
"In traditional machine learning approaches to classification, one uses only a labeled set to train the classifier. Labeled instances however are often difficult, expensive, or time consuming to obtain, as they require the efforts of experienced human annotators. Meanwhile unlabeled data may be relatively easy to collect, but there has been few ways to use them. Semi-supervised learning addresses this problem by using large amount of unlabeled data, together with the labeled data, to build better classifiers. Because semi-supervised learning requires less human effort and gives higher accuracy, it is of great interest both in theory and in practice. We present a series of novel semi-supervised learning approaches arising from a graph representation, where labeled and unlabeled instances are represented as vertices, and edges encode the similarity between instances. They address the following questions: How to use unlabeled data? (label propagation); What is the probabilistic interpretation? (Gaussian fields and harmonic functions); What if we can choose labeled data? (active learning); How to construct good graphs? (hyperparameter learning); How to work with kernel machines like SVM? (graph kernels); How to handle complex data like sequences? (kernel conditional random fields); How to handle scalability and induction? (harmonic mixtures). An extensive literature review is included at the end.",
"We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15 absolute improvement in frame identification accuracy and over 13 absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline.",
"Current phrase-based statistical machine translation systems process each test sentence in isolation and do not enforce global consistency constraints, even though the test data is often internally consistent with respect to topic or style. We propose a new consistency model for machine translation in the form of a graph-based semi-supervised learning algorithm that exploits similarities between training and test data and also similarities between different test sentences. The algorithm learns a regression function jointly over training and test data and uses the resulting scores to rerank translation hypotheses. Evaluation on two travel expression translation tasks demonstrates improvements of up to 2.6 BLEU points absolute and 2.8 in PER.",
"In this paper we present a method for unsupervised semantic role induction which we formalize as a graph partitioning problem. Argument instances of a verb are represented as vertices in a graph whose edge weights quantify their role-semantic similarity. Graph partitioning is realized with an algorithm that iteratively assigns vertices to clusters based on the cluster assignments of neighboring vertices. Our method is algorithmically and conceptually simple, especially with respect to how problem-specific knowledge is incorporated into the model. Experimental results on the CoNLL 2008 benchmark dataset demonstrate that our model is competitive with other unsupervised approaches in terms of F1 whilst attaining significantly higher cluster purity.",
"We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-, 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.",
"We describe a new scalable algorithm for semi-supervised training of conditional random fields (CRF) and its application to part-of-speech (POS) tagging. The algorithm uses a similarity graph to encourage similar n-grams to have similar POS tags. We demonstrate the efficacy of our approach on a domain adaptation task, where we assume that we have access to large amounts of unlabeled data from the target domain, but no additional labeled data. The similarity graph is used during training to smooth the state posteriors on the target domain. Standard inference can be used at test time. Our approach is able to scale to very large problems and yields significantly improved target domain accuracy.",
"We present a graph-based semi-supervised learning algorithm to address the sentiment analysis task of rating inference. Given a set of documents (e.g., movie reviews) and accompanying ratings (e.g., \"4 stars\"), the task calls for inferring numerical ratings for unlabeled documents based on the perceived sentiment expressed by their text. In particular, we are interested in the situation where labeled data is scarce. We place this task in the semi-supervised setting and demonstrate that considering unlabeled reviews in the learning process can improve rating-inference performance. We do so by creating a graph on both labeled and unlabeled data to encode certain assumptions for this task. We then solve an optimization problem to obtain a smooth rating function over the whole graph. When only limited labeled data is available, this method achieves significantly better predictive accuracy over other methods that ignore the unlabeled examples during training."
]
} |
1411.4464 | 209511213 | In this paper, we propose a fast fully convolutional neural network (FCNN) for crowd segmentation. By replacing the fully connected layers in CNN with 1 by 1 convolution kernels, FCNN takes whole images as inputs and directly outputs segmentation maps by one pass of forward propagation. It has the property of translation invariance like patch-by-patch scanning but with much lower computation cost. Once FCNN is learned, it can process input images of any sizes without warping them to a standard size. These attractive properties make it extendable to other general image segmentation problems. Based on FCNN, a multi-stage deep learning is proposed to integrate appearance and motion cues for crowd segmentation. Both appearance filters and motion filters are pretrained stage-by-stage and then jointly optimized. Different combination methods are investigated. The effectiveness of our approach and component-wise analysis are evaluated on two crowd segmentation datasets created by us, which include image frames from 235 and 11 scenes, respectively. They are currently the largest crowd segmentation datasets and will be released to the public. | A number of methods have been proposed for crowd segmentation in recent years. It is typically achieved via background subtraction and motion segmentation @cite_15 @cite_10 @cite_17 @cite_5 , which usually require a static camera view or fixed pedestrian motion patterns. Some approaches based on pedestrian detection and tracking results @cite_39 @cite_4 @cite_1 @cite_9 usually perform poorly on highly crowded scenes due to severe occlusions. Combining multiple visual cues into crowd segmentation has also been investigated by using motion and shape information jointly @cite_37 @cite_17 . Most of these works require training and testing on the same scene, which is not applicable for real-world cross-scene segmentation. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_39",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2135347708",
"2161969291",
"2096246546",
"2096349671",
"2133986780",
"1987927068",
"2130293653",
"2162616721",
"2162411030"
],
"abstract": [
"The main focus of this work is the integration of feature grouping and model based segmentation into one consistent framework. The algorithm is based on partitioning a given set of image features using a likelihood function that is parameterized on the shape and location of potential individuals in the scene. Using a variant of the EM formulation, maximum likelihood estimates of both the model parameters and the grouping are obtained simultaneously. The resulting algorithm performs global optimization and generates accurate results even when decisions can not be made using local context alone. An important feature of the algorithm is that the number of people in the scene is not modeled explicitly. As a result no prior knowledge or assumed distributions are required. The approach is shown to be robust with respect to partial occlusion, shadows, clutter, and can operate over a large range of challenging view angles including those that are parallel to the ground plane. Comparisons with existing crowd segmentation systems are made and the utility of coupling crowd segmentation with a temporal tracking system is demonstrated.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"Segmentation and tracking of multiple humans in crowded situations is made difficult by interobject occlusion. We propose a model-based approach to interpret the image observations by multiple partially occluded human hypotheses in a Bayesian framework. We define a joint image likelihood for multiple humans based on the appearance of the humans, the visibility of the body obtained by occlusion reasoning, and foreground background separation. The optimal solution is obtained by using an efficient sampling method, data-driven Markov chain Monte Carlo (DDMCMC), which uses image observations for proposal probabilities. Knowledge of various aspects, including human shape, camera model, and image cues, are integrated in one theoretically sound framework. We present experimental results and quantitative evaluation, demonstrating that the resulting approach is effective for very challenging data.",
"In this paper, we address the problem of detecting pedestrians in crowded real-world scenes with severe overlaps. Our basic premise is that this problem is too difficult for any type of model or feature alone. Instead, we present an algorithm that integrates evidence in multiple iterations and from different sources. The core part of our method is the combination of local and global cues via probabilistic top-down segmentation. Altogether, this approach allows examining and comparing object hypotheses with high precision down to the pixel level. Qualitative and quantitative results on a large data set confirm that our method is able to reliably detect pedestrians in crowded scenes, even when they overlap and partially occlude each other. In addition, the flexible nature of our approach allows it to operate on very small training sets.",
"This paper describes a pedestrian detection system that integrates image intensity information with motion information. We use a detection style algorithm that scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to take advantage of both motion and appearance information to detect a walking person. Past approaches have built detectors based on motion information or detectors based on appearance information, but ours is the first to combine both sources of information in a single detector. The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20 × 15 pixels), and has a very low false positive rate. Our approach builds on the detection work of Viola and Jones. Novel contributions of this paper include: (i) development of a representation of image motion which is extremely efficient, and (ii) implementation of a state of the art pedestrian detection system which operates on low resolution images under difficult conditions (such as rain and snow).",
"We propose a joint foreground-background mixture model (FBM) that simultaneously performs background estimation and motion segmentation in complex dynamic scenes. Our FBM consist of a set of location-specific dynamic texture (DT) components, for modeling local background motion, and set of global DT components, for modeling consistent foreground motion. We derive an EM algorithm for estimating the parameters of the FBM. We also apply spatial constraints to the FBM using an Markov random field grid, and derive a corresponding variational approximation for inference. Unlike existing approaches to background subtraction, our FBM does not require a manually selected threshold or a separate training video. Unlike existing motion segmentation techniques, our FBM can segment foreground motions over complex background with mixed motions, and detect stopped objects. Since most dynamic scene datasets only contain videos with a single foreground object over a simple background, we develop a new challenging dataset with multiple foreground objects over complex dynamic backgrounds. In experiments, we show that jointly modeling the background and foreground segments with FBM yields significant improvements in accuracy on both background estimation and motion segmentation, compared to state-of-the-art methods.",
"Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and also to simultaneously select the appropriate number of components for each pixel.",
"A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.",
"This paper presents a fast, accurate, and novel method for the problem of estimating the number of humans and their positions from background differenced images obtained from a single camera where inter-human occlusion is significant. The problem is challenging firstly because the state space formed by the number, positions, and articulations of people is large. Secondly, in spite of many advances in background maintenance and change detection, background differencing remains a noisy and imprecise process, and its output is far from ideal: holes, fill-ins, irregular boundaries etc. pose additional challenges for our \"mid-level\" problem of segmenting it to localize humans. We propose a novel example-based algorithm which maps the global shape feature by Fourier descriptors to various configurations of humans directly. We use locally weighted averaging to interpolate for the best possible candidate configuration. The inherent ambiguity resulting from the lack of depth and layer information in the background difference images is mitigated by the use of dynamic programming, which finds the trajectory in state space that best explains the evolution of the projected shapes."
]
} |
1411.4464 | 209511213 | In this paper, we propose a fast fully convolutional neural network (FCNN) for crowd segmentation. By replacing the fully connected layers in CNN with 1 by 1 convolution kernels, FCNN takes whole images as inputs and directly outputs segmentation maps by one pass of forward propagation. It has the property of translation invariance like patch-by-patch scanning but with much lower computation cost. Once FCNN is learned, it can process input images of any sizes without warping them to a standard size. These attractive properties make it extendable to other general image segmentation problems. Based on FCNN, a multi-stage deep learning is proposed to integrate appearance and motion cues for crowd segmentation. Both appearance filters and motion filters are pretrained stage-by-stage and then jointly optimized. Different combination methods are investigated. The effectiveness of our approach and component-wise analysis are evaluated on two crowd segmentation datasets created by us, which include image frames from 235 and 11 scenes, respectively. They are currently the largest crowd segmentation datasets and will be released to the public. | Deep neural networks have been widely deployed in general image segmentation or scene labeling tasks. The traditional methods for image segmentation are patch-by-patch scanning @cite_13 @cite_20 @cite_22 and fully-connected layer regression @cite_25 , which require a fixed size of input and output. | {
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_20",
"@cite_25"
],
"mid": [
"2022508996",
"",
"1546771929",
"2153410696"
],
"abstract": [
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"",
"The goal of the scene labeling task is to assign a class label to each pixel in an image. To ensure a good visual coherence and a high class accuracy, it is essential for a model to capture long range (pixel) label dependencies in images. In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach that consists of a recurrent convolutional neural network which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation technique nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"We propose a new Deep Decompositional Network (DDN) for parsing pedestrian images into semantic regions, such as hair, head, body, arms, and legs, where the pedestrians can be heavily occluded. Unlike existing methods based on template matching or Bayesian inference, our approach directly maps low-level visual features to the label maps of body parts with DDN, which is able to accurately estimate complex pose variations with good robustness to occlusions and background clutters. DDN jointly estimates occluded regions and segments body parts by stacking three types of hidden layers: occlusion estimation layers, completion layers, and decomposition layers. The occlusion estimation layers estimate a binary mask, indicating which part of a pedestrian is invisible. The completion layers synthesize low-level features of the invisible part from the original features and the occlusion mask. The decomposition layers directly transform the synthesized visual features to label maps. We devise a new strategy to pre-train these hidden layers, and then fine-tune the entire network using the stochastic gradient descent. Experimental results show that our approach achieves better segmentation accuracy than the state-of-the-art methods on pedestrian images with or without occlusions. Another important contribution of this paper is that it provides a large scale benchmark human parsing dataset that includes 3,673 annotated samples collected from 171 surveillance videos. It is 20 times larger than existing public datasets."
]
} |
1411.3949 | 2950928468 | Evacuee routing algorithms in emergency typically adopt one single criterion to compute desired paths and ignore the specific requirements of users caused by different physical strength, mobility and level of resistance to hazard. In this paper, we present a quality of service (QoS) driven multi-path routing algorithm to provide diverse paths for different categories of evacuees. This algorithm borrows the concept of Cognitive Packet Network (CPN), which is a flexible protocol that can rapidly solve optimal solution for any user-defined goal function. Spatial information regarding the location and spread of hazards is taken into consideration to avoid that evacuees be directed towards hazardous zones. Furthermore, since previous emergency navigation algorithms are normally insensitive to sudden changes in the hazard environment such as abrupt congestion or injury of civilians, evacuees are dynamically assigned to several groups to adapt their course of action with regard to their on-going physical condition and environments. Simulation results indicate that the proposed algorithm which is sensitive to the needs of evacuees produces better results than the use of a single metric. Simulations also show that the use of dynamic grouping to adjust the evacuees' category and routing algorithms with regard for their on-going health conditions and mobility, can achieve higher survival rates. | Disaster management and building evacuation can improve significantly with the help of IT solutions. Initial research in this field was actually driven by defence applications @cite_16 including enhanced reality simulators @cite_24 and evacuation models that incorporated models of human mobility and behaviour @cite_5 . Recent survey articles @cite_8 @cite_9 can assist in selecting research directions with agent-based models that offer some level of realism by representing each individual evacuee as an agent that follows specific goals. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_24",
"@cite_5",
"@cite_16"
],
"mid": [
"2602778791",
"2006908712",
"2042067389",
"2049104168",
"2044871439"
],
"abstract": [
"",
"This paper surveys recent research on the use of sensor networks, communications and computer systems to enhance the human outcome of emergency situations. Areas covered include sensing, communication with evacuees and emergency personnel, path finding algorithms for safe evacuation, simulation and prediction, and decision tools. The systems being considered are a special instance of real-time cyber-physical-human systems that have become a crucial component of all large scale physical infrastructures such as buildings, campuses, sports and entertainment venues, and transportation hubs.",
"In many critical applications such as airport operations (for capacity planning), military simulations (for tactical training and planning), and medical simulations (for the planning of medical treatment and surgical operations), it is very useful to conduct simulations within physically accurate and visually realistic settings that are represented by real video imaging sequences. Furthermore, it is important that the simulated entities conduct autonomous actions which are realistic and which follow plans of action or intelligent behavior in reaction to current situations. We describe the research we have conducted to incorporate synthetic objects in a visually realistic manner in video sequences representing a real scene. We also discuss how the synthetic objects can be designed to conduct intelligent behavior within an augmented reality setting. The paper discusses both the computer vision aspects that we have addressed and solved, and the issues related to the insertion of intelligent autonomous objects within an augmented reality simulation.",
"Computer based analysis of evacuation can be performed using one of three different approaches, namely optimization, simulation and risk assessment. Furthermore, within each approach different means of representing the enclosure, the population and the behaviour of the population are possible. The myriad of approaches that are available has led to the development of some 22 different evacuation models. This review attempts to describe each of the modelling approaches adopted and critically review the inherent capabilities of each approach. The review is based on available published literature.",
"Research on demining includes many different aspects, and in particular the design of efficient and intelligent strategies for (1) determining regions of interest using a variety of sensors, (2) detecting and classifying mines, and (3) searching for mines by autonomous agents. This paper discusses strategies for directing autonomous search based on spatio-temporal distributions. We discuss a model for search assuming that the environment is static, except for the effect of identifying mine locations. Algorithms are designed and compared for autonomously directing a robot, in the case where a single search engine carrying a single sensor."
]
} |
1411.3949 | 2950928468 | Evacuee routing algorithms in emergency typically adopt one single criterion to compute desired paths and ignore the specific requirements of users caused by different physical strength, mobility and level of resistance to hazard. In this paper, we present a quality of service (QoS) driven multi-path routing algorithm to provide diverse paths for different categories of evacuees. This algorithm borrows the concept of Cognitive Packet Network (CPN), which is a flexible protocol that can rapidly solve optimal solution for any user-defined goal function. Spatial information regarding the location and spread of hazards is taken into consideration to avoid that evacuees be directed towards hazardous zones. Furthermore, since previous emergency navigation algorithms are normally insensitive to sudden changes in the hazard environment such as abrupt congestion or injury of civilians, evacuees are dynamically assigned to several groups to adapt their course of action with regard to their on-going physical condition and environments. Simulation results indicate that the proposed algorithm which is sensitive to the needs of evacuees produces better results than the use of a single metric. Simulations also show that the use of dynamic grouping to adjust the evacuees' category and routing algorithms with regard for their on-going health conditions and mobility, can achieve higher survival rates. | Research then moved further to the development of complex Emergency Cyber-Physical-Human systems to direct evacuees to exits in real time @cite_27 with sensor nodes (SNs) responsible for collecting hazard information, while the decision subsystem composed of decision nodes (DNs) provides advice to evacuees through visual indicators or portable devices. @cite_29 implement a WSN consisting of sensors that continuously monitor the environment and distribute a danger-level map across the network.
Optimization methods have been suggested @cite_12 using distributed decision making with random neural networks @cite_32 @cite_25 to overcome the huge complexity of decision making for a large number of agents, exploiting spatial information to select exit routes and to decide on the appropriate allocation of rescuers and technical assets. | {
"cite_N": [
"@cite_29",
"@cite_32",
"@cite_27",
"@cite_25",
"@cite_12"
],
"mid": [
"2039252979",
"2122376145",
"2150843819",
"",
"2033118756"
],
"abstract": [
"We develop distributed algorithms for self-organizing sensor networks that respond to directing a target through a region. The sensor network models the danger levels sensed across its area and has the ability to adapt to changes. It represents the dangerous areas as obstacles. A protocol that combines the artificial potential field of the sensors with the goal location for the moving object guides the object incrementally across the network to the goal, while maintaining the safest distance to the danger areas. We give the analysis to the protocol and report on hardware experiments using a physical sensor network consisting of Mote sensors.",
"Large-scale distributed systems, such as natural neuronal and artificial systems, have many local interconnections, but they often also have the ability to propagate information very fast over relatively large distances. Mechanisms that enable such behavior include very long physical signaling paths and possibly saccades of synchronous behavior that may propagate across a network. This letter studies the modeling of such behaviors in neuronal networks and develops a related learning algorithm. This is done in the context of the random neural network (RNN), a probabilistic model with a well-developed mathematical theory, which was inspired by the apparently stochastic spiking behavior of certain natural neuronal systems. Thus, we develop an extension of the RNN to the case when synchronous interactions can occur, leading to synchronous firing by large ensembles of cells. We also present an O(N3) gradient descent learning algorithm for an N-cell recurrent network having both conventional excitatory-inhibitory interactions and synchronous interactions. Finally, the model and its learning algorithm are applied to a resource allocation problem that is NP-hard and requires fast approximate decisions.",
"The evacuation of a building is a challenging problem, since the evacuees most of the times do not know or do not follow the optimal evacuation route. Especially during an ongoing hazard present in the building, finding the best evacuation route becomes harder as the conditions along the paths change in the course of the evacuation procedure. In this paper we propose a distributed system that will compute the best evacuation routes in real-time, while a hazard is spreading inside the building. The system is composed of a network of decision nodes and sensor nodes, positioned in specific locations inside the building. The recommendations of the decision nodes are computed in a distributed manner, at each of the decision nodes, which then communicate them to evacuees or rescue personnel located in their vicinity. We evaluate our proposed system in various emergency scenarios, using a multi-agent simulation platform for Building Evacuation. Our results indicate that the presence of the system improves the outcome of the evacuation with respect to the evacuation time and the injury level of the evacuees.",
"",
"Emergency rescues require that first responders provide support to evacuate injured and other civilians who are obstructed by the hazards. In this case, the emergency personnel can take actions strategically in order to rescue people maximally, efficiently and quickly. The paper studies the effectiveness of a random neural network (RNN)-based task assignment algorithm involving optimally matching emergency personnel and injured civilians, so that the emergency personnel can aid trapped people to move towards evacuation exits in real-time. The evaluations are run on a decision support evacuation system using the Distributed Building Evacuation Simulator (DBES) multi-agent platform in various emergency scenarios. The simulation results indicate that the RNN-based task assignment algorithm provides a near-optimal solution to resource allocation problems, which avoids resource wastage and improves the efficiency of the emergency rescue process."
]
} |
1411.3949 | 2950928468 | Evacuee routing algorithms in emergencies typically adopt a single criterion to compute desired paths and ignore the specific requirements of users caused by different physical strength, mobility and level of resistance to hazard. In this paper, we present a quality of service (QoS) driven multi-path routing algorithm to provide diverse paths for different categories of evacuees. This algorithm borrows the concept of Cognitive Packet Network (CPN), which is a flexible protocol that can rapidly compute an optimal solution for any user-defined goal function. Spatial information regarding the location and spread of hazards is taken into consideration to avoid directing evacuees towards hazardous zones. Furthermore, since previous emergency navigation algorithms are normally insensitive to sudden changes in the hazard environment such as abrupt congestion or injury of civilians, evacuees are dynamically assigned to several groups to adapt their course of action with regard to their on-going physical condition and environments. Simulation results indicate that the proposed algorithm, which is sensitive to the needs of evacuees, produces better results than the use of a single metric. Simulations also show that the use of dynamic grouping to adjust the evacuees' category and routing algorithms with regard to their on-going health conditions and mobility can achieve higher survival rates. | The notion of "effective length" @cite_27 , calculated as the product of the physical length and the hazard intensity, together with Dijkstra's algorithm can compute the best path to exits @cite_0 @cite_18 , and can include @cite_22 information about the spatial hazard.
The "Uniformity principle" @cite_14 is also useful in showing that proper allocation of evacuees to routes, such that all exit routes have the same clearance time, is essential in minimizing evacuation time @cite_6 , while in @cite_2 @cite_4 network flow models mimic evacuation planning problems and convert the original network to time-expanded networks. To reduce the high computational cost of these linear programming methods, in @cite_26 the Cognitive Packet Network is used for route discovery. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_26",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_2"
],
"mid": [
"2462610560",
"2332141129",
"",
"",
"2951006828",
"2074026287",
"1874038193",
"2150843819",
"2027465986"
],
"abstract": [
"Information systems that provide decision support in an efficient and timely manner can prove beneficial for emergency response operations. In this paper we propose the use of a system that provides movement decision support to evacuees by directing them through the shortest or less hazardous routes to the exit and evaluate it with a specialised software platform that we have developed for simulation of disasters in buildings. The system operates in a distributed manner, and computes the best evacuation routes in real-time while a hazard is spreading inside the building. It is composed of a network of decision nodes and sensor nodes, positioned in specific locations inside the building. The recommendations of the decision nodes are computed in a distributed manner, at each of the decision nodes, which then communicate them to evacuees or rescue personnel located in their vicinity. We use a multi-agent simulation platform for Building Evacuation that we developed, in order to evaluate our proposed system in various emergency scenarios. Our simulation results show that the overall outcome of the evacuation procedure is improved when the decision support system is in operation.",
"This paper establishes what might be called a \"uniformity principle\" for building evacuation problems. The principle may be stated as follows: given a building for which each occupant has reasonable access to every evacuation route, if the building is evacuated in minimum time, then the allocation of evacuees to routes is such that the route evacuation times are all the same. That is, there is a uniformity of route evacuation times. Also, analytical expressions for the minimum time to evacuate a building, and for the corresponding allocation of evacuees to routes, are obtained.",
"",
"",
"Exit paths in buildings are designed to minimise evacuation time when the building is at full capacity. We present an evacuation support system which does this regardless of the number of evacuees. The core concept is to even-out congestion in the building by diverting evacuees to less-congested paths in order to make maximal usage of all accessible routes throughout the entire evacuation process. The system issues a set of flow-optimal routes using a capacity-constrained routing algorithm which anticipates evolutions in path metrics using the concept of \"future capacity reservation\". In order to direct evacuees in an intuitive manner whilst implementing the routing algorithm's scheme, we use dynamic exit signs, i.e. whose pointing direction can be controlled. To make this system practical and minimise reliance on sensors during the evacuation, we use an evacuee mobility model and make several assumptions on the characteristics of the evacuee flow. We validate this concept using simulations, and show how the underpinning assumptions may limit the system's performance, especially in low-headcount evacuations.",
"The main purpose of this work is to present a formulation of the building evacuation problem that incorporates evacuation routes and applies the functions developed by Nelson and McLennan [H.E. Nelson, H.A. McLennan (Eds.), Emergency Movement, The SFPE Handbook of Fire Protection Engineering, 1996, pp. 3.286-3.295 (Section 3 Chapter 14)] to model the movement of people. These considerations lead to significant changes in the form of the evacuation and inverse evacuation functions, so it is necessary to develop a new procedure for solving the building evacuation problem.",
"Emergency response operations can benefit from the use of information systems that reduce decision making time and facilitate co-ordination between the participating units. We propose the use of two such systems and evaluate them with a specialised software platform that we have developed for simulation of disasters in buildings. The first system provides movement decision support to evacuees by directing them through the shortest or less hazardous routes to the exit. It is composed of a network of decision nodes and sensor nodes, positioned at specific locations inside the building. The recommendations of the decision nodes are computed in a distributed manner and communicated to the evacuees or rescue personnel in their vicinity. The second system uses wireless-equipped robots that move inside a disaster area and establish a network for two-way communication between trapped civilians and rescuers. They are autonomous and their goal is to maximise the number of civilians connected to the network. We evaluate both proposed information systems in various emergency scenarios, using the specialised simulation software that we developed.",
"The evacuation of a building is a challenging problem, since the evacuees most of the times do not know or do not follow the optimal evacuation route. Especially during an ongoing hazard present in the building, finding the best evacuation route becomes harder as the conditions along the paths change in the course of the evacuation procedure. In this paper we propose a distributed system that will compute the best evacuation routes in real-time, while a hazard is spreading inside the building. The system is composed of a network of decision nodes and sensor nodes, positioned in specific locations inside the building. The recommendations of the decision nodes are computed in a distributed manner, at each of the decision nodes, which then communicate them to evacuees or rescue personnel located in their vicinity. We evaluate our proposed system in various emergency scenarios, using a multi-agent simulation platform for Building Evacuation. Our results indicate that the presence of the system improves the outcome of the evacuation with respect to the evacuation time and the injury level of the evacuees.",
""
]
} |
1411.4455 | 2950831664 | The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features. To tackle the sparsity and noise challenges, we propose solving the classification problem using matrix completion on factorized matrix of minimized rank. We formulate relation classification as completing the unknown labels of testing items (entity pairs) in a sparse matrix that concatenates training and testing textual features with training labels. Our algorithmic framework is based on the assumption that the rank of item-by-feature and item-by-label joint matrix is low. We apply two optimization models to recover the underlying low-rank matrix leveraging the sparsity of feature-label matrix. The matrix completion problem is then solved by the fixed point continuation (FPC) algorithm, which can find the global optimum. Experiments on two widely used datasets with different dimensions of textual features demonstrate that our low-rank matrix completion approach significantly outperforms the baseline and the state-of-the-art methods. | The idea of distant supervision was first proposed in the field of bioinformatics @cite_2 . used WordNet as the knowledge base to discover more hypernym-hyponym relations between entities from news articles. However, both the bioinformatics database and WordNet are maintained by a few experts, and thus are hardly kept up-to-date. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1954715867"
],
"abstract": [
"Recently, there has been much effort in making databases for molecular biology more accessible and interoperable. However, information in text form, such as MEDLINE records, remains a greatly underutilized source of biological information. We have begun a research effort aimed at automatically mapping information from text sources into structured representations, such as knowledge bases. Our approach to this task is to use machine-learning methods to induce routines for extracting facts from text. We describe two learning methods that we have applied to this task - a statistical text classification method, and a relational learning method - and our initial experiments in learning such information-extraction routines. We also present an approach to decreasing the cost of learning information-extraction routines by learning from \"weakly\" labeled training data."
]
} |
1411.4455 | 2950831664 | The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features. To tackle the sparsity and noise challenges, we propose solving the classification problem using matrix completion on factorized matrix of minimized rank. We formulate relation classification as completing the unknown labels of testing items (entity pairs) in a sparse matrix that concatenates training and testing textual features with training labels. Our algorithmic framework is based on the assumption that the rank of item-by-feature and item-by-label joint matrix is low. We apply two optimization models to recover the underlying low-rank matrix leveraging the sparsity of feature-label matrix. The matrix completion problem is then solved by the fixed point continuation (FPC) algorithm, which can find the global optimum. Experiments on two widely used datasets with different dimensions of textual features demonstrate that our low-rank matrix completion approach significantly outperforms the baseline and the state-of-the-art methods. | Our work is more relevant to 's which considered the task as a matrix factorization problem. Their approach is composed of several models, such as PCA @cite_24 and collaborative filtering @cite_5 . However, they did not address the data noise introduced by the basic assumption of distant supervision. | {
"cite_N": [
"@cite_24",
"@cite_5"
],
"mid": [
"2135001774",
"1994389483"
],
"abstract": [
"Principal component analysis (PCA) is a commonly applied technique for dimensionality reduction. PCA implicitly minimizes a squared loss function, which may be inappropriate for data that is not real-valued, such as binary-valued data. This paper draws on ideas from the Exponential family, Generalized linear models, and Bregman distances, to give a generalization of PCA to loss functions that we argue are better suited to other data types. We describe algorithms for minimizing the loss functions, and give examples on simulated data.",
"Recommender systems provide users with personalized suggestions for products or services. These systems often rely on Collaborating Filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The two more successful approaches to CF are latent factor models, which directly profile both users and products, and neighborhood models, which analyze similarities between products or users. In this work we introduce some innovations to both approaches. The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model. Further accuracy improvements are achieved by extending the models to exploit both explicit and implicit feedback by the users. The methods are tested on the Netflix data. Results are better than those previously published on that dataset. In addition, we suggest a new evaluation metric, which highlights the differences among methods, based on their performance at a top-K recommendation task."
]
} |
1411.4199 | 2952600206 | We present a simple but powerful reinterpretation of kernelized locality-sensitive hashing (KLSH), a general and popular method developed in the vision community for performing approximate nearest-neighbor searches in an arbitrary reproducing kernel Hilbert space (RKHS). Our new perspective is based on viewing the steps of the KLSH algorithm in an appropriately projected space, and has several key theoretical and practical benefits. First, it eliminates the problematic conceptual difficulties that are present in the existing motivation of KLSH. Second, it yields the first formal retrieval performance bounds for KLSH. Third, our analysis reveals two techniques for boosting the empirical performance of KLSH. We evaluate these extensions on several large-scale benchmark image retrieval data sets, and show that our analysis leads to improved recall performance of at least 12%, and sometimes much higher, over the standard KLSH method. | There have been conflicting views about the comparison of KLSH and LSH after applying kernel PCA @cite_15 to the data. For example, some work @cite_23 has concluded that KLSH has a clear performance edge over KPCA+LSH, while these results are contradicted by the empirical analysis in @cite_14 @cite_11 which demonstrated that LSH after a KPCA projection step shows a significant improvement over KLSH. We will see in that these two seemingly disparate methods are equivalent (up to how the random vectors are drawn in the two approaches), and the performance gap observed in practice is largely due to the choice of parameters. Although @cite_14 gives some error analysis for the LSH after a PCA projection step using the Cauchy-Schwarz inequality, no explicit performance bounds are proved. Thus, it fails to show the interesting tradeoffs and retrieval bounds that we derive in . | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_23",
"@cite_11"
],
"mid": [
"2140095548",
"2739530197",
"",
"2094900960"
],
"abstract": [
"A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.",
"Many algorithms have been proposed to handle efficient search in large databases for simple metrics such as the Euclidean distance. However, few approaches apply to more sophisticated Positive Semi-Definite (PSD) kernels. In this document, we propose for such kernels to use the concept of explicit embedding and to cast the search problem into a Euclidean space. We first describe an exact nearest neighbor search technique which relies on bounds on the approximation of the kernel. We show that, in the case of SIFT descriptors, one can retrieve the nearest neighbor with probability 1 by computing only a fraction of the costly kernels between the query and the database vectors. We then propose to combine explicit embedding with a recent Euclidean approximate nearest neighbor search method and show that it leads to significant improvements with respect to the state-of-the-art methods which rely on an implicit embedding. The database vectors being indexed by short codes, the approach is shown to scale to a dataset comprising 200 million vectors on a commodity server.",
"",
"We introduce an asymmetric sparse approximate embedding optimized for fast kernel comparison operations arising in large-scale visual search. In contrast to other methods that perform an explicit approximate embedding using kernel PCA followed by a distance compression technique in Rd, which loses information at both steps, our method utilizes the implicit kernel representation directly. In addition, we empirically demonstrate that our method needs no explicit training step and can operate with a dictionary of random exemplars from the dataset. We evaluate our method on three benchmark image retrieval datasets: SIFT1M, ImageNet, and 80M-TinyImages."
]
} |
1411.3550 | 1659681212 | Social media have become part of modern news reporting, used by journalists to spread information and find sources, or as a news source by individuals. The quest for prominence and recognition on social media sites like Twitter can sometimes eclipse accuracy and lead to the spread of false information. As a way to study and react to this trend, we introduce TwitterTrails , an interactive, web-based tool ( twittertrails.com ) that allows users to investigate the origin and propagation characteristics of a rumor and its refutation, if any, on Twitter. Visualizations of burst activity, propagation timeline, retweet and co-retweeted networks help its users trace the spread of a story. Within minutes TwitterTrails will collect relevant tweets and automatically answer several important questions regarding a rumor: its originator, burst characteristics, propagators and main actors according to the audience. In addition, it will compute and report the rumor's level of visibility and, as an example of the power of crowdsourcing, the audience's skepticism towards it, which correlates with the rumor's credibility. We envision TwitterTrails as a valuable tool for individual use, but especially for amateur and professional journalists investigating recent and breaking stories. Further, its expanding collection of investigated rumors can be used to answer questions regarding the amount and success of misinformation on Twitter. | Similar to TwitterTrails are tools which focus on timelines to visualize the spread and propagation of a story or real time event, often focusing on bursts or peaks of data to assist in summarization of the data. Narratives tracks the frequency of terms in blog data to track the evolution of news stories @cite_21 . Like in TwitterTrails , use a burstiness algorithm to automatically detect peaks in the data, and use these to extract and summarize events, and then rank them based on interest and importance @cite_15 .
Although TwitterTrails performs a similar task in finding the first peak in the data, its end result is to provide the propagation visualization to the user, in order to allow them to analyse and theorize about the origin of their story. | {
"cite_N": [
"@cite_15",
"@cite_21"
],
"mid": [
"2005580324",
"2063340904"
],
"abstract": [
"In this paper, we present a framework and a system that extracts events relevant to a query from a collection C of documents, and places such events along a timeline. Each event is represented by a sentence extracted from C, based on the assumption that \"important\" events are widely cited in many documents for a period of time within which these events are of interest. In our experiments, we used queries that are event types (\"earthquake\") and person names (e.g. \"George Bush\"). Evaluation was performed using G8 leader names as queries: comparison made by human evaluators between manually and system generated timelines showed that although manually generated timelines are on average more preferable, system generated timelines are sometimes judged to be better than manually constructed ones.",
"Analyzing unstructured text streams can be challenging. One popular approach is to isolate specific themes in the text, and to visualize the connections between them. Some existing systems, like ThemeRiver, provide a temporal view of changes in themes; other systems, like In-Spire, use clustering techniques to help an analyst identify the themes at a single point in time. Narratives combines both of these techniques; it uses a temporal axis to visualize ways that concepts have changed over time, and introduces several methods to explore how those concepts relate to each other. Narratives is designed to help the user place news stories in their historical and social context by understanding how the major topics associated with them have changed over time. Users can relate articles through time by examining the topical keywords that summarize a specific news event. By tracking the attention to a news article in the form of references in social media (such as weblogs), a user discovers both important events and measures the social relevance of these stories."
]
} |
1411.3550 | 1659681212 | Social media have become part of modern news reporting, used by journalists to spread information and find sources, or as a news source by individuals. The quest for prominence and recognition on social media sites like Twitter can sometimes eclipse accuracy and lead to the spread of false information. As a way to study and react to this trend, we introduce TwitterTrails , an interactive, web-based tool ( twittertrails.com ) that allows users to investigate the origin and propagation characteristics of a rumor and its refutation, if any, on Twitter. Visualizations of burst activity, propagation timeline, retweet and co-retweeted networks help its users trace the spread of a story. Within minutes TwitterTrails will collect relevant tweets and automatically answer several important questions regarding a rumor: its originator, burst characteristics, propagators and main actors according to the audience. In addition, it will compute and report the rumor's level of visibility and, as an example of the power of crowdsourcing, the audience's skepticism towards it, which correlates with the rumor's credibility. We envision TwitterTrails as a valuable tool for individual use, but especially for amateur and professional journalists investigating recent and breaking stories. Further, its expanding collection of investigated rumors can be used to answer questions regarding the amount and success of misinformation on Twitter. | Some of these tools focus on highlighting keywords and phrases in the data: ThemeRiver uses a timeline to map the prominence of topical keywords over time, to find temporal patterns quickly and easily @cite_12 . A similar meme tracking tool is developed by , mapping the rise and fall of memes in the blogosphere and news media @cite_10 . TimeMines creates “overview timelines” by extracting nouns and named entities and charting the frequency of these features over time @cite_3 . | {
"cite_N": [
"@cite_3",
"@cite_10",
"@cite_12"
],
"mid": [
"193933736",
"2127492100",
"2106738877"
],
"abstract": [
"",
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.",
"The ThemeRiver visualization depicts thematic variations over time within a large collection of documents. The thematic changes are shown in the context of a time-line and corresponding external events. The focus on temporal thematic change within a context framework allows a user to discern patterns that suggest relationships or trends. For example, the sudden change of thematic strength following an external event may indicate a causal relationship. Such patterns are not readily accessible in other visualizations of the data. We use a river metaphor to convey several key notions. The document collection's time-line, selected thematic content and thematic strength are indicated by the river's directed flow, composition and changing width, respectively. The directed flow from left to right is interpreted as movement through time and the horizontal distance between two points on the river defines a time interval. At any point in time, the vertical distance, or width, of the river indicates the collective strength of the selected themes. Colored \"currents\" flowing within the river represent individual themes. A current's vertical width narrows or broadens to indicate decreases or increases in the strength of the individual theme."
]
} |
1411.3550 | 1659681212 | Social media have become part of modern news reporting, used by journalists to spread information and find sources, or as a news source by individuals. The quest for prominence and recognition on social media sites like Twitter can sometimes eclipse accuracy and lead to the spread of false information. As a way to study and react to this trend, we introduce TwitterTrails , an interactive, web-based tool ( twittertrails.com ) that allows users to investigate the origin and propagation characteristics of a rumor and its refutation, if any, on Twitter. Visualizations of burst activity, propagation timeline, retweet and co-retweeted networks help its users trace the spread of a story. Within minutes TwitterTrails will collect relevant tweets and automatically answer several important questions regarding a rumor: its originator, burst characteristics, propagators and main actors according to the audience. In addition, it will compute and report the rumor's level of visibility and, as an example of the power of crowdsourcing, the audience's skepticism towards it, which correlates with the rumor's credibility. We envision TwitterTrails as a valuable tool for individual use, but especially for amateur and professional journalists investigating recent and breaking stories. Further, its expanding collection of investigated rumors can be used to answer questions regarding the amount and success of misinformation on Twitter. | One of the earliest systems that focused on studying patterns of information propagation in online social networks like Twitter is Truthy @cite_19 . Truthy is based on the concept of memes that spread in the network. Such memes are detected and followed over time to capture their diffusion patterns. Truthy is a more general-purpose system than the ones mentioned previously in this section and, despite its name, does not provide an explicit assessment of the veracity of the tracked memes.
However, through visualizations of propagation patterns and other metrics (e.g., sentiment analysis), Truthy can enable a user to come to a certain conclusion on her own. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2100974526"
],
"abstract": [
"Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We demonstrate a web service that tracks political memes in Twitter and helps detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We also present some cases of abusive behaviors uncovered by our service. Our web service is based on an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events."
]
} |
1411.3736 | 2466365654 | The effectiveness and the simple implementation of physical layer jammers make them an essential threat for wireless networks. In a multihop wireless network, where jammers can interfere with the transmission of user messages at intermediate nodes along the path, one can employ jamming oblivious routing and then employ physical-layer techniques (e.g., spread spectrum) to suppress jamming. However, whereas these approaches can provide significant gains, the residual jamming can still severely limit system performance. This motivates the consideration of routing approaches that account for the differences in the jamming environment between different paths. First, we take a straightforward approach where an equal outage probability is allocated to each link along a path and develop a minimum energy routing solution. Next, we demonstrate the shortcomings of this approach and then consider the joint problem of outage allocation and routing by employing an approximation to the link outage probability. This yields an efficient and effective routing algorithm that only requires knowledge of the measured jamming at each node. Numerical results demonstrate that the amount of energy saved by the proposed methods with respect to a standard minimum energy routing algorithm, especially for parameters appropriate for terrestrial wireless networks, is substantial. | When the system nodes are able to move, they can simply leave the jammed area for a safe place. This is the basis of the spatial retreat technique, in which the system nodes move away from a stationary jammer @cite_12 @cite_39 . Another jamming evasion technique is channel surfing, where the system nodes change their communication frequency to an interference-free frequency band when necessary @cite_29 . These approaches, however, are orthogonal to the problem considered here, which deals with static nodes. | {
"cite_N": [
"@cite_29",
"@cite_12",
"@cite_39"
],
"mid": [
"2151090982",
"2133282516",
"2122285088"
],
"abstract": [
"Wireless sensor networks are susceptible to interference that can disrupt sensor communication. In order to cope with this disruption, we explore channel surfing, whereby the sensor nodes adapt their channel assignments to restore network connectivity in the presence of interference. We explore two different approaches to channel surfing: coordinated channel switching, where the entire sensor network adjusts its channel; and spectral multiplexing, where nodes in a jammed region switch channels while nodes on the boundary of a jammed region act as radio relays between different spectral zones. For spectral multiplexing, we have devised both synchronous and asynchronous strategies to facilitate the spectral scheduling needed to improve network fidelity when sensor nodes operate on multiple channels. In designing these algorithms, we have taken a system-oriented approach that has focused on exploring actual implementation issues under realistic network settings. We have implemented these proposed methods on a testbed of 30 Mica2 sensor nodes, and the experimental results show that these strategies can each repair network connectivity in the presence of interference without introducing significant overhead.",
"Wireless networks are built upon a shared medium that makes it easy for adversaries to launch denial of service (DoS) attacks. One form of denial of service is targeted at preventing sources from communicating. These attacks can be easily accomplished by an adversary by either bypassing MAC-layer protocols, or emitting a radio signal targeted at jamming a particular channel. In this paper we present two strategies that may be employed by wireless devices to evade a MAC PHY-layer jamming-style wireless denial of service attack. The first strategy, channel surfing, is a form of spectral evasion that involves legitimate wireless devices changing the channel that they are operating on. The second strategy, spatial retreats, is a form of spatial evasion whereby legitimate mobile devices move away from the locality of the DoS emitter. We study both of these strategies for three broad wireless communication scenarios: two-party radio communication, an infrastructured wireless network, and an ad hoc wireless network. We evaluate several of our proposed strategies and protocols through ns-2 simulations and experiments on the Berkeley mote platform.",
"Wireless sensor networks are built upon a shared medium that makes it easy for adversaries to conduct radio interference, or jamming, attacks that effectively cause a denial of service of either transmission or reception functionalities. These attacks can easily be accomplished by an adversary by either bypassing MAC-layer protocols or emitting a radio signal targeted at jamming a particular channel. In this article we survey different jamming attacks that may be employed against a sensor network. In order to cope with the problem of jamming, we discuss a two-phase strategy involving the diagnosis of the attack, followed by a suitable defense strategy. We highlight the challenges associated with detecting jamming. To cope with jamming, we propose two different but complementary approaches. One approach is to simply retreat from the interferer which may be accomplished by either spectral evasion (channel surfing) or spatial evasion (spatial retreats). The second approach aims to compete more actively with the interferer by adjusting resources, such as power levels and communication coding, to achieve communication in the presence of the jammer."
]
} |
1411.3736 | 2466365654 | The effectiveness and the simple implementation of physical layer jammers make them an essential threat for wireless networks. In a multihop wireless network, where jammers can interfere with the transmission of user messages at intermediate nodes along the path, one can employ jamming oblivious routing and then employ physical-layer techniques (e.g., spread spectrum) to suppress jamming. However, whereas these approaches can provide significant gains, the residual jamming can still severely limit system performance. This motivates the consideration of routing approaches that account for the differences in the jamming environment between different paths. First, we take a straightforward approach where an equal outage probability is allocated to each link along a path and develop a minimum energy routing solution. Next, we demonstrate the shortcomings of this approach and then consider the joint problem of outage allocation and routing by employing an approximation to the link outage probability. This yields an efficient and effective routing algorithm that only requires knowledge of the measured jamming at each node. Numerical results demonstrate that the amount of energy saved by the proposed methods with respect to a standard minimum energy routing algorithm, especially for parameters appropriate for terrestrial wireless networks, is substantial. | Several works consider one-hop energy aware communication in the presence of one jammer @cite_4 @cite_10 @cite_28 @cite_17 . It is usually treated as a game between a jammer and two system nodes. The objective of the jammer is to increase the cost (energy) of communication for the system nodes, whereas the objective of the system nodes is increasing the cost of jamming for the jammer and conveying their message with a minimum use of energy. Unlike these approaches, in this work we consider multi-hop communication in the presence of many jammers. | {
"cite_N": [
"@cite_28",
"@cite_10",
"@cite_4",
"@cite_17"
],
"mid": [
"2071274528",
"2107831657",
"2161513761",
"2114467681"
],
"abstract": [
"The security issue in collaborative sensing in cognitive radio networks can be modeled as attackers and secondary users in a jamming and anti-jamming scenario. In this paper, we introduce a stochastic zero-sum game model to study the strategies. Primary users, secondary users and jammers are the three types of agents in the system. The primary users dictate the system states and their transitions while the secondary users and jammers behave non-cooperatively to achieve their goals independently under different system environment. Our Markovian game model captures not only the zero-sum interactions between secondary users and the jammers but also the dynamics of the system. Our results indicate that the secondary users can enhance their security level or increase their long-term payoff by either improving their sensing capabilities to confuse the jammer with the choice or choosing to communicate under states where the available channels are less prone to jamming. In the numerical experiments, we point out that the payoff of the secondary users increases with the number of available jamming-free channels and is eventually limited by the behavior of primary users.",
"We consider a scenario where a sophisticated jammer jams an area in a single-channel wireless sensor network. The jammer controls the probability of jamming and transmission range to cause maximal damage to the network in terms of corrupted communication links. The jammer action ceases when it is detected by a monitoring node in the network, and a notification message is transferred out of the jamming region. The jammer is detected at a monitor node by employing an optimal detection test based on the percentage of incurred collisions. On the other hand, the network computes channel access probability in an effort to minimize the jamming detection plus notification time. In order for the jammer to optimize its benefit, it needs to know the network channel access probability and number of neighbors of the monitor node. Accordingly, the network needs to know the jamming probability of the jammer. We study the idealized case of perfect knowledge by both the jammer and the network about the strategy of one another, and the case where the jammer or the network lack this knowledge. The latter is captured by formulating and solving optimization problems, the solutions of which constitute best responses of the attacker or the network to the worst-case strategy of each other. We also take into account potential energy constraints of the jammer and the network. We extend the problem to the case of multiple observers and adaptable jamming transmission range and propose an intuitive heuristic jamming strategy for that case.",
"The process of communication jamming can be modeled as a two-person zero-sum noncooperative dynamic game played between a communicator (a transmitter-receiver pair) and a jammer. We consider a one-way time-slotted packet radio communication link in the presence of a jammer, where the data rate is fixed and (1) in each slot, the communicator and jammer choose their respective power levels in a random fashion from a zero and a positive value; (2) both players are subject to temporal energy constraints which account for protection of the communicating and jamming transmitters from overheating. The payoff function is the time average of the mean payoff per slot. The game is solved for certain ranges of the players' transmitter parameters. Structures of steady-state solutions to the game are also investigated. The general behavior of the players' strategies and payoff increment is found to depend on a parameter related to the payoff matrix, which we call the payoff parameter, and the transmitters' parameters. When the payoff parameter is lower than a threshold, the optimal steady-state strategies are mixed and the payoff increment constant over time, whereas when it is greater than the threshold, the strategies are pure, and the payoff increment exhibits oscillatory behavior.",
"In this work, we study the problem of power allocation and adaptive modulation in teams of decision makers. We consider the special case of two teams with each team consisting of two mobile agents. Agents belonging to the same team communicate over wireless ad hoc networks, and they try to split their available power between the tasks of communication and jamming the nodes of the other team. The agents have constraints on their total energy and instantaneous power usage. The cost function adopted is the difference between the rates of erroneously transmitted bits of each team. We model the adaptive modulation problem as a zero-sum matrix game which in turn gives rise to a continuous kernel game to handle power control. Based on the communications model, we present sufficient conditions on the physical parameters of the agents for the existence of a pure strategy saddle-point equilibrium (PSSPE)."
]
} |
1411.3736 | 2466365654 | The effectiveness and the simple implementation of physical layer jammers make them an essential threat for wireless networks. In a multihop wireless network, where jammers can interfere with the transmission of user messages at intermediate nodes along the path, one can employ jamming oblivious routing and then employ physical-layer techniques (e.g., spread spectrum) to suppress jamming. However, whereas these approaches can provide significant gains, the residual jamming can still severely limit system performance. This motivates the consideration of routing approaches that account for the differences in the jamming environment between different paths. First, we take a straightforward approach where an equal outage probability is allocated to each link along a path and develop a minimum energy routing solution. Next, we demonstrate the shortcomings of this approach and then consider the joint problem of outage allocation and routing by employing an approximation to the link outage probability. This yields an efficient and effective routing algorithm that only requires knowledge of the measured jamming at each node. Numerical results demonstrate that the amount of energy saved by the proposed methods with respect to a standard minimum energy routing algorithm, especially for parameters appropriate for terrestrial wireless networks, is substantial. | Some works consider jamming-aware multi-path routing @cite_6 @cite_19 @cite_13 @cite_43 @cite_2 . While in a completely different setting from this work, these multi-path algorithms are mostly based on sending a message along multiple node-disjoint or link-disjoint paths to ensure fault-tolerant message delivery. Although such algorithms are suitable for wired networks, their application in wireless networks is challenging due to lack of path diversity at the source or destination of a communication session. 
In particular, node-disjoint and link-disjoint paths in wireless networks are not necessarily independent paths. Moreover, in wireless networks the topology is itself a function of power allocation at the physical layer and of the propagation environment, e.g., fading. | {
"cite_N": [
"@cite_6",
"@cite_19",
"@cite_43",
"@cite_2",
"@cite_13"
],
"mid": [
"1760148955",
"1646091373",
"2136377206",
"2005497978",
"2140000065"
],
"abstract": [
"Many studies show that, when Internet links go up or down, the dynamics of BGP may cause several minutes of packet loss. The loss occurs even when multiple paths between the sender and receiver domains exist, and is unwarranted given the high connectivity of the Internet. Our objective is to ensure that Internet domains stay connected as long as the underlying network is connected. Our solution, R-BGP works by pre-computing a few strategically chosen failover paths. R-BGP provably guarantees that a domain will not become disconnected from any destination as long as it will have a policy-compliant path to that destination after convergence. Surprisingly, this can be done using a few simple and practical modifications to BGP, and, like BGP, requires announcing only one path per neighbor. Simulations on the AS-level graph of the current Internet show that R-BGP reduces the number of domains that see transient disconnectivity resulting from a link failure from 22 for edge links and 14 for core links down to zero in both cases.",
"Mobile ad hoc networks (MANETs) consist of a collection of wireless mobile nodes which dynamically exchange data among themselves without the reliance on a fixed base station or a wired backbone network. MANET nodes are typically distinguished by their limited power, processing, and memory resources as well as high degree of mobility. In such networks, the wireless mobile nodes may dynamically enter the network as well as leave the network. Due to the limited transmission range of wireless network nodes, multiple hops are usually needed for a node to exchange information with any other node in the network. Thus routing is a crucial issue to the design of a MANET. In this paper, we specifically examine the issues of multipath routing in MANETs. Multipath routing allows the establishment of multiple paths between a single source and single destination node. It is typically proposed in order to increase the reliability of data transmission (i.e., fault tolerance) or to provide load balancing. Load balancing is of especial importance in MANETs because of the limited bandwidth between the nodes. We also discuss the application of multipath routing to support application constraints such as reliability, load-balancing, energy-conservation, and Quality-of-Service (QoS).",
"We present the design of a routing system in which end-systems set tags to select non-shortest path routes as an alternative to explicit source routes. Routers collectively generate these routes by using tags as hints to independently deflect packets to neighbors that lie off the shortest-path. We show how this can be done simply, by local extensions of the shortest path machinery, and safely, so that loops are provably not formed. The result is to provide end-systems with a high-level of path diversity that allows them to bypass undesirable locations within the network. Unlike explicit source routing, our scheme is inherently scalable and compatible with ISP policies because it derives from the deployed Internet routing. We also suggest an encoding that is compatible with common IP usage, making our scheme incrementally deployable at the granularity of individual routers.",
"Jamming attacks are especially harmful to the reliability of wireless communication, as they can effectively disrupt communication between any node pairs. Existing jamming defenses primarily focus on repairing connectivity between adjacent nodes. In this paper, we address jamming at the network level and focus on restoring the end-to-end data delivery through multipath routing. As long as all paths do not fail concurrently, the end-to-end path availability is maintained. Prior work in multipath selection improves routing availability by choosing node-disjoint paths or link-disjoint paths. However, through our experiments on jamming effects using MicaZ nodes, we show that disjointness is insufficient for selecting fault-independent paths. Thus, we address multipath selection based on the knowledge of a path's availability history. Using Availability History Vectors (AHVs) of paths, we present a centralized AHV-based algorithm to select fault-independent paths, and a distributed AHV-based routing protocol built on top of a classic routing algorithm in ad hoc networks. Our extensive simulation results validate that both AHV-based algorithms are effective in overcoming the jamming impact by maximizing the end-to-end availability of the selected paths.",
"Mobile ad hoc networks are characterized by multi-hop wireless links, absence of any cellular infrastructure, and frequent host mobility. Design of efficient routing protocols in such networks is a challenging issue. A class of routing protocols called on-demand protocols has recently attracted attention because of their low routing overhead. The on-demand protocols depend on query floods to discover routes whenever a new route is needed. Such floods take up a substantial portion of network bandwidth. We focus on a particular on-demand protocol, called dynamic source routing, and show how intelligent use of multipath techniques can reduce the frequency of query floods. We develop an analytic modeling framework to determine the relative frequency of query floods for various techniques. Results show that while multipath routing is significantly better than single path routing, the performance advantage is small beyond a few paths and for long path lengths. It also shows that providing all intermediate nodes in the primary (shortest) route with alternative paths has a significantly better performance than providing only the source with alternate paths."
]
} |
1411.3736 | 2466365654 | The effectiveness and the simple implementation of physical layer jammers make them an essential threat for wireless networks. In a multihop wireless network, where jammers can interfere with the transmission of user messages at intermediate nodes along the path, one can employ jamming oblivious routing and then employ physical-layer techniques (e.g., spread spectrum) to suppress jamming. However, whereas these approaches can provide significant gains, the residual jamming can still severely limit system performance. This motivates the consideration of routing approaches that account for the differences in the jamming environment between different paths. First, we take a straightforward approach where an equal outage probability is allocated to each link along a path and develop a minimum energy routing solution. Next, we demonstrate the shortcomings of this approach and then consider the joint problem of outage allocation and routing by employing an approximation to the link outage probability. This yields an efficient and effective routing algorithm that only requires knowledge of the measured jamming at each node. Numerical results demonstrate that the amount of energy saved by the proposed methods with respect to a standard minimum energy routing algorithm, especially for parameters appropriate for terrestrial wireless networks, is substantial. | In order to minimize energy consumption in wireless networks, numerous energy-efficient routing algorithms have been studied @cite_8 @cite_21 @cite_36 @cite_33 @cite_23 @cite_42 . For instance, in @cite_42 energy-efficient routing with an end-to-end probability of error constraint is considered. However, @cite_42 does not consider any kind of jamming and or spatially non-uniform interference. Instead of the total energy usage of the network nodes, some works consider the battery usage of each node, or balanced energy dissipation in the network as their criteria @cite_44 @cite_11 @cite_16 . 
For example, in @cite_44 , instead of choosing one source-destination path, the algorithm chooses several paths and uses them alternately to avoid quick energy depletion of each path. While minimum energy routing has been studied extensively, a few works (e.g. see @cite_15 @cite_14 ) considered security-aware routing. However, unlike our work, they considered routing in the presence of passive eavesdroppers, which is different from the problem considered in this work with active jammers. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_21",
"@cite_42",
"@cite_44",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2149522963",
"2107708869",
"1992876548",
"1787995280",
"2170469173",
"1499623512",
"2143514830",
"2042123165",
"1994027334",
"2132895032",
"2107905431"
],
"abstract": [
"There is a rich recent literature on information-theoretically secure communication at the physical layer of wireless networks, where secret communication between a single transmitter and receiver has been studied extensively. In this paper, we consider how single-hop physical layer security techniques can be extended to multi-hop wireless networks. We show that guaranteed security can be achieved in multi-hop networks by augmenting physical layer security techniques, such as cooperative jamming, with the higher layer network mechanisms, such as routing. Specifically, we consider the secure minimum energy routing problem, in which the objective is to compute a minimum energy path between two network nodes subject to constraints on the end-to-end communication secrecy and goodput over the path. This problem is formulated as a constrained optimization of transmission power and link selection, which is proved to be NP-hard. Nevertheless, we show that efficient algorithms exist to compute both exact and approximate solutions for the problem. In particular, we develop an exact solution of pseudo-polynomial complexity, as well as an e-optimal approximation of polynomial complexity. Simulation results are also provided to show the utility of our algorithms and quantify their energy savings compared to a combination of (standard) security-agnostic minimum energy routing and physical layer security. In the simulated scenarios, we observe that, by jointly optimizing link selection at the network layer and cooperative jamming at the physical layer, our algorithms reduce the network energy consumption by half.",
"In this paper, we develop an energy efficient routing scheme that takes into account the interference created by existing flows in the network. Unlike previous works, we explicitly study the impact of routing a new flow on the energy consumption of the network. Under certain assumptions on how links are scheduled, we can show that our proposed algorithm is asymptotically (in time) optimal in terms of minimizing the average energy consumption. We also develop a distributed version of the algorithm. Our algorithm automatically detours around a congested area in the network, which helps mitigate network congestion and improve overall network performance. Using simulations, we show that the routes chosen by our algorithm (centralized and distributed) are more energy efficient than the state of the art.",
"In this paper we present a case for using new power-aware metrics for determining routes in wireless ad hoc networks. We present five different metrics based on battery power consumption at nodes. We show that using these metrics in a shortest-cost routing algorithm reduces the cost per packet of routing packets by 5-30% over shortest-hop routing (this cost reduction is on top of a 40-70% reduction in energy consumption obtained by using PAMAS, our MAC layer protocol). Furthermore, using these new metrics ensures that the mean time to node failure is increased significantly. An interesting property of using shortest-cost routing is that packet delays do not increase. Finally, we note that our new metrics can be used in most traditional routing protocols for ad hoc networks.",
"An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power.",
"We describe a distributed position-based network protocol optimized for minimum energy consumption in mobile wireless networks that support peer-to-peer communications. Given any number of randomly deployed nodes over an area, we illustrate that a simple local optimization scheme executed at each node guarantees strong connectivity of the entire network and attains the global minimum energy solution for stationary networks. Due to its localized nature, this protocol proves to be self-reconfiguring and stays close to the minimum energy solution when applied to mobile networks. Simulation results are used to verify the performance of the protocol.",
"Reducing power consumption and increasing battery life of nodes in an ad-hoc network requires an integrated power control and routing strategy. Power optimal routing selects the multi-hop links that require the minimum total power cost for data transmission under a constraint on the link quality. This paper studies optimal power routing under the constraint of a fixed end-to-end probability of error and compares the power optimal routes obtained with this criterion with those from the more commonly used fixed per hop error rate constraint. The comparison is carried out by looking at the properties of the power optimal graph, formed by the union of all the power optimal routes. The paper also provides algorithms to determine the power optimal routes.",
"The recent interest in sensor networks has led to a number of routing schemes that use the limited resources available at sensor nodes more efficiently. These schemes typically try to find the minimum energy path to optimize energy usage at a node. In this paper we take the view that always using lowest energy paths may not be optimal from the point of view of network lifetime and long-term connectivity. To optimize these measures, we propose a new scheme called energy aware routing that uses sub-optimal paths occasionally to provide substantial gains. Simulation results are also presented that show an increase in network lifetimes of up to 40% over comparable schemes like directed diffusion routing. Nodes also burn energy in a more equitable way across the network ensuring a more graceful degradation of service with time.",
"This paper considers the problem of finding minimum-energy cooperative routes in a wireless network with variable wireless channels. We assume that each node in the network is equipped with a single omnidirectional antenna and, motivated by the large body of physical layer research indicating its potential utility, that multiple nodes are able to coordinate their transmissions at the physical layer in order to take advantage of spatial diversity. Such coordination, however, is intrinsically intertwined with routing decisions, thus motivating the work. We first formulate the energy cost of forming a cooperative link between two nodes based on a two-stage transmission strategy assuming that only statistical knowledge about channels is available. Utilizing the link cost formulation, we show that optimal static routes in a network can be computed by running Dijkstra's algorithm over an extended network graph created by cooperative links. However, due to the variability of wireless channels, we argue that a many-to-one cooperation model in static routing is suboptimal. Hence, we develop an opportunistic routing algorithm based on many-to-many cooperation, and show that optimal routes in a network can be computed by a stochastic version of the Bellman-Ford algorithm. We use static and opportunistic optimal algorithms as baselines to develop heuristic link selection algorithms that are energy efficient while being computationally simpler than the optimal algorithms. We simulate our algorithms and show that while optimal cooperation and link selection can reduce energy consumption by almost an order of magnitude compared to non-cooperative approaches, our simple heuristics achieve similar energy savings while being computationally efficient as well.",
"There is a rich recent literature on how to assist secure communication between a single transmitter and receiver at the physical layer of wireless networks through techniques such as cooperative jamming. In this paper, we consider how these single-hop physical layer security techniques can be extended to multi-hop wireless networks and show how to augment physical layer security techniques with higher layer network mechanisms such as coding and routing. Specifically, we consider the secure minimum energy routing problem, in which the objective is to compute a minimum energy path between two network nodes subject to constraints on the end-to-end communication secrecy and goodput over the path. This problem is formulated as a constrained optimization of transmission power and link selection, which is proved to be NP-hard. Nevertheless, we show that efficient algorithms exist to compute both exact and approximate solutions for the problem. In particular, we develop an exact solution of pseudo-polynomial complexity, as well as an o-optimal approximation of polynomial complexity. Simulation results are also provided to show the utility of our algorithms and quantify their energy savings compared to a combination of (standard) security-agnostic minimum energy routing and physical layer security. In the simulated scenarios, we observe that, by jointly optimizing link selection at the network layer and cooperative jamming at the physical layer, our algorithms reduce the network energy consumption by half.",
"A routing problem in static wireless ad hoc networks is considered as it arises in a rapidly deployed, sensor based, monitoring system known as the wireless sensor network. Information obtained by the monitoring nodes needs to be routed to a set of designated gateway nodes. In these networks, every node is capable of sensing, data processing, and communication, and operates on its limited amount of battery energy consumed mostly in transmission and reception at its radio transceiver. If we assume that the transmitter power level can be adjusted to use the minimum energy required to reach the intended next hop receiver then the energy consumption rate per unit information transmission depends on the choice of the next hop node, i.e., the routing decision. We formulate the routing problem as a linear programming problem, where the objective is to maximize the network lifetime, which is equivalent to the time until the network partition due to battery outage. Two different models are considered for the information-generation processes. One assumes constant rates and the other assumes an arbitrary process. A shortest cost path routing algorithm is proposed which uses link costs that reflect both the communication energy consumption rates and the residual energy levels at the two end nodes. The algorithm is amenable to distributed implementation. Simulation results with both information-generation process models show that the proposed algorithm can achieve network lifetime that is very close to the optimal network lifetime obtained by solving the linear programming problem.",
"Previously proposed sensor network data dissemination schemes require periodic low-rate flooding of data in order to allow recovery from failure. We consider constructing two kinds of multipaths to enable energy efficient recovery from failure of the shortest path between source and sink. Disjoint multipath has been studied in the literature. We propose a novel braided multipath scheme, which results in several partially disjoint multipath schemes. We find that braided multipaths are a viable alternative for energy-efficient recovery from isolated and patterned failures."
]
} |
1411.3895 | 2007475688 | Highlights: An algorithm which is able to learn controllers with embedded preprocessing for mobile robotics is presented. Quantified Fuzzy Propositions, a model able to summarize the low-level input data, are used. The algorithm was tested with the wall-following behavior both in simulated and real environments. Results show a better and statistically significant performance of our proposal. The approach was also successfully tested in three real world behaviors. The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensorial data are preprocessed or transformed into high level and meaningful values of variables which are usually defined from expert knowledge. In the second stage, a machine learning technique is applied to obtain a controller that maps these high level variables to the control commands that are actually sent to the robot. This paper describes an algorithm that is able to embed the preprocessing stage into the learning stage in order to get controllers directly starting from sensorial raw data with no expert knowledge involved. Due to the high dimensionality of the sensorial data, this approach uses Quantified Fuzzy Rules (QFRs), that are able to transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. The algorithm has been tested with the implementation of the wall-following behavior both in several realistic simulated environments with different complexity and on a Pioneer 3-AT robot in two real environments.
Results have been compared with several well-known learning algorithms combined with different data preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real world applications for which IQFRL plays a central role are also presented: path and object tracking with static and moving obstacles avoidance. | The learning of controllers for autonomous robots has been dealt with by using different machine learning techniques. Among the most popular approaches are evolutionary algorithms @cite_10 @cite_38 , neural networks @cite_34 and reinforcement learning @cite_30 @cite_41 . Hybridizations of these, such as evolutionary neural networks @cite_32 , reinforcement learning with evolutionary algorithms @cite_17 @cite_13 , the widely used genetic fuzzy systems @cite_9 @cite_5 @cite_23 @cite_12 @cite_29 @cite_31 @cite_22 , or even more uncommon combinations like ant colony optimization with reinforcement learning @cite_24 or differential evolution @cite_4 or evolutionary group based particle swarm optimization @cite_0 have been successfully applied. Furthermore, over the last few years, mobile robotic controllers have been getting some attention as a test case for the automatic design of type-2 fuzzy logic controllers @cite_35 @cite_38 @cite_18 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_22",
"@cite_41",
"@cite_29",
"@cite_5",
"@cite_10",
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_23",
"@cite_17",
"@cite_32",
"@cite_34",
"@cite_12",
"@cite_9",
"@cite_24",
"@cite_0",
"@cite_31",
"@cite_13"
],
"mid": [
"1561996023",
"2233677881",
"1963565134",
"",
"2124958386",
"1989329927",
"1540106380",
"",
"",
"2028249667",
"2168640251",
"2072881998",
"2079690958",
"1995515218",
"2132586633",
"2164301484",
"2111616957",
"2111065664",
"2024327854",
"2035397042"
],
"abstract": [
"The successful application of Reinforcement Learning (RL) techniques to robot control is limited by the fact that, in most robotic tasks, the state and action spaces are continuous, multidimensional, and in essence, too large for conventional RL algorithms to work. The well known curse of dimensionality makes infeasible using a tabular representation of the value function, which is the classical approach that provides convergence guarantees. When a function approximation technique is used to generalize among similar states, the convergence of the algorithm is compromised, since updates unavoidably affect an extended region of the domain, that is, some situations are modified in a way that has not been really experienced, and the update may degrade the approximation. We propose a RL algorithm that uses a probability density estimation in the joint space of states, actions and Q-values as a means of function approximation. This allows us to devise an updating approach that, taking into account the local sampling density, avoids an excessive modification of the approximation far from the observed sample.",
"In recent years, the autonomous mobile robot has found diverse applications such as home health care system, surveillance system in civil and military applications and exhibition robot. For surveillance tasks such as moving target pursuit or following and patrol in a region using mobile robot, this paper presents a fuzzy Q-learning, as an intelligent control for cost-based navigation, for autonomous learning of suitable behaviors without the supervision or external human command. The Q-learning is used to select the appropriate rule of interval type-2 fuzzy rule base. The initial testing of the intelligent control is demonstrated by simulation as well as experiment of a simple wall-following based patrolling task of autonomous mobile robot.",
"The design of fuzzy controllers for the implementation of behaviors in mobile robotics is a complex and highly time-consuming task. The use of machine learning techniques such as evolutionary algorithms or artificial neural networks for the learning of these controllers allows to automate the design process. In this paper, the automated design of a fuzzy controller using genetic algorithms for the implementation of the wall-following behavior in a mobile robot is described. The algorithm is based on the iterative rule learning approach, and is characterized by three main points. First, learning has no restrictions neither in the number of membership functions, nor in their values. In the second place, the training set is composed of a set of examples uniformly distributed along the universe of discourse of the variables. This warrantees that the quality of the learned behavior does not depend on the environment, and also that the robot will be capable to face different situations. Finally, the trade off between the number of rules and the quality accuracy of the controller can be adjusted selecting the value of a parameter. Once the knowledge base has been learned, a process for its reduction and tuning is applied, increasing the cooperation between rules and reducing its number.",
"",
"The problem of an effective behavior learning of autonomous robots is one of the most important tasks of the modern robotics. In fact, it is well known that the learning to optimize actions of autonomous agents in a dynamic environment is one of the most complex challenges of the intelligent system design. In this paper, we propose a hybrid approach integrating fuzzy logic system with genetic algorithm for high-level skills learning of robots within the RoboCup simulation soccer domain. Through the experiments, we found that the proposed method has good property of computation efficiency and also has a good advantage applied to the environment of RoboCup.",
"The design of fuzzy controllers for the implementation of behaviors in mobile robotics is a complex and highly time-consuming task. The use of machine learning techniques, such as evolutionary algorithms or artificial neural networks for the learning of these controllers allows to automate the design process. In this paper, the automated design of a fuzzy controller using genetic algorithms for the implementation of the wall-following behavior in a mobile robot is described. The algorithm is based on the Iterative Rule Learning (IRL) approach, and a parameter (@d) is defined with the aim of selecting the relation between the number of rules and the quality and accuracy of the controller. The designer has to define the universe of discourse and the precision of each variable, and also the scoring function. No restrictions are placed neither in the number of linguistic labels nor in the values that define the membership functions.",
"Genetic programming provides an automated design strategy to evolve complex controllers based on evolution in nature. In this contribution we use genetic programming to automatically evolve efficient robot controllers for a corridor following task. Based on tests executed in a simulation environment we show that very robust and efficient controllers can be obtained. Also, we stress that it is important to provide sufficiently diverse fitness cases, offering a sound basis for learning more complex behaviour. The evolved controller is successfully applied to real environments as well. Finally, controller and sensor morphology are co-evolved, clearly resulting in an improved sensor configuration.",
"",
"",
"This paper proposes evolutionary wall-following control of a mobile robot using an interval type-2 fuzzy controller (IT2FC) with species-differential-evolution-activated continuous ant colony optimization (SDE-CACO). Both the position and speed of a mobile robot are controlled by using two IT2FCs to improve noise resistance ability. A new cost function is defined to accurately evaluate the wall-following performance of an evolutionary IT2FC. A two-stage training approach is proposed that learns a position IT2FC followed by a speed IT2FC to optimize both the wall-following accuracy and the moving speed. The proposed learning approach avoids the time consuming task of the exhaustive collection of supervised input-output training pairs. All fuzzy rules are generated online using a clustering-based approach during the evolutionary learning process. All of the free parameters in an online-generated IT2FC are optimized using SDE-CACO, in which an SDE mutation operation is incorporated within a continuous ACO to improve its explorative ability. The proposed SDE-CACO is compared with various population-based optimization algorithms to demonstrate its efficiency and effectiveness in the wall-following control problem. This study also includes experiments that demonstrate wall-following control utilizing a real mobile robot.",
"A methodology for learning behaviors in mobile robotics has been developed. It consists of a technique to automatically generate input–output data plus a genetic fuzzy system that obtains cooperative weighted rules. The advantages of our methodology over other approaches are that the designer has to choose the values of only a few parameters, the obtained controllers are general (the quality of the controller does not depend on the environment), and the learning process takes place in simulation, but the controllers work also on the real robot with good performance. The methodology has been used to learn the wall-following behavior, and the obtained controller has been tested using a Nomad 200 robot in both simulated and real environments. © 2009 Wiley Periodicals, Inc.",
"Conventional fuzzy logic controller is applicable when there are only two fuzzy inputs with usually one output. Complexity increases when there are more than one inputs and outputs making the system unrealizable. The ordinal structure model of fuzzy reasoning has an advantage of managing high-dimensional problem with multiple input and output variables ensuring the interpretability of the rule set. This is achieved by giving an associated weight to each rule in the defuzzification process. In this work, a methodology to design an ordinal fuzzy logic controller with application for obstacle avoidance of Khepera mobile robot is presented. The implementation will show that ordinal structure fuzzy is easier to design with highly interpretable rules compared to conventional fuzzy controller. In order to achieve high accuracy, a specially tailored Genetic Algorithm (GA) approach for reinforcement learning has been proposed to optimize the ordinal structure fuzzy controller. Simulation results demonstrated improved obstacle avoidance performance in comparison with conventional fuzzy controllers. Comparison of direct and incremental GA for optimization of the controller is also presented.",
"Evolutionary Robotics (ER) is one of promising approaches to design robot controllers which essentially have complicated and or complex properties. In most ER research, the sensory-motor mappings of robots are represented as artificial neural networks, and their connection weights (and sometimes the structure of the networks) can be optimized in the parameter spaces by using evolutionary computation. However, generally, the evolved neural controllers could be fragile in unexperienced environments, especially in real worlds, because the evolutionary optimization processes would be executed in idealized simulators. This is known as the gap problem between the simulated and real worlds. To overcome this, the author focused on evolving an on-line learning ability instead of weight parameters in a simulated environment. According to recent biological findings, actually, the kinds of on-line adaptation abilities can be found in real nervous systems of insects and crustaceans, and it is also known that a variety of neuromodulators (NMs) play crucial roles to regulate the network characteristics (i.e. activating blocking changing of synaptic connections). Based on this, a neuromodulatory neural network model was proposed and it was utilized as a mobile robot controller. In the paper, the detail behavior analysis of the evolved neuromodulatory neural network is also discussed.",
"Robots have played an important role in the automation of computer aided manufacturing. The classical robot control implementation involves an expensive key step of model-based programming. An intuitive way to reduce this expensive exercise is to replace programming with machine learning of robot actions from demonstration where a (learner) robot learns an action by observing a demonstrator robot performing the same. To achieve this learning from demonstration (LFD) different machine learning techniques such as Artificial Neural Networks (ANN), Genetic Algorithms, Hidden Markov Models, Support Vector Machines, etc. can be used. This piece of work focuses exclusively on ANNs. Since ANNs have many standard architectural variations divided into two basic computational categories namely the recurrent networks and feed-forward networks, representative networks from each have been selected for study, i.e. Feed Forward Multilayer Perceptron (FF) network for feed-forward networks category and Elman (EL), and Nonlinear Autoregressive Exogenous Model (NARX) networks for the recurrent networks category. The main objective of this work is to identify the most suitable neural architecture for application of LFD in learning different robot actions. The sensor and actuator streams of demonstrated action are used as training data for ANN learning. Consequently, the learning capability is measured by comparing the error between demonstrator and corresponding learner streams. To achieve fairness in comparison three steps have been taken. First, Dynamic Time Warping is used to measure the error between demonstrator and learner streams, which gives resilience against translation in time. Second, comparison statistics are drawn between the best, instead of weight-equal, configurations of competing architectures so that learning capability of any architecture is not forced handicap. 
Third, each configuration's error is calculated as the average of ten trials of all possible learning sequences with random weight initialization so that the error value is independent of a particular sequence of learning or a particular set of initial weights. Six experiments are conducted to get a performance pattern of each architecture. In each experiment, a total of nine different robot actions were tested. Error statistics thus obtained have shown that NARX architecture is most suitable for this learning problem whereas Elman architecture has shown the worst suitability. Interestingly the computationally lesser MLP gives much lower and slightly higher error statistics compared to the computationally superior Elman and NARX neural architectures, respectively.",
"Service robots will play an increasing and more important role in the society in the next years. One of the main challenges is to endow robots with enough autonomy to operate on real environments. To reach that goal, the design of controllers to solve simple tasks must be automatized. Engineers look for learning algorithms that are general, robust, require low expertise knowledge, and generate controllers that can run on the real robot without any tuning stage. In this paper, a framework to learn behaviors (controllers) in mobile robotics, fulfilling the previous requirements, has been used. The framework is based on two modules: dataset generation and a data-driven evolutionary-based learning algorithm to obtain fuzzy controllers. Nevertheless, the design of a fuzzy controller still requires the selection of the type of learning algorithm, and also to choose the value of some design parameters. In this paper we present an exhaustive study on a set of evolutionary-based data-driven learning algorithms, for learning fuzzy controllers in mobile robotics, that cover a wide range of the accuracy interpretability trade-off. The study has also evaluated the influence of the values of all the design parameters over accuracy and interpretability. The objective is to analyze the performance of the different algorithms for the design of behaviors in mobile robotics, and to extract some general rules that can help in the process to design new behaviors. The analysis comprises two different behaviors (wall-following and moving object following) and more than 450 tests, both in simulation and on a Pioneer II AT robot. Results have shown very good performances in complex and realistic conditions for the different combinations of algorithms and parameters.",
"An Autonomous Mobile Robot (AMR) is a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposeful manner. Robot Navigation and Obstacle Avoidance are from the most important problems in mobile robots, especially in unknown environments. It must be able to interact with other objects safely. Several techniques such as Fuzzy logic, Reinforcement learning, Neural Networks and Genetic Algorithms, have applied to AMR in order to improve their performance. During the past several years Hybrid Genetic-fuzzy method has emerged as one of the most active and fruitful areas for research in the application of intelligent system design. The objective of this work is to provide a Hybrid method by which an improved set of rules governing the actions and behavior of a simple navigating and obstacle avoiding AMR. Genes are in the form of distances and angles labels. The chromosomes are represented as a rule written in a Boolean algebraic form. The method used to enhance the performance employs a simulation model designed by using Visual Basic software.",
"This paper proposes a reinforcement ant optimized fuzzy controller (FC) design method, called RAOFC, and applies it to wheeled-mobile-robot wall-following control under reinforcement learning environments. The inputs to the designed FC are range-finding sonar sensors, and the controller output is a robot steering angle. The antecedent part in each fuzzy rule uses interval type-2 fuzzy sets in order to increase FC robustness. No a priori assignment of fuzzy rules is necessary in RAOFC. An online aligned interval type-2 fuzzy clustering (AIT2FC) method is proposed to generate rules automatically. The AIT2FC not only flexibly partitions the input space but also reduces the number of fuzzy sets in each input dimension, which improves controller interpretability. The consequent part of each fuzzy rule is designed using Q-value aided ant colony optimization (QACO). The QACO approach selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of whose values are updated using reinforcement signals. Simulations and experiments on mobile-robot wall-following control show the effectiveness and efficiency of the proposed RAOFC.",
"This paper proposes an evolutionary-group-based particle-swarm-optimization (EGPSO) algorithm for fuzzy-controller (FC) design. The EGPSO uses a group-based framework to incorporate crossover and mutation operations into particle-swarm optimization. The EGPSO dynamically forms different groups to select parents in crossover operations, particle updates, and replacements. An adaptive velocity-mutated operation (AVMO) is incorporated to improve search ability. The EGPSO is applied to design all of the free parameters in a zero-order Takagi-Sugeno-Kang (TSK)-type FC. The objective of EGPSO is to improve fuzzy-control accuracy and design efficiency. Comparisons with different population-based optimizations of fuzzy-control problems demonstrate the superiority of EGPSO performance. In particular, the EGPSO-designed FC is applied to mobile-robot navigation in unknown environments. In this application, the robot learns to follow object boundaries through an EGPSO-designed FC. A simple learning environment is created to build this behavior without an exhaustive collection of input-output training pairs in advance. A behavior supervisor is proposed to combine the boundary-following behavior and the target-seeking behavior for navigation, and the problem of dead cycles is considered. Successful mobile-robot navigation in simulation and real environments verifies the EGPSO-designed FC-navigation approach.",
"In view of many applications, in recent years, there has been increasing interest in robot's control. Two intelligent controllers based on fuzzy logic and neural network are developed to trace the desired trajectory for a robot. A variety of evolutionary algorithms, have been proposed to approximately solve problems of common engineering applications. Increasingly common applications involve automatic learning of nonlinear mappings that govern the behavior of control systems. In many cases where robot control is of primary concern, the systems used to demonstrate the effectiveness of evolutionary algorithms often do not represent practical robotic systems. In this paper, genetic algorithms (GA) are the evolutionary strategy of interest. This procedure and the manner in which fuzzy controllers are codified into chromosomes is described. It is applied to learn fuzzy control rules for a practical autonomous vehicle steering control problem, namely, path tracking. GA handles the simultaneous evolution of membership functions and rule bases for the fuzzy path tracker. Simulation results show that the proposed fuzzy controller whose all parameters have been tuned simultaneously using GAs, offers advantages over existing controllers and has improved performance.",
"Genetic Network Programming (GNP) has been proposed as one of the evolutionary algorithms and extended with reinforcement learning (GNP-RL). The combination of evolution and learning can efficiently evolve programs and the fitness improvement has been confirmed in the simulations of tileworld problems, elevator group supervisory control systems, stock trading models and wall following behavior of Khepera robot. However, its robustness in testing environments has not been analyzed in detail yet. In this paper, the learning mechanism in the testing environment is introduced and it is confirmed that GNP-RL can show the robustness using a robot simulator WEBOTS, especially when unexperienced sensor troubles suddenly occur. The simulation results show that GNP-RL works well in the testing even if wrong sensor information is given because GNP-RL has a function to change programs using alternative actions automatically. In addition, the analysis on the effects of the parameters of GNP-RL is carried out in both training and testing simulations."
]
} |
1411.3895 | 2007475688 | Highlights: An algorithm which is able to learn controllers with embedded preprocessing for mobile robotics is presented. Quantified Fuzzy Propositions, a model able to summarize the low-level input data, are used. The algorithm was tested with the wall-following behavior both in simulated and real environments. Results show a better and statistically significant performance of our proposal. The approach was also successfully tested in three real world behaviors. The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensorial data are preprocessed or transformed into high level and meaningful values of variables which are usually defined from expert knowledge. In the second stage, a machine learning technique is applied to obtain a controller that maps these high level variables to the control commands that are actually sent to the robot. This paper describes an algorithm that is able to embed the preprocessing stage into the learning stage in order to get controllers directly starting from sensorial raw data with no expert knowledge involved. Due to the high dimensionality of the sensorial data, this approach uses Quantified Fuzzy Rules (QFRs), that are able to transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. IQFRL is able to learn rules with different structures, and can manage linguistic variables with multiple granularities. The algorithm has been tested with the implementation of the wall-following behavior both in several realistic simulated environments with different complexity and on a Pioneer 3-AT robot in two real environments.
Results have been compared with several well-known learning algorithms combined with different data preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real world applications for which IQFRL plays a central role are also presented: path and object tracking with static and moving obstacles avoidance. | An extensive use of expert knowledge is made in all of these approaches. In @cite_9 , 360 laser sensor beams are used as input data and are heuristically combined into 8 sectors as inputs to the learning algorithm. On the other hand, in @cite_32 @cite_5 @cite_23 @cite_12 @cite_29 @cite_22 @cite_24 @cite_0 the input variables of the learning algorithm are defined by an expert. Moreover, in @cite_5 @cite_23 @cite_29 @cite_22 @cite_18 the evaluation function of the evolutionary algorithm must be defined by an expert for each particular behavior. As in the latter case, the reinforcement learning approaches need the definition of an appropriate reward function using expert knowledge. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_9",
"@cite_29",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_12"
],
"mid": [
"",
"1963565134",
"2164301484",
"2124958386",
"2079690958",
"2111616957",
"2111065664",
"2168640251",
"1989329927",
"2132586633"
],
"abstract": [
"",
"The design of fuzzy controllers for the implementation of behaviors in mobile robotics is a complex and highly time-consuming task. The use of machine learning techniques such as evolutionary algorithms or artificial neural networks for the learning of these controllers allows to automate the design process. In this paper, the automated design of a fuzzy controller using genetic algorithms for the implementation of the wall-following behavior in a mobile robot is described. The algorithm is based on the iterative rule learning approach, and is characterized by three main points. First, learning has no restrictions neither in the number of membership functions, nor in their values. In the second place, the training set is composed of a set of examples uniformly distributed along the universe of discourse of the variables. This warrantees that the quality of the learned behavior does not depend on the environment, and also that the robot will be capable to face different situations. Finally, the trade off between the number of rules and the quality accuracy of the controller can be adjusted selecting the value of a parameter. Once the knowledge base has been learned, a process for its reduction and tuning is applied, increasing the cooperation between rules and reducing its number.",
"An Autonomous Mobile Robot (AMR) is a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposeful manner. Robot Navigation and Obstacle Avoidance are from the most important problems in mobile robots, especially in unknown environments. It must be able to interact with other objects safely. Several techniques such as Fuzzy logic, Reinforcement learning, Neural Networks and Genetic Algorithms, have applied to AMR in order to improve their performance. During the past several years Hybrid Genetic-fuzzy method has emerged as one of the most active and fruitful areas for research in the application of intelligent system design. The objective of this work is to provide a Hybrid method by which an improved set of rules governing the actions and behavior of a simple navigating and obstacle avoiding AMR. Genes are in the form of distances and angles labels. The chromosomes are represented as a rule written in a Boolean algebraic form. The method used to enhance the performance employs a simulation model designed by using Visual Basic software.",
"The problem of an effective behavior learning of autonomous robots is one of the most important tasks of the modern robotics. In fact, it is well known that the learning to optimize actions of autonomous agents in a dynamic environment is one of the most complex challenges of the intelligent system design. In this paper, we propose a hybrid approach integrating fuzzy logic system with genetic algorithm for high-level skills learning of robots within the RoboCup simulation soccer domain. Through the experiments, we found that the proposed method has good property of computation efficiency and also has a good advantage applied to the environment of RoboCup.",
"Evolutionary Robotics (ER) is one of promising approaches to design robot controllers which essentially have complicated and or complex properties. In most ER research, the sensory-motor mappings of robots are represented as artificial neural networks, and their connection weights (and sometimes the structure of the networks) can be optimized in the parameter spaces by using evolutionary computation. However, generally, the evolved neural controllers could be fragile in unexperienced environments, especially in real worlds, because the evolutionary optimization processes would be executed in idealized simulators. This is known as the gap problem between the simulated and real worlds. To overcome this, the author focused on evolving an on-line learning ability instead of weight parameters in a simulated environment. According to recent biological findings, actually, the kinds of on-line adaptation abilities can be found in real nervous systems of insects and crustaceans, and it is also known that a variety of neuromodulators (NMs) play crucial roles to regulate the network characteristics (i.e. activating blocking changing of synaptic connections). Based on this, a neuromodulatory neural network model was proposed and it was utilized as a mobile robot controller. In the paper, the detail behavior analysis of the evolved neuromodulatory neural network is also discussed.",
"This paper proposes a reinforcement ant optimized fuzzy controller (FC) design method, called RAOFC, and applies it to wheeled-mobile-robot wall-following control under reinforcement learning environments. The inputs to the designed FC are range-finding sonar sensors, and the controller output is a robot steering angle. The antecedent part in each fuzzy rule uses interval type-2 fuzzy sets in order to increase FC robustness. No a priori assignment of fuzzy rules is necessary in RAOFC. An online aligned interval type-2 fuzzy clustering (AIT2FC) method is proposed to generate rules automatically. The AIT2FC not only flexibly partitions the input space but also reduces the number of fuzzy sets in each input dimension, which improves controller interpretability. The consequent part of each fuzzy rule is designed using Q-value aided ant colony optimization (QACO). The QACO approach selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of whose values are updated using reinforcement signals. Simulations and experiments on mobile-robot wall-following control show the effectiveness and efficiency of the proposed RAOFC.",
"This paper proposes an evolutionary-group-based particle-swarm-optimization (EGPSO) algorithm for fuzzy-controller (FC) design. The EGPSO uses a group-based framework to incorporate crossover and mutation operations into particle-swarm optimization. The EGPSO dynamically forms different groups to select parents in crossover operations, particle updates, and replacements. An adaptive velocity-mutated operation (AVMO) is incorporated to improve search ability. The EGPSO is applied to design all of the free parameters in a zero-order Takagi-Sugeno-Kang (TSK)-type FC. The objective of EGPSO is to improve fuzzy-control accuracy and design efficiency. Comparisons with different population-based optimizations of fuzzy-control problems demonstrate the superiority of EGPSO performance. In particular, the EGPSO-designed FC is applied to mobile-robot navigation in unknown environments. In this application, the robot learns to follow object boundaries through an EGPSO-designed FC. A simple learning environment is created to build this behavior without an exhaustive collection of input-output training pairs in advance. A behavior supervisor is proposed to combine the boundary-following behavior and the target-seeking behavior for navigation, and the problem of dead cycles is considered. Successful mobile-robot navigation in simulation and real environments verifies the EGPSO-designed FC-navigation approach.",
"A methodology for learning behaviors in mobile robotics has been developed. It consists of a technique to automatically generate input–output data plus a genetic fuzzy system that obtains cooperative weighted rules. The advantages of our methodology over other approaches are that the designer has to choose the values of only a few parameters, the obtained controllers are general (the quality of the controller does not depend on the environment), and the learning process takes place in simulation, but the controllers work also on the real robot with good performance. The methodology has been used to learn the wall-following behavior, and the obtained controller has been tested using a Nomad 200 robot in both simulated and real environments. © 2009 Wiley Periodicals, Inc.",
"The design of fuzzy controllers for the implementation of behaviors in mobile robotics is a complex and highly time-consuming task. The use of machine learning techniques, such as evolutionary algorithms or artificial neural networks for the learning of these controllers allows to automate the design process. In this paper, the automated design of a fuzzy controller using genetic algorithms for the implementation of the wall-following behavior in a mobile robot is described. The algorithm is based on the Iterative Rule Learning (IRL) approach, and a parameter (@d) is defined with the aim of selecting the relation between the number of rules and the quality and accuracy of the controller. The designer has to define the universe of discourse and the precision of each variable, and also the scoring function. No restrictions are placed neither in the number of linguistic labels nor in the values that define the membership functions.",
"Service robots will play an increasing and more important role in the society in the next years. One of the main challenges is to endow robots with enough autonomy to operate on real environments. To reach that goal, the design of controllers to solve simple tasks must be automatized. Engineers look for learning algorithms that are general, robust, require low expertise knowledge, and generate controllers that can run on the real robot without any tuning stage. In this paper, a framework to learn behaviors (controllers) in mobile robotics, fulfilling the previous requirements, has been used. The framework is based on two modules: dataset generation and a data-driven evolutionary-based learning algorithm to obtain fuzzy controllers. Nevertheless, the design of a fuzzy controller still requires the selection of the type of learning algorithm, and also to choose the value of some design parameters. In this paper we present an exhaustive study on a set of evolutionary-based data-driven learning algorithms, for learning fuzzy controllers in mobile robotics, that cover a wide range of the accuracy interpretability trade-off. The study has also evaluated the influence of the values of all the design parameters over accuracy and interpretability. The objective is to analyze the performance of the different algorithms for the design of behaviors in mobile robotics, and to extract some general rules that can help in the process to design new behaviors. The analysis comprises two different behaviors (wall-following and moving object following) and more than 450 tests, both in simulation and on a Pioneer II AT robot. Results have shown very good performances in complex and realistic conditions for the different combinations of algorithms and parameters."
]
} |
1411.3895 | 2007475688 | Highlights: An algorithm which is able to learn controllers with embedded preprocessing for mobile robotics is presented. Quantified Fuzzy Propositions, a model able to summarize the low-level input data, are used. The algorithm was tested with the wall-following behavior both in simulated and real environments. Results show a better and statistically significant performance of our proposal. The approach was also successfully tested in three real world behaviors. The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensorial data are preprocessed or transformed into high level and meaningful values of variables which are usually defined from expert knowledge. In the second stage, a machine learning technique is applied to obtain a controller that maps these high level variables to the control commands that are actually sent to the robot. This paper describes an algorithm that is able to embed the preprocessing stage into the learning stage in order to get controllers directly starting from sensorial raw data with no expert knowledge involved. Due to the high dimensionality of the sensorial data, this approach uses Quantified Fuzzy Rules (QFRs), that are able to transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. IQFRL is able to learn rules with different structures, and can manage linguistic variables with multiple granularities. The algorithm has been tested with the implementation of the wall-following behavior both in several realistic simulated environments with different complexity and on a Pioneer 3-AT robot in two real environments. 
Results have been compared with several well-known learning algorithms combined with different data preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real-world applications for which IQFRL plays a central role are also presented: path and object tracking with avoidance of static and moving obstacles. | The approaches based on genetic fuzzy systems use different alternatives in the definition of the membership functions. In @cite_17 @cite_9 @cite_29 the membership functions are defined heuristically. In @cite_23 @cite_12 the labels have been uniformly distributed, but the granularity of each input variable is defined using expert knowledge. On the other hand, in @cite_5 @cite_31 @cite_22 @cite_24 @cite_0 an approximative approach is used, i.e., different membership functions are learned for each rule, which reduces the interpretability of the learned controller. | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_29",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_31",
"@cite_12",
"@cite_17"
],
"mid": [
"1963565134",
"2164301484",
"2124958386",
"2111616957",
"2111065664",
"2168640251",
"1989329927",
"2024327854",
"2132586633",
"2072881998"
],
"abstract": [
"The design of fuzzy controllers for the implementation of behaviors in mobile robotics is a complex and highly time-consuming task. The use of machine learning techniques such as evolutionary algorithms or artificial neural networks for the learning of these controllers allows to automate the design process. In this paper, the automated design of a fuzzy controller using genetic algorithms for the implementation of the wall-following behavior in a mobile robot is described. The algorithm is based on the iterative rule learning approach, and is characterized by three main points. First, learning has no restrictions neither in the number of membership functions, nor in their values. In the second place, the training set is composed of a set of examples uniformly distributed along the universe of discourse of the variables. This warrantees that the quality of the learned behavior does not depend on the environment, and also that the robot will be capable to face different situations. Finally, the trade off between the number of rules and the quality accuracy of the controller can be adjusted selecting the value of a parameter. Once the knowledge base has been learned, a process for its reduction and tuning is applied, increasing the cooperation between rules and reducing its number.",
"An Autonomous Mobile Robot (AMR) is a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposeful manner. Robot Navigation and Obstacle Avoidance are from the most important problems in mobile robots, especially in unknown environments. It must be able to interact with other objects safely. Several techniques such as Fuzzy logic, Reinforcement learning, Neural Networks and Genetic Algorithms, have applied to AMR in order to improve their performance. During the past several years Hybrid Genetic-fuzzy method has emerged as one of the most active and fruitful areas for research in the application of intelligent system design. The objective of this work is to provide a Hybrid method by which an improved set of rules governing the actions and behavior of a simple navigating and obstacle avoiding AMR. Genes are in the form of distances and angles labels. The chromosomes are represented as a rule written in a Boolean algebraic form. The method used to enhance the performance employs a simulation model designed by using Visual Basic software.",
"The problem of an effective behavior learning of autonomous robots is one of the most important tasks of the modern robotics. In fact, it is well known that the learning to optimize actions of autonomous agents in a dynamic environment is one of the most complex challenges of the intelligent system design. In this paper, we propose a hybrid approach integrating fuzzy logic system with genetic algorithm for high-level skills learning of robots within the RoboCup simulation soccer domain. Through the experiments, we found that the proposed method has good property of computation efficiency and also has a good advantage applied to the environment of RoboCup.",
"This paper proposes a reinforcement ant optimized fuzzy controller (FC) design method, called RAOFC, and applies it to wheeled-mobile-robot wall-following control under reinforcement learning environments. The inputs to the designed FC are range-finding sonar sensors, and the controller output is a robot steering angle. The antecedent part in each fuzzy rule uses interval type-2 fuzzy sets in order to increase FC robustness. No a priori assignment of fuzzy rules is necessary in RAOFC. An online aligned interval type-2 fuzzy clustering (AIT2FC) method is proposed to generate rules automatically. The AIT2FC not only flexibly partitions the input space but also reduces the number of fuzzy sets in each input dimension, which improves controller interpretability. The consequent part of each fuzzy rule is designed using Q-value aided ant colony optimization (QACO). The QACO approach selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of whose values are updated using reinforcement signals. Simulations and experiments on mobile-robot wall-following control show the effectiveness and efficiency of the proposed RAOFC.",
"This paper proposes an evolutionary-group-based particle-swarm-optimization (EGPSO) algorithm for fuzzy-controller (FC) design. The EGPSO uses a group-based framework to incorporate crossover and mutation operations into particle-swarm optimization. The EGPSO dynamically forms different groups to select parents in crossover operations, particle updates, and replacements. An adaptive velocity-mutated operation (AVMO) is incorporated to improve search ability. The EGPSO is applied to design all of the free parameters in a zero-order Takagi-Sugeno-Kang (TSK)-type FC. The objective of EGPSO is to improve fuzzy-control accuracy and design efficiency. Comparisons with different population-based optimizations of fuzzy-control problems demonstrate the superiority of EGPSO performance. In particular, the EGPSO-designed FC is applied to mobile-robot navigation in unknown environments. In this application, the robot learns to follow object boundaries through an EGPSO-designed FC. A simple learning environment is created to build this behavior without an exhaustive collection of input-output training pairs in advance. A behavior supervisor is proposed to combine the boundary-following behavior and the target-seeking behavior for navigation, and the problem of dead cycles is considered. Successful mobile-robot navigation in simulation and real environments verifies the EGPSO-designed FC-navigation approach.",
"A methodology for learning behaviors in mobile robotics has been developed. It consists of a technique to automatically generate input–output data plus a genetic fuzzy system that obtains cooperative weighted rules. The advantages of our methodology over other approaches are that the designer has to choose the values of only a few parameters, the obtained controllers are general (the quality of the controller does not depend on the environment), and the learning process takes place in simulation, but the controllers work also on the real robot with good performance. The methodology has been used to learn the wall-following behavior, and the obtained controller has been tested using a Nomad 200 robot in both simulated and real environments. © 2009 Wiley Periodicals, Inc.",
"The design of fuzzy controllers for the implementation of behaviors in mobile robotics is a complex and highly time-consuming task. The use of machine learning techniques, such as evolutionary algorithms or artificial neural networks for the learning of these controllers allows to automate the design process. In this paper, the automated design of a fuzzy controller using genetic algorithms for the implementation of the wall-following behavior in a mobile robot is described. The algorithm is based on the Iterative Rule Learning (IRL) approach, and a parameter (@d) is defined with the aim of selecting the relation between the number of rules and the quality and accuracy of the controller. The designer has to define the universe of discourse and the precision of each variable, and also the scoring function. No restrictions are placed neither in the number of linguistic labels nor in the values that define the membership functions.",
"In view of many applications, in recent years, there has been increasing interest in robot's control. Two intelligent controllers based on fuzzy logic and neural network are developed to trace the desired trajectory for a robot. A variety of evolutionary algorithms, have been proposed to approximately solve problems of common engineering applications. Increasingly common applications involve automatic learning of nonlinear mappings that govern the behavior of control systems. In many cases where robot control is of primary concern, the systems used to demonstrate the effectiveness of evolutionary algorithms often do not represent practical robotic systems. In this paper, genetic algorithms (GA) are the evolutionary strategy of interest. This procedure and the manner in which fuzzy controllers are codified into chromosomes is described. It is applied to learn fuzzy control rules for a practical autonomous vehicle steering control problem, namely, path tracking. GA handles the simultaneous evolution of membership functions and rule bases for the fuzzy path tracker. Simulation results show that the proposed fuzzy controller whose all parameters have been tuned simultaneously using GAs, offers advantages over existing controllers and has improved performance.",
"Service robots will play an increasing and more important role in the society in the next years. One of the main challenges is to endow robots with enough autonomy to operate on real environments. To reach that goal, the design of controllers to solve simple tasks must be automatized. Engineers look for learning algorithms that are general, robust, require low expertise knowledge, and generate controllers that can run on the real robot without any tuning stage. In this paper, a framework to learn behaviors (controllers) in mobile robotics, fulfilling the previous requirements, has been used. The framework is based on two modules: dataset generation and a data-driven evolutionary-based learning algorithm to obtain fuzzy controllers. Nevertheless, the design of a fuzzy controller still requires the selection of the type of learning algorithm, and also to choose the value of some design parameters. In this paper we present an exhaustive study on a set of evolutionary-based data-driven learning algorithms, for learning fuzzy controllers in mobile robotics, that cover a wide range of the accuracy interpretability trade-off. The study has also evaluated the influence of the values of all the design parameters over accuracy and interpretability. The objective is to analyze the performance of the different algorithms for the design of behaviors in mobile robotics, and to extract some general rules that can help in the process to design new behaviors. The analysis comprises two different behaviors (wall-following and moving object following) and more than 450 tests, both in simulation and on a Pioneer II AT robot. Results have shown very good performances in complex and realistic conditions for the different combinations of algorithms and parameters.",
"Conventional fuzzy logic controller is applicable when there are only two fuzzy inputs with usually one output. Complexity increases when there are more than one inputs and outputs making the system unrealizable. The ordinal structure model of fuzzy reasoning has an advantage of managing high-dimensional problem with multiple input and output variables ensuring the interpretability of the rule set. This is achieved by giving an associated weight to each rule in the defuzzification process. In this work, a methodology to design an ordinal fuzzy logic controller with application for obstacle avoidance of Khepera mobile robot is presented. The implementation will show that ordinal structure fuzzy is easier to design with highly interpretable rules compared to conventional fuzzy controller. In order to achieve high accuracy, a specially tailored Genetic Algorithm (GA) approach for reinforcement learning has been proposed to optimize the ordinal structure fuzzy controller. Simulation results demonstrated improved obstacle avoidance performance in comparison with conventional fuzzy controllers. Comparison of direct and incremental GA for optimization of the controller is also presented."
]
} |
1411.3895 | 2007475688 | Highlights: An algorithm which is able to learn controllers with embedded preprocessing for mobile robotics is presented. Quantified Fuzzy Propositions, a model able to summarize the low-level input data, are used. The algorithm was tested with the wall-following behavior both in simulated and real environments. Results show a better and statistically significant performance of our proposal. The approach was also successfully tested in three real world behaviors. The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensorial data are preprocessed or transformed into high level and meaningful values of variables which are usually defined from expert knowledge. In the second stage, a machine learning technique is applied to obtain a controller that maps these high level variables to the control commands that are actually sent to the robot. This paper describes an algorithm that is able to embed the preprocessing stage into the learning stage in order to get controllers directly starting from sensorial raw data with no expert knowledge involved. Due to the high dimensionality of the sensorial data, this approach uses Quantified Fuzzy Rules (QFRs), that are able to transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. IQFRL is able to learn rules with different structures, and can manage linguistic variables with multiple granularities. The algorithm has been tested with the implementation of the wall-following behavior both in several realistic simulated environments with different complexity and on a Pioneer 3-AT robot in two real environments. 
Results have been compared with several well-known learning algorithms combined with different data preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real-world applications for which IQFRL plays a central role are also presented: path and object tracking with avoidance of static and moving obstacles. | The main problem of learning behaviors using raw sensor input data is the curse of dimensionality. In @cite_30, this issue has been managed from the reinforcement learning perspective, by using a probability density estimation of the joint space of states. Among all the approaches based on evolutionary algorithms, only in @cite_10 is no expert knowledge taken into account. In this work, the number of sensors and their positions are learned, starting from a reduced number of sensors. | {
"cite_N": [
"@cite_30",
"@cite_10"
],
"mid": [
"1561996023",
"1540106380"
],
"abstract": [
"The successful application of Reinforcement Learning (RL) techniques to robot control is limited by the fact that, in most robotic tasks, the state and action spaces are continuous, multidimensional, and in essence, too large for conventional RL algorithms to work. The well known curse of dimensionality makes infeasible using a tabular representation of the value function, which is the classical approach that provides convergence guarantees. When a function approximation technique is used to generalize among similar states, the convergence of the algorithm is compromised, since updates unavoidably affect an extended region of the domain, that is, some situations are modified in a way that has not been really experienced, and the update may degrade the approximation. We propose a RL algorithm that uses a probability density estimation in the joint space of states, actions and Q-values as a means of function approximation. This allows us to devise an updating approach that, taking into account the local sampling density, avoids an excessive modification of the approximation far from the observed sample.",
"Genetic programming provides an automated design strategy to evolve complex controllers based on evolution in nature. In this contribution we use genetic programming to automatically evolve efficient robot controllers for a corridor following task. Based on tests executed in a simulation environment we show that very robust and efficient controllers can be obtained. Also, we stress that it is important to provide sufficiently diverse fitness cases, offering a sound basis for learning more complex behaviour. The evolved controller is successfully applied to real environments as well. Finally, controller and sensor morphology are co-evolved, clearly resulting in an improved sensor configuration."
]
} |
1411.3895 | 2007475688 | Highlights: An algorithm which is able to learn controllers with embedded preprocessing for mobile robotics is presented. Quantified Fuzzy Propositions, a model able to summarize the low-level input data, are used. The algorithm was tested with the wall-following behavior both in simulated and real environments. Results show a better and statistically significant performance of our proposal. The approach was also successfully tested in three real world behaviors. The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensorial data are preprocessed or transformed into high level and meaningful values of variables which are usually defined from expert knowledge. In the second stage, a machine learning technique is applied to obtain a controller that maps these high level variables to the control commands that are actually sent to the robot. This paper describes an algorithm that is able to embed the preprocessing stage into the learning stage in order to get controllers directly starting from sensorial raw data with no expert knowledge involved. Due to the high dimensionality of the sensorial data, this approach uses Quantified Fuzzy Rules (QFRs), that are able to transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. IQFRL is able to learn rules with different structures, and can manage linguistic variables with multiple granularities. The algorithm has been tested with the implementation of the wall-following behavior both in several realistic simulated environments with different complexity and on a Pioneer 3-AT robot in two real environments. 
Results have been compared with several well-known learning algorithms combined with different data preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real-world applications for which IQFRL plays a central role are also presented: path and object tracking with avoidance of static and moving obstacles. | In @cite_43 a Genetic Cooperative-Competitive Learning (GCCL) approach was presented. The proposal learns knowledge bases without preprocessing raw data, but the rules involve approximative labels, while the IQFRL proposal uses unconstrained multiple granularity. Moreover, in this approach it is difficult to adjust the balance between cooperation and competition, a typical issue when learning rules with GCCL. As a result, the obtained rules were quite specific, and the performance of the behavior was not comparable to that of other proposals based on expert knowledge. | {
"cite_N": [
"@cite_43"
],
"mid": [
"2124810271"
],
"abstract": [
"In complex systems it often occurs that relevant infor- mation about the system state and behavior is provided by groups of low-level variables rather than single variables. This grouping into high-level variables introduces a hierachy in the knowledge that can only be captured by means of rules involving propositions with a rep- resentation capability that is more complex than usual ones. In this paper we describe a genetic programming based approach for auto- mated learning of Quantified Fuzzy Rules that are capable to deal with such representation capability. An application of this approach for hierarchical grouping of the distance measures provided by the laser sensors of a mobile robot (for the wall-following behaviour) is presented. Experimentation results show the control action is ac- ceptable although no prior knowledge on the variables definition and structure was introduced in the controller."
]
} |
1411.3201 | 2286272280 | Power consumption costs take up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | @cite_14 addressed this concern when they proposed a power model that supports multiple frequency steps. The power @math at any given utilization @math is given as @math @math @math | {
"cite_N": [
"@cite_14"
],
"mid": [
"2121574851"
],
"abstract": [
"This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50 of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved."
]
} |
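The power model summarized in the rows above and below combines idle and peak power measured at the minimum and maximum frequencies (the equation itself is elided as `@math` placeholders). A minimal Python sketch of such a model is given below: idle and peak power are interpolated linearly between the values at `f_min` and `f_max`, and total power is interpolated linearly in CPU utilization. All wattage and frequency values are illustrative placeholders, not measurements from the cited paper.

```python
def interp_freq(f, f_min, f_max, p_at_min, p_at_max):
    """Linearly interpolate a power figure between f_min and f_max."""
    frac = (f - f_min) / (f_max - f_min)
    return p_at_min + frac * (p_at_max - p_at_min)

def predicted_power(u, f, f_min=1.2e9, f_max=2.4e9,
                    idle_min=30.0, idle_max=38.0,
                    peak_min=70.0, peak_max=110.0):
    """Predicted power (W) at utilization u in [0, 1] and frequency f (Hz).

    Idle and peak power are frequency-interpolated, then the result is
    utilization-interpolated between them. Illustrative coefficients only.
    """
    p_idle = interp_freq(f, f_min, f_max, idle_min, idle_max)
    p_peak = interp_freq(f, f_min, f_max, peak_min, peak_max)
    return p_idle + u * (p_peak - p_idle)
```

At `u = 0` this reduces to the idle power at frequency `f`, and at `u = 1` to the peak power, matching the boundary cases described in the next row.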
1411.3201 | 2286272280 | Power consumption accounts for up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | where @math and @math are the idle power at the maximum and minimum frequencies, i.e., @math and @math , respectively, and @math and @math are the peak power at @math and @math , respectively. It should be noted that power is either linearly proportional to the frequency or cubically proportional to it, since the processor voltage is set based on the frequency. @cite_14 , however, assumed a relationship between power and frequency. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2121574851"
],
"abstract": [
"This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on/off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50% of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved."
]
} |
1411.3201 | 2286272280 | Power consumption accounts for up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | In this paper, we empirically establish power's linear dependency on the frequency, and since our work is closely related to @cite_14 , we show how our model predicts power consumption more accurately than that given in @cite_14 . In the latest survey of power models @cite_9 , Petrucci's model, which we have used for comparison, is the only cited model that combines these two parameters (CPU utilization and frequency). Consider the case of the Intel i7 processor - it has a considerably low idle power - 38 | {
"cite_N": [
"@cite_9",
"@cite_14"
],
"mid": [
"2140644653",
"2121574851"
],
"abstract": [
"The power consumption of presently available Internet servers and data centers is not proportional to the work they accomplish. The scientific community is attempting to address this problem in a number of ways, for example, by employing dynamic voltage and frequency scaling, selectively switching off idle or underutilized servers, and employing energy-aware task scheduling. Central to these approaches is the accurate estimation of the power consumption of the various subsystems of a server, particularly, the processor. We distinguish between power consumption measurement techniques and power consumption estimation models. The techniques refer to the art of instrumenting a system to measure its actual power consumption whereas the estimation models deal with indirect evidences (such as information pertaining to CPU utilization or events captured by hardware performance counters) to reason about the power consumption of a system under consideration. The paper provides a comprehensive survey of existing or proposed approaches to estimate the power consumption of single-core as well as multicore processors, virtual machines, and an entire server.",
"This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on/off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50% of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved."
]
} |
1411.3201 | 2286272280 | Power consumption accounts for up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | Hsu and Feng @cite_11 empirically observed the effect of frequency change on the completion time of tasks by characterizing the compute-boundedness of each microbenchmark. They proposed a model verifying that the relative performance can be approximated by the relative number of instructions executed per second (MIPS) and the relative frequency. @cite_18 and Marinoni and Buttazzo @cite_2 experimentally verified that frequency changes have a smaller effect on memory-intensive applications and minimally affect network- and disk-intensive applications. 
Wang and Wang @cite_15 used Model Predictive Control (MPC) theory to design a controller that changes the CPU allocation of the VM and the frequency of the servers based on a power cap. Though their performance model considers both the CPU and the frequency of the server as parameters, their experiments were performed neither on non-CPU-intensive applications, which suffer less performance loss under frequency changes, nor on heterogeneous applications with varied SLAs. | {
"cite_N": [
"@cite_15",
"@cite_18",
"@cite_2",
"@cite_11"
],
"mid": [
"2139052027",
"893280317",
"2095504329",
"2171935755"
],
"abstract": [
"Today's data centers face two critical challenges. First, various customers need to be assured by meeting their required service-level agreements such as response time and throughput. Second, server power consumption must be controlled in order to avoid failures caused by power capacity overload or system overheating due to increasing high server density. However, existing work controls power and application-level performance separately, and thus, cannot simultaneously provide explicit guarantees on both. In addition, as power and performance control strategies may come from different hardware software vendors and coexist at different layers, it is more feasible to coordinate various strategies to achieve the desired control objectives than relying on a single centralized control strategy. This paper proposes Co-Con, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. To emulate the current practice in data centers, the power control loop changes hardware power states with no regard to the application-level performance. The performance control loop is then designed for each virtual machine to achieve the desired performance even when the system model varies significantly due to the impact of power control. Co-Con configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability. Empirical results on a physical testbed demonstrate that Co-Con can simultaneously provide effective control on both application-level performance and underlying power consumption.",
"In this paper we show that in modern computing systems, DVFS gives much more limited energy savings with relatively high performance overhead as compared to running workloads at high speed and then transitioning into low power state. The primary reasons for this are recent advancements in platform and CPU architectures such as sophisticated memory subsystem design, and more efficient low power state support. We justify our analysis with measurements on a state of the art system using benchmarks ranging from very CPU intensive to memory intensive workloads.",
"Applying classical dynamic voltage scaling (DVS) techniques to real-time systems running on processors with discrete voltage frequency modes causes a waste of computational resources. In fact, whenever the ideal speed level computed by the DVS algorithm is not available in the system, to guarantee the feasibility of the task set, the processor speed must be set to the nearest level greater than the optimal one, thus underutilizing the system. Whenever the task set allows a certain degree of flexibility in specifying timing constraints, rate adaptation techniques can be adopted to balance performance (which is a function of task rates) versus energy consumption (which is a function of the processor speed). In this paper, we propose a new method that combines discrete DVS management with elastic scheduling to fully exploit the available computational resources. Depending on the application requirements, the algorithm can be set to improve performance or reduce energy consumption, so enhancing the flexibility of the system. A reclaiming mechanism is also used to take advantage of early completions. To make the proposed approach usable in real-world applications, the task model is enhanced to consider some of the real CPU characteristics, such as discrete voltage frequency levels, switching overhead, task execution times nonlinear with the frequency, and tasks with different power consumption. Implementation issues and experimental results for the proposed algorithm are also discussed",
"For decades, the high-performance computing (HPC) community has focused on performance, where performance is defined as speed. To achieve better performance per compute node, microprocessor vendors have not only doubled the number of transistors (and speed) every 18-24 months, but they have also doubled the power densities. Consequently, keeping a large-scale HPC system functioning properly requires continual cooling in a largemachine room, thus resulting in substantial operational costs. Furthermore, the increase in power densities has led (in part) to a decrease in system reliability, thus leading to lost productivity. To address these problems, we propose a power-aware algorithm that automatically and transparently adapts its voltage and frequency settings to achieve significant power reduction and energy savings with minimal impact on performance. Specifically, we leverage a commodity technology called \"dynamic voltage and frequency scaling\" to implement our power-aware algorithm in the run-time system of commodity HPC systems."
]
} |
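The compute-boundedness observation attributed to Hsu and Feng above — frequency changes slow CPU-bound work proportionally but barely affect memory-, network- or disk-bound work — is often captured with a two-term relative-time model. The sketch below is our own illustration of that idea (the parameter name `beta` and the exact functional form are assumptions, not the cited papers' equations): a fraction `beta` of the work scales inversely with frequency, while the remainder is frequency-insensitive.

```python
def relative_completion_time(f, f_max, beta):
    """T(f) / T(f_max) under a compute-boundedness parameter beta.

    beta = 1.0: fully CPU-bound, time scales as f_max / f.
    beta = 0.0: frequency-insensitive (e.g. disk- or network-bound).
    """
    return beta * (f_max / f) + (1.0 - beta)
```

For example, halving the frequency doubles the completion time of a fully CPU-bound task (`beta = 1`) but leaves a purely I/O-bound task (`beta = 0`) unchanged.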
1411.3201 | 2286272280 | Power consumption accounts for up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | Non-CPU-intensive applications were again neglected by @cite_14 when they proposed a performance model that depends on the CPU utilization and frequency of the server. They assumed the CPU as the bottleneck for tool and predicted the performance @math for any given utilization @math and frequency @math using the following equation. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2121574851"
],
"abstract": [
"This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on/off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50% of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved."
]
} |
1411.3201 | 2286272280 | Power consumption accounts for up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | where @math is the performance at the maximum frequency @math and CPU utilization. This model was used in Figure to emphasize the gap in existing work. In this paper, we show how our model predicts completion time more accurately than that given in @cite_14 . | {
"cite_N": [
"@cite_14"
],
"mid": [
"2121574851"
],
"abstract": [
"This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on/off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50% of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved."
]
} |
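The performance equation referenced in the two rows above is elided (`@math` placeholders), so the exact form cannot be recovered from this chunk. Purely as an assumed illustration of a CPU-bottleneck model of the kind described — performance predicted from utilization and frequency, anchored at the performance `p0` measured at the maximum frequency — one plausible sketch scales throughput linearly with both parameters. The function name and the linear form below are our assumptions, not the cited paper's equation.

```python
def predicted_performance(u, f, p0, f_max):
    """Predicted throughput at utilization u in [0, 1] and frequency f (Hz).

    CPU-bottleneck assumption (illustrative only): throughput is linear in
    both the allotted utilization and the frequency relative to f_max,
    where p0 is the performance observed at f_max.
    """
    return p0 * u * (f / f_max)
```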
1411.3201 | 2286272280 | Power consumption accounts for up to half of the operational expenses of datacenters, making power management a critical concern. Advances in processor technology provide fine-grained control over the operating frequency and voltage of processors, and this control can be used to trade off power for performance. Although many power and performance models exist, they have a significant error margin when predicting the performance of memory- or file-intensive tasks and HPC applications. Our investigations reveal that the prediction error is due in part to the fact that they do not take frequency AND CPU variations into account; rather, they just depend on the CPU by itself. In this paper, we empirically derive power and completion time models using linear regression with CPU utilization and operating frequency as parameters. We validate our power model on several Intel and AMD processors by predicting within 2-7% of measured power. We validate our completion time model using five kernels of the NASA Parallel Benchmark suite and five CPU-, memory- and file-intensive benchmarks on four heterogeneous systems, predicting within 1-6% of observed performance. We then show how these models can be employed to realize as much as 15% savings in power while delivering 44% better performance for applications deployed in a virtualized environment. | Another flavor in the literature is to use OS- or hypervisor-driven techniques to provide frequency scaling. Nathuji and Schwan, in VirtualPower @cite_8 , proposed 'Soft Scaling' - a technique in which the VM executes at the required frequency through the CPU scheduling policy rather than the hypervisor changing the frequency of the processor. Many other authors, such as @cite_17 and @cite_21 , proportionally reallocated CPU to simulate frequency changes. | {
"cite_N": [
"@cite_21",
"@cite_17",
"@cite_8"
],
"mid": [
"2167788414",
"1965947049",
"2158459299"
],
"abstract": [
"In the area of system architecture, there are two most significant trends: multi-core and system virtualization technology. Both of them have quite close relationship with energy efficient computing. Industry turns to integrate more cores on a single chip instead of increasing frequency to solve the heat problem. Meanwhile system virtualization could decrease the total power consumption by sharing the same platform among different operating systems. So the necessity of power management on the multi-core virtualization platform has become increasingly evident. However, traditional virtual machine monitor schedulers could not make efficient use of DVFS, and thus could not take it into considering that the guest OSes may run at different frequency. In order to address this problem, this paper designs a power efficient scheduler which uses the load and the power level of the guest OSes as feedback, and implements a prototype based on Xen virtual machine monitor. This scheduler allocates power credit to VCPU of each guest OS, accounts the power consumption of VCPU sat different speed levels and makes scheduler decision by integrating it to the credit oriented to CPU time slice sharing. It also uses utilization of processors as feedback and sets frequency according to the load change trends instead of the simple static relationship between load and frequency policies, so as to decrease the speed steps required by response to burst load change. Experiment results show that the scheduling fairness of guest OS improved when using DVFS as main power saving method, and the power consumption of the whole system can be reduced by 5 percent - 30 percent. Therefore, this framework for feedback scheduling could make efficient use of varieties of power saving methods and maintain ideal balance between overall system power saving and single core over heat.",
"Nowadays, virtualization is present in almost all computing infrastructures. Thanks to server consolidation and VM migration, virtualization helps in power reduction. However, modern powerful computers with higher processor frequency, multiple cores and multiple CPUs constitute the main factor contributing to the continuously increase of energy consumption in numerous computing infrastructures. In this context, energy management takes a critical importance. A hardware technology, called Dynamic Voltage and Frequency Scaling (DVFS), serves to dynamically modify the processor frequency (according to the CPU needs) in order to achieve less energy consumption. However, lowering frequency also generates poor virtual machine (VM) performance. In this paper, we propose a solution consisting of an extended VM scheduler and DVFS, and report some experiments based on this proposal. This enhanced scheduler, according to VM CPU load, dynamically scales processor frequency in order to save energy. The idea is to adapt the current VM scheduler to analyze CPU load, and modify the current processor frequency to the lowest possible, but still support the guaranteed VM performance. The algorithm is designed and simulated on a web server as the sample application and Xen as the virtualization platform. Test results and performance evaluations prove our design and implementation.",
"Power management has become increasingly necessary in large-scale datacenters to address costs and limitations in cooling or power delivery. This paper explores how to integrate power management mechanisms and policies with the virtualization technologies being actively deployed in these environments. The goals of the proposed VirtualPower approach to online power management are (i) to support the isolated and independent operation assumed by guest virtual machines (VMs) running on virtualized platforms and (ii) to make it possible to control and globally coordinate the effects of the diverse power management policies applied by these VMs to virtualized resources. To attain these goals, VirtualPower extends to guest VMs 'soft' versions of the hardware power states for which their policies are designed. The resulting technical challenge is to appropriately map VM-level updates made to soft power states to actual changes in the states or in the allocation of underlying virtualized hardware. An implementation of VirtualPower Management (VPM) for the Xen hypervisor addresses this challenge by provision of multiple system-level abstractions including VPM states, channels, mechanisms, and rules. Experimental evaluations on modern multicore platforms highlight resulting improvements in online power management capabilities, including minimization of power consumption with little or no performance penalties and the ability to throttle power consumption while still meeting application requirements. Finally, coordination of online methods for server consolidation with VPM management techniques in heterogeneous server systems is shown to provide up to 34% improvements in power consumption."
]
} |
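The "soft scaling" idea summarized in the row above — emulating a lower processor frequency by capping the CPU share a VM receives instead of actually changing the hardware frequency — can be sketched as a simple proportional cap. The function name and the bare ratio below are our illustration of the technique, not VirtualPower's actual API.

```python
def soft_scale_cap(f_target, f_max):
    """CPU-share cap (fraction of a core) that emulates running a VM at
    f_target on a core whose physical frequency is f_max."""
    return min(1.0, f_target / f_max)
```

A scheduler would then grant the VM at most `soft_scale_cap(f_target, f_max)` of a core's cycles per accounting period, approximating the throughput it would see at the lower frequency.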
1411.2643 | 2132294263 | Abstract In this paper, we introduce a new (constructive) characterization of tight wavelet frames on non-flat domains in both continuum setting, i.e. on manifolds, and discrete setting, i.e. on graphs; we discuss how fast tight wavelet frame transforms can be computed and how they can be effectively used to process graph data. We start with defining the quasi-affine systems on a given manifold M . The quasi-affine system is formed by generalized dilations and shifts of a finite collection of wavelet functions Ψ : = ψ j : 1 ≤ j ≤ r ⊂ L 2 ( R ) . We further require that ψ j is generated by some refinable function ϕ with mask a j . We present the condition needed for the masks a j : 0 ≤ j ≤ r , as well as regularity conditions needed for ϕ and ψ j , so that the associated quasi-affine system generated by Ψ is a tight frame for L 2 ( M ) . The condition needed for the masks is a simple set of algebraic equations which are not only easy to verify for a given set of masks a j , but also make the construction of a j entirely painless. Then, we discuss how the transition from the continuum (manifolds) to the discrete setting (graphs) can be naturally done. In order for the proposed discrete tight wavelet frame transforms to be useful in applications, we show how the transforms can be computed efficiently and accurately by proposing the fast tight wavelet frame transforms for graph data (WFTG). Finally, we consider two specific applications of the proposed WFTG: graph data denoising and semi-supervised clustering. Utilizing the sparse representation provided by the WFTG, we propose l 1 -norm based optimization models on graphs for denoising and semi-supervised clustering. On one hand, our numerical results show significant advantage of the WFTG over the spectral graph wavelet transform (SGWT) by [1] for both applications. 
On the other hand, numerical experiments on two real data sets show that the proposed semi-supervised clustering model using the WFTG is overall competitive with the state-of-the-art methods developed in the literature of high-dimensional data classification, and is superior to some of these methods. | Redundant systems were considered by Maggioni and Mhaskar @cite_39 , who developed a theory of diffusion polynomial frames that is related to our framework. However, they did not provide any algorithm for efficient computation of the decomposition and reconstruction transforms, which is crucial to many applications. Geller and Mayeli @cite_65 studied a construction for wavelets on compact differentiable manifolds. In particular, their scaling is defined using a pseudodifferential operator @math , where @math is the Laplace-Beltrami operator on the given manifold and @math is the scaling parameter. Wavelets are obtained by applying the pseudodifferential operator to a delta impulse. In our framework, the scaling is defined by @math where @math is a refinable function. The wavelets are obtained by applying @math , with @math and @math , to delta impulses at each point of the given manifold. Moreover, we do not need to assume the manifolds are smooth for our approach. | {
"cite_N": [
"@cite_65",
"@cite_39"
],
"mid": [
"2003334210",
"2001125076"
],
"abstract": [
"Let M be a smooth compact oriented Riemannian manifold, and let Δ_M be the Laplace–Beltrami operator on M. Say f ∈ S(R^+), and that f(0) = 0. For t > 0, let K_t(x, y) denote the kernel of f(t^2 Δ_M). We show that K_t is well-localized near the diagonal, in the sense that it satisfies estimates akin to those satisfied by the kernel of the convolution operator f(t^2 Δ) on R^n. We define continuous S-wavelets on M, in such a manner that K_t(x, y) satisfies this definition, because of its localization near the diagonal. Continuous S-wavelets on M are analogous to continuous wavelets on R^n in S(R^n). In particular, we are able to characterize the Hölder continuous functions on M by the size of their continuous S-wavelet transforms, for Hölder exponents strictly between 0 and 1. If M is the torus T^2 or the sphere S^2, and f(s) = s e^{-s} (the “Mexican hat” situation), we obtain two explicit approximate formulas for K_t, one to be used when t is large, and one to be used when t is small.",
"We construct a multiscale tight frame based on an arbitrary orthonormal basis for the L2 space of an arbitrary sigma finite measure space. The approximation properties of the resulting multiscale are studied in the context of Besov approximation spaces, which are characterized both in terms of suitable K-functionals and the frame transforms. The only major condition required is the uniform boundedness of a summability operator. We give sufficient conditions for this to hold in the context of a very general class of metric measure spaces. The theory is illustrated using the approximation of characteristic functions of caps on a dumbbell manifold, and applied to the problem of recognition of hand-written digits. Our method outperforms comparable methods for semi-supervised learning."
]
} |
1411.2953 | 1565825967 | The explosive demand for data has called for solution approaches that range from spectrally agile cognitive radios with novel spectrum sharing, to use of higher frequency spectrum as well as smaller and denser cell deployments with diverse access technologies, referred to as heterogeneous networks (HetNets). Simultaneously, advances in electronics and storage, has led to the advent of wireless devices equipped with multiple radio interfaces (e.g. WiFi, WiMAX, LTE, etc.) and the ability to store and efficiently process large amounts of data. Motivated by the convergence of HetNets and multi-platform radios, we propose HetNetwork Coding as a means to utilize the available radio interfaces in parallel along with network coding to increase wireless data throughput. Specifically we explore the use of random linear network coding at the network layer where packets can travel through multiple interfaces and be received via multihoming. Using both simulations and experimentation with real hardware on WiFi and WiMAX platforms, we study the scaling of throughput enabled by such HetNetwork coding. We find from our simulations and experiments that the use of this method increases the throughput, with greater gains achieved for cases when the system is heavily loaded or the channel quality is poor. Our results also reveal that the throughput gains achieved scale linearly with the number of radio interfaces at the nodes. | Heterogeneous networks have been studied for increasing the LTE throughput and coverage area by using a variety of cell sizes, access techniques and transmit powers @cite_18 @cite_31 . There are many technical challenges associated with HetNets such as resource allocation, interference, backhauling and handover among others and some of which is addressed by authors in @cite_16 @cite_32 . 
HetNets are being considered as a major solution for handling the huge data traffic demand in cellular networks, and methods to model them efficiently and effectively are discussed in @cite_10 . | {
"cite_N": [
"@cite_18",
"@cite_32",
"@cite_31",
"@cite_16",
"@cite_10"
],
"mid": [
"2136530738",
"",
"2012163744",
"2154782861",
"1994080576"
],
"abstract": [
"As the spectral efficiency of a point-to-point link in cellular networks approaches its theoretical limits, with the forecasted explosion of data traffic, there is a need for an increase in the node density to further improve network capacity. However, in already dense deployments in today's networks, cell splitting gains can be severely limited by high inter-cell interference. Moreover, high capital expenditure cost associated with high power macro nodes further limits viability of such an approach. This article discusses the need for an alternative strategy, where low power nodes are overlaid within a macro network, creating what is referred to as a heterogeneous network. We survey current state of the art in heterogeneous deployments and focus on 3GPP LTE air interface to describe future trends. A high-level overview of the 3GPP LTE air interface, network nodes, and spectrum allocation options is provided, along with the enabling mechanisms for heterogeneous deployments. Interference management techniques that are critical for LTE heterogeneous deployments are discussed in greater detail. Cell range expansion, enabled through cell biasing and adaptive resource partitioning, is seen as an effective method to balance the load among the nodes in the network and improve overall trunking efficiency. An interference cancellation receiver plays a crucial role in ensuring acquisition of weak cells and reliability of control and data reception in the presence of legacy signals.",
"",
"Disruptive innovations in mobile broadband system design are required to help network providers meet the exponential growth in mobile traffic demand with relatively flat revenues per bit. Heterogeneous network architecture is one of the most promising low-cost approaches to provide significant areal capacity gain and indoor coverage improvement. In this introductory article, we provide a brief overview of heterogeneous network architectures comprising hierarchical multitier multiple radio access technologies (RAT) deployments based on newer infrastructure elements. We begin with presenting possible deployment scenarios of heterogeneous networks to better illustrate the concepts of multitier and multi-RAT. We then focus on multitier deployments with single RAT and investigate the challenges associated with enabling single frequency reuse across tiers. Based on the spectrum usage, heterogeneous networks can be categorized into single carrier usage, where all devices within the network share the same spectrum, and distinct carrier usage, where different types of devices are allocated separate spectra. For single carrier usage, we show that interference management schemes are critical for reducing the resulting cross-tier interference, and present several techniques that provide significant capacity and coverage improvements. The article also describes industry trends, standardization efforts, and future research directions in this rich area of investigation.",
"3GPP LTE-Advanced has recently been investigating heterogeneous network (HetNet) deployments as a cost effective way to deal with the unrelenting traffic demand. HetNets consist of a mix of macrocells, remote radio heads, and low-power nodes such as picocells, femtocells, and relays. Leveraging network topology, increasing the proximity between the access network and the end users, has the potential to provide the next significant performance leap in wireless networks, improving spatial spectrum reuse and enhancing indoor coverage. Nevertheless, deployment of a large number of small cells overlaying the macrocells is not without new technical challenges. In this article, we present the concept of heterogeneous networks and also describe the major technical challenges associated with such network architecture. We focus in particular on the standardization activities within the 3GPP related to enhanced intercell interference coordination.",
"Imagine a world with more base stations than cell phones: this is where cellular technology is headed in 10-20 years. This mega-trend requires many fundamental differences in visualizing, modeling, analyzing, simulating, and designing cellular networks vs. the current textbook approach. In this article, the most important shifts are distilled down to seven key factors, with the implications described and new models and techniques proposed for some, while others are ripe areas for future exploration."
]
} |
1411.2953 | 1565825967 | The explosive demand for data has called for solution approaches that range from spectrally agile cognitive radios with novel spectrum sharing, to use of higher frequency spectrum as well as smaller and denser cell deployments with diverse access technologies, referred to as heterogeneous networks (HetNets). Simultaneously, advances in electronics and storage, has led to the advent of wireless devices equipped with multiple radio interfaces (e.g. WiFi, WiMAX, LTE, etc.) and the ability to store and efficiently process large amounts of data. Motivated by the convergence of HetNets and multi-platform radios, we propose HetNetwork Coding as a means to utilize the available radio interfaces in parallel along with network coding to increase wireless data throughput. Specifically we explore the use of random linear network coding at the network layer where packets can travel through multiple interfaces and be received via multihoming. Using both simulations and experimentation with real hardware on WiFi and WiMAX platforms, we study the scaling of throughput enabled by such HetNetwork coding. We find from our simulations and experiments that the use of this method increases the throughput, with greater gains achieved for cases when the system is heavily loaded or the channel quality is poor. Our results also reveal that the throughput gains achieved scale linearly with the number of radio interfaces at the nodes. | There have been some previous studies related to offloading of cellular traffic on WiFi links, such as @cite_11 , @cite_19 where authors implement the offloading only at the destination cell by choosing a set of nodes, which can receive the data from the destination base station on behalf of the destination node and then forward it to the destination node via WiFi links. 
propose a system called iCAR (integrated cellular and ad-hoc relaying system) @cite_27 , where ad-hoc relay nodes are strategically placed inside a cellular network to offload the traffic from congested cells to non-congested cells. @cite_34 suggest various schemes for using an ad-hoc network in a cellular packet data network, but in these schemes the base station is actively involved with the ad-hoc network in improving the performance. The scheme proposed in this paper does not require any explicit coordination between the cellular and WiFi networks, but instead relies on the multiplatform radio enabled wireless nodes to adapt their packet processing at the network layer. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_34",
"@cite_11"
],
"mid": [
"2139213954",
"2100958239",
"2063751039",
""
],
"abstract": [
"The paper presents an ad-hoc architecture for wireless sensor networks and other wireless systems similar to them. In this class of wireless system the physical resource at premium is energy. Bandwidth available to the system is in excess of system requirements. The approach to solve the problem of ad-hoc network formation here is to use available bandwidth in order to save energy. The method introduced solves the problem of connecting an ad-hoc network. This algorithm gives procedures for the joint formation of a time schedule (similar to a TDMA schedule) and activation of links therein for random network topologies. This self-organization method is energy-sensitive, distributed, scalable, and able to form a connected network rapidly.",
"Integrated cellular and ad hoc relaying systems (iCAR) is a new wireless system architecture based on the integration of cellular and modern ad hoc relaying technologies. It addresses the congestion problem due to unbalanced traffic in a cellular system and provides interoperability for heterogeneous networks. The iCAR system can efficiently balance traffic loads between cells by using ad hoc relaying stations (ARS) to relay traffic from one cell to another dynamically. This not only increases the system's capacity cost effectively, but also reduces the transmission power for mobile hosts and extends system coverage. We compare the performance of the iCAR system with conventional cellular systems in terms of the call blocking dropping probability, throughput, and signaling overhead via analysis and simulation. Our results show that with a limited number of ARSs and some increase in the signaling overhead (as well as hardware complexity), the call blocking dropping probability in a congested cell and the overall system can be reduced.",
"While several approaches have been proposed in literature for improving the performance of wireless packet data networks, a recent class of approaches has focused on improving the underlying wireless network model itself. Several of such approaches have shown that using peer-to-peer communication, a mode of communication used typically in ad-hoc wireless networks, can result in performance improvement in terms of both throughput and energy consumption. However, the true impact of using the ad-hoc network model in wireless packet data networks has neither been comprehensively studied, nor characterized. In this paper, we investigate the benefits of using an ad-hoc network model in cellular wireless packet data networks. We find that while the ad-hoc network model has significantly better spatial reuse characteristics, the improved spatial reuse does not translate into better throughput performance. Furthermore, although considerable improvement is seen in energy consumption performance, we observe that using the ad-hoc network model as-is might actually degrade the throughput performance of the network. We identify and discuss the reasons behind these observations. Finally, using the insights gained through our performance evaluations, we discuss strawman versions of three techniques which when used in tandem with the ad-hoc network model result in better throughput, energy consumption, fairness, and mobility-resilience characteristics. Through our simulation results, we motivate that using the ad-hoc network model in conventional wireless packet data networks is a promising approach when the network model is complemented with appropriate mechanisms.",
""
]
} |
1411.2337 | 1853111444 | Multi-task learning (MTL) improves prediction performance in different contexts by learning models jointly on multiple different, but related tasks. Network data, which are a priori data with a rich relational structure, provide an important context for applying MTL. In particular, the explicit relational structure implies that network data is not i.i.d. data. Network data also often comes with significant metadata (i.e., attributes) associated with each entity (node). Moreover, due to the diversity and variation in network data (e.g., multi-relational links or multi-category entities), various tasks can be performed and often a rich correlation exists between them. Learning algorithms should exploit all of these additional sources of information for better performance. In this work we take a metric-learning point of view for the MTL problem in the network context. Our approach builds on structure preserving metric learning (SPML). In particular SPML learns a Mahalanobis distance metric for node attributes using network structure as supervision, so that the learned distance function encodes the structure and can be used to predict link patterns from attributes. SPML is described for single-task learning on single network. Herein, we propose a multi-task version of SPML, abbreviated as MT-SPML, which is able to learn across multiple related tasks on multiple networks via shared intermediate parametrization. MT-SPML learns a specific metric for each task and a common metric for all tasks. The task correlation is carried through the common metric and the individual metrics encode task specific information. When combined together, they are structure-preserving with respect to individual tasks. MT-SPML works on general networks, thus is suitable for a wide variety of problems. In experiments, we challenge MT-SPML on two real-word problems, where MT-SPML achieves significant improvement. | There is a large body of work on MTL for i.i.d. data. 
@cite_25 applied hierarchical Bayesian modeling to nonparametric Gaussian processes, and the resulting method was used for text categorization. @cite_15 extended Support Vector Machines (SVMs) to MTL via parameter sharing, and the method was applied to learn predictive models for exam scores of student at different schools. Following the same intuition as @cite_15 , @cite_10 proposed the multi-task version of large margin nearest neighbor metric learning @cite_8 , which was tested on speech recognition. @cite_4 , applied MTL to help face recognition and image retrieval. Very recently, @cite_2 showed how multi-task deep neural network can further help phoneme recognition. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_25"
],
"mid": [
"2126288184",
"2106053110",
"2094035326",
"2143104527",
"2110994494",
"2148522164"
],
"abstract": [
"Face verification has many potential applications including filtering and ranking image/video search results on celebrities. Since these images/videos are taken under uncontrolled environments, the problem is very challenging due to dramatic lighting and pose variations, low resolutions, compression artifacts, etc. In addition, the available number of training images for each celebrity may be limited, hence learning individual classifiers for each person may cause overfitting. In this paper, we propose two ideas to meet the above challenges. First, we propose to use individual bins, instead of whole histograms, of Local Binary Patterns (LBP) as features for learning, which yields significant performance improvements and computation reduction in our experiments. Second, we present a novel Multi-Task Learning (MTL) framework, called Boosted MTL, for face verification with limited training data. It jointly learns classifiers for multiple people by sharing a few boosting classifiers in order to avoid overfitting. The effectiveness of Boosted MTL and LBP bin features is verified with a large number of celebrity images/videos from the web.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"In this paper we demonstrate how to improve the performance of deep neural network (DNN) acoustic models using multi-task learning. In multi-task learning, the network is trained to perform both the primary classification task and one or more secondary tasks using a shared representation. The additional model parameters associated with the secondary tasks represent a very small increase in the number of trained parameters, and can be discarded at runtime. In this paper, we explore three natural choices for the secondary task: the phone label, the phone context, and the state context. We demonstrate that, even on a strong baseline, multi-task learning can provide a significant decrease in error rate. Using phone context, the phonetic error rate (PER) on TIMIT is reduced from 21.63 to 20.25 on the core test set, and surpassing the best performance in the literature for a DNN that uses a standard feed-forward network architecture.",
"Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs.",
"Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (svm) by [15]. Although very elegant, multi-task svm is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.",
"We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework, that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems."
]
} |
1411.2337 | 1853111444 | Multi-task learning (MTL) improves prediction performance in different contexts by learning models jointly on multiple different, but related tasks. Network data, which are a priori data with a rich relational structure, provide an important context for applying MTL. In particular, the explicit relational structure implies that network data is not i.i.d. data. Network data also often comes with significant metadata (i.e., attributes) associated with each entity (node). Moreover, due to the diversity and variation in network data (e.g., multi-relational links or multi-category entities), various tasks can be performed and often a rich correlation exists between them. Learning algorithms should exploit all of these additional sources of information for better performance. In this work we take a metric-learning point of view for the MTL problem in the network context. Our approach builds on structure preserving metric learning (SPML). In particular SPML learns a Mahalanobis distance metric for node attributes using network structure as supervision, so that the learned distance function encodes the structure and can be used to predict link patterns from attributes. SPML is described for single-task learning on single network. Herein, we propose a multi-task version of SPML, abbreviated as MT-SPML, which is able to learn across multiple related tasks on multiple networks via shared intermediate parametrization. MT-SPML learns a specific metric for each task and a common metric for all tasks. The task correlation is carried through the common metric and the individual metrics encode task specific information. When combined together, they are structure-preserving with respect to individual tasks. MT-SPML works on general networks, thus is suitable for a wide variety of problems. In experiments, we challenge MT-SPML on two real-word problems, where MT-SPML achieves significant improvement. 
| Researchers also have been studying the problem of learning across multiple graph data for various purposes. @cite_23 improved document recommendation by finding an embedding for multiple graphs via matrix factorization. @cite_22 , attempted to do clustering jointly over different graphs. @cite_3 developed an algorithm to jointly do clustering and classification on networks. In the area of relational learning, tensor decomposition-based methods are usually applied @cite_12 for problems on multi-relational data. | {
"cite_N": [
"@cite_12",
"@cite_3",
"@cite_22",
"@cite_23"
],
"mid": [
"1888732573",
"2068965752",
"2113573459",
"2165611133"
],
"abstract": [
"We propose a modular framework for multi-relational learning via tensor decomposition. In our learning setting, the training data contains multiple types of relationships among a set of objects, which we represent by a sparse three-mode tensor. The goal is to predict the values of the missing entries. To do so, we model each relationship as a function of a linear combination of latent factors. We learn this latent representation by computing a low-rank tensor decomposition, using quasi-Newton optimization of a weighted objective function. Sparsity in the observed data is captured by the weighted objective, leading to improved accuracy when training data is limited. Exploiting sparsity also improves efficiency, potentially up to an order of magnitude over unweighted approaches. In addition, our framework accommodates arbitrary combinations of smooth, task-specific loss functions, making it better suited for learning different types of relations. For the typical cases of real-valued functions and binary relations, we propose several loss functions and derive the associated parameter gradients. We evaluate our method on synthetic and real data, showing significant improvements in both accuracy and scalability over related factorization techniques.",
"With the rapid proliferation of online social networks, the need for newer class of learning algorithm to simultaneously deal with multiple related networks has become increasingly important. This paper proposes an approach for multi-task learning in multiple related networks, where in we perform different tasks such as classification on one network and clustering on the other. We show that the framework can be extended to incorporate prior information about the correspondences between the clusters and classes in different networks. We have performed experiments on real-world data sets to demonstrate the effectiveness of the proposed framework.",
"In graph-based learning models, entities are often represented as vertices in an undirected graph with weighted edges describing the relationships between entities. In many real-world applications, however, entities are often associated with relations of different types and/or from different sources, which can be well captured by multiple undirected graphs over the same set of vertices. How to exploit such multiple sources of information to make better inferences on entities remains an interesting open problem. In this paper, we focus on the problem of clustering the vertices based on multiple graphs in both unsupervised and semi-supervised settings. As one of our contributions, we propose Linked Matrix Factorization (LMF) as a novel way of fusing information from multiple graph sources. In LMF, each graph is approximated by matrix factorization with a graph-specific factor and a factor common to all graphs, where the common factor provides features for all vertices. Experiments on SIAM journal data show that (1) we can improve the clustering accuracy through fusing multiple sources of information with several models, and (2) LMF yields superior or competitive results compared to other graph-based clustering methods.",
"The Web offers rich relational data with different semantics. In this paper, we address the problem of document recommendation in a digital library, where the documents in question are networked by citations and are associated with other entities by various relations. Due to the sparsity of a single graph and noise in graph construction, we propose a new method for combining multiple graphs to measure document similarities, where different factorization strategies are used based on the nature of different graphs. In particular, the new method seeks a single low-dimensional embedding of documents that captures their relative similarities in a latent space. Based on the obtained embedding, a new recommendation framework is developed using semi-supervised learning on graphs. In addition, we address the scalability issue and propose an incremental algorithm. The new incremental method significantly improves the efficiency by calculating the embedding for new incoming documents only. The new batch and incremental methods are evaluated on two real world datasets prepared from CiteSeer. Experiments demonstrate significant quality improvement for our batch method and significant efficiency improvement with tolerable quality loss for our incremental method."
]
} |
1411.2893 | 2950796003 | The spreading of unsubstantiated rumors on online social networks (OSN) either unintentionally or intentionally (e.g., for political reasons or even trolling) can have serious consequences such as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of contents by analyzing a sample of 1.2M Facebook Italian users consuming different (and opposite) types of information (science and conspiracy news). We show that users' engagement across different contents correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of contents are more likely to spread. Then, we test diffusion patterns on an external sample of @math intentional satirical false claims showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we found out that in an environment where misinformation is pervasive, users' aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant for the virality of false information. | Recent studies settled on Facebook aimed at unfolding cascades characteristics @cite_47 and predicting their trajectories and shapes @cite_5 . As for the characteristics, it is found that a small but significant fraction of posts forms wide and deep cascades and that different cascades may evolve in different ways. Many aspects of cascades' behavior -- e.g., under which structural and user-constrained properties is possible to predict them -- are hard tasks that have not been completely exploited. 
In recent years, a new online phenomenon has attracted the interest of the research community: the spreading of unsubstantiated and false claims through OSNs (such as Facebook), which often reverberate and lead to mass misinformation. The study in @cite_19 is a detailed analysis of information consumption by Facebook users on different categories of pages: alternative information sources, political activism and mainstream media. The authors pointed out evidence that mainstream media information reverberates as long as unsubstantiated information does, and that exposure to the latter makes users more likely to interact with intentionally injected false information. More recently, in @cite_21 it has been shown that the exposure to debunking posts might increase the engagement of users in consuming conspiracy information. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_47",
"@cite_21"
],
"mid": [
"2039222427",
"",
"90853004",
"170316166"
],
"abstract": [
"How 2.3 million Facebook users consumed different information. Qualitatively different information is consumed in a similar way. Users more prone to interact with false claims are usually exposed to conspiracy rumors. In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interests they pertain to, in (a) alternative information sources (diffusing topics that are neglected by science and main stream media); (b) online political activism; and (c) main stream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we classify users according to their interaction patterns among the different topics and measure how they responded to the injection of 2788 false information. Our analysis reveals that users which are prominently interacting with conspiracist information sources are more prone to interact with intentional false claims.",
"",
"When users post photos on Facebook, they have the option of allowing their friends, followers, or anyone at all to subsequently reshare the photo. A portion of the billions of photos posted to Facebook generates cascades of reshares, enabling many additional users to see, like, comment, and reshare the photos. In this paper we present characteristics of such cascades in aggregate, finding that a small fraction of photos account for a significant proportion of reshare activity and generate cascades of non-trivial size and depth. We also show that the true influence chains in such cascades can be much deeper than what is visible through direct attribution. To illuminate how large cascades can form, we study the diffusion trees of two widely distributed photos: one posted on President Barack Obama’s page following his reelection victory, and another posted by an individual Facebook user hoping to garner enough likes for a cause. We show that the two cascades, despite achieving comparable total sizes, are markedly different in their time evolution, reshare depth distribution, predictability of subcascade sizes, and the demographics of users who propagate them. The findings suggest not only that cascades can achieve considerable size but that they can do so in distinct ways.",
"Despite the enthusiastic rhetoric about the so called collective intelligence, conspiracy theories – e.g. global warming induced by chemtrails or the link between vaccines and autism – find on the Web a natural medium for their dissemination. Users preferentially consume information according to their system of beliefs and the strife within users of opposite worldviews (e.g., scientific and conspiracist) may result in heated debates. In this work we provide a genuine example of information consumption on a set of 1.2 million of Facebook Italian users. We show by means of a thorough quantitative analysis that information supporting different worldviews – i.e. scientific and conspiracist news – are consumed in a comparable way. Moreover, we measure the effect of 4709 evidently false information (satirical version of conspiracist stories) and 4502 debunking memes (information aiming at contrasting unsubstantiated rumors) on polarized users of conspiracy claims."
]
} |
1411.2499 | 2066882247 | The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We show that knowledge base dynamics has an interesting connection with kernel change via hitting set and abduction. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and magic set that is focussed on the goal of minimality. Many different view update algorithms have been proposed in the literature to address this problem. The present paper provides a comparative study of view update algorithms in a rational approach. | We begin by recalling previous work on view deletion. Chandrabose @cite_94 @cite_12 and Delhibabu @cite_42 @cite_73 @cite_103 define a contraction and revision operator for view deletion with respect to a set of formulae or sentences, using Hansson's @cite_100 belief change. Similar to our approach, they focus on sets of formulae or sentences in knowledge base revision for view updates with respect to insertion and deletion, and formulae are considered at the same level. Chandrabose proposed different ways to change a knowledge base via database deletion only, devising a particular postulate which is shown to be necessary and sufficient for such an update process. | {
"cite_N": [
"@cite_73",
"@cite_42",
"@cite_94",
"@cite_100",
"@cite_103",
"@cite_12"
],
"mid": [
"2104168313",
"2145398873",
"2015512057",
"156961058",
"2125194189",
""
],
"abstract": [
"The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We introduced the knowledge base dynamics to deal with two important points: first, to handle belief states that need not be deductively closed; and the second point is the ability to declare certain parts of the belief as immutable. In this paper, we address another, radically new approach to this problem. This approach is very close to the Hansson's dyadic representation of belief. Here, we consider the immutable part as defining a new logical system. By a logical system, we mean that it defines its own consequence relation and closure operator. Based on this, we provide an abductive framework for knowledge base dynamics.",
"The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In this paper, we argue that to apply rationality result of belief dynamics theory to various practical problems, it should be generalized in two respects: first of all, it should allow a certain part of belief to be declared as immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented, along with the concept of a generalized revision algorithm for Horn knowledge bases. We show that Horn knowledge base dynamics has interesting connection with kernel change and abduction. Finally, we also show that both variants are rational in the sense that they satisfy certain rationality postulates stemming from philosophical works on belief dynamics.",
"In this paper, we introduce a new concept of generalized partial meet contraction for contracting a sentence from a belief base. We show that a special case of belief dynamics, referred to as knowledge base dynamics, where certain part of the belief base is declared to be immutable, has interesting connections with abduction, thus enabling us to use abductive procedures to realize contractions. Finally, an important application of knowledge base dynamics in providing an axiomatic characterization for deleting view atoms from databases is discussed in detail.",
"",
"The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In order to apply the rationality result of belief dynamics theory to various practical problems, it should be generalized in two respects: first it should allow a certain part of belief to be declared as immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented in this paper, along with the concept of a generalized revision algorithm for knowledge bases (Horn or Horn logic with stratified negation). We show that knowledge base dynamics has an interesting connection with kernel change via hitting set and abduction. In this paper, we show how techniques from disjunctive logic programming can be used for efficient (deductive) database updates. The key idea is to transform the given database together with the update request into a disjunctive (datalog) logic program and apply disjunctive techniques (such as minimal model reasoning) to solve the original update problem. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and magic set that is focused on the goal of minimality.",
""
]
} |
1411.2499 | 2066882247 | The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We show that knowledge base dynamics has an interesting connection with kernel change via hitting set and abduction. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and magic set that is focussed on the goal of minimality. Many different view update algorithms have been proposed in the literature to address this problem. The present paper provides a comparative study of view update algorithms in a rational approach. | On the other hand, we are dealing with the view update problem. Keller's thesis @cite_29 is the motivation for the view update problem. There are many papers on the view update problem (for example, a recent survey on view updates by Chen and Liao @cite_58 , a survey of view update algorithms by Mayol and Teniente @cite_52 , and current surveys on view selection @cite_74 @cite_85 @cite_92 @cite_67 @cite_88 @cite_14 ). More similar to our work is the paper presented by @cite_8 , a local search-based heuristic technique that empirically proves to be often viable, even in the context of very large propositional applications. @cite_30 presented an approach to updating deductive databases in which every insertion or deletion of a fact can be performed in a deterministic way. | {
"cite_N": [
"@cite_30",
"@cite_67",
"@cite_14",
"@cite_8",
"@cite_92",
"@cite_29",
"@cite_85",
"@cite_52",
"@cite_74",
"@cite_88",
"@cite_58"
],
"mid": [
"2026124000",
"2124751909",
"1793365533",
"1577599253",
"2505626938",
"1483162910",
"2152191782",
"2123859812",
"2044279394",
"2081417442",
"2054490363"
],
"abstract": [
"Abstract We present an approach to updating deductive databases in which every insertion or deletion of a fact (atomic formula without variables) can be performed in a deterministic way. The main features of our approach are the following: (i) the inserted and deleted facts may concern any predicate of the underlying alphabet—not just extensional predicates, and (ii) deleted facts are explicitly stored in the database. We show that logic programs in our approach can be associated with well-founded semantics. Moreover, as the explicit storage of deleted facts introduces a significant overhead, we also study the problem of storage optimization.",
"A data warehouse stores information that is collected from multiple, heterogeneous information sources for the purpose of complex querying and analysis. Information in the warehouse is typically stored in the form of materialized views, which represent pre-computed portions of frequently asked queries. One of the most important tasks when designing a warehouse is the selection of materialized views to be maintained in the warehouse. The goal is to select a set of views in such a way as to minimize the total query response time over all queries, given a limited amount of time for maintaining the views (maintenance-cost view selection problem). In this paper, we propose an efficient solution to the maintenance-cost view selection problem using a genetic algorithm for computing a near-optimal set of views. Specifically, we explore the maintenance-cost view selection problem in the context of OR view graphs. We show that our approach represents a dramatic improvement in time complexity over existing search-ba...",
"Data Warehouse applications use a large number of materialized views to assist a Data Warehouse to perform well. But how to select views to be materialized is challenging. Several heuristic algorithms have been proposed in the past to tackle with this problem. In this paper, we propose a completely different approach, Genetic Algorithm, to choose materialized views and demonstrate that it is practical and effective compared with heuristic approaches.",
"In this paper, a new syntax-based approach to belief revision is presented. It is developed within a nonmonotonic framework that allows a two-steps handling of inconsistency to be adopted. First, a disciplined use of non-monotonic ingredients is made available to the knowledge engineer to prevent many inconsistencies that would occur if a standard logical interpretation and representation of beliefs were conducted. Remaining inconsistencies are considered unexpected and revised by weakening the formulas occurring in any minimally inconsistent subbase, as if they were representing exceptional cases that do not actually occur. While the computation of revised knowledge bases remains intractable in the worst case, our approach benefits from an efficient local search-based heuristic technique that empirically proves often viable, even in the context of very large propositional applications.",
"",
"",
"The problem of answering queries using views is to find efficient methods of answering a query using a set of previously defined materialized views over the database, rather than accessing the database relations. The problem has recently received significant attention because of its relevance to a wide variety of data management problems. In query optimization, finding a rewriting of a query using a set of materialized views can yield a more efficient query execution plan. To support the separation of the logical and physical views of data, a storage schema can be described using views over the logical schema. As a result, finding a query execution plan that accesses the storage amounts to solving the problem of answering queries using views. Finally, the problem arises in data integration systems, where data sources can be described as precomputed views over a mediated schema. This article surveys the state of the art on the problem of answering queries using views, and synthesizes the disparate works into a coherent framework. We describe the different applications of the problem, the algorithms proposed to solve it and the relevant theoretical results.",
"During the process of updating a database, two interrelated problems could arise. On one hand, when an update is applied to the database, integrity constraints could become violated, thus falsifying database consistency. In this case, the integrity constraint maintenance approach tries to obtain additional updates to be applied to re-establish database consistency. On the other hand, when an update request consist on updating some derived predicate, a view updating mechanism must be applied to translate the update request into correct updates on the underlying base facts. In this paper, we propose a general framework to compare and classify current methods in the field of view updating and integrity constraint maintenance. In this sense, we classify them considering how they tackle with both problems and, we also state the main drawbacks these methods have.",
"",
"Materialized view selection is a critical problem in many applications such as query processing, data warehousing, distributed and semantic web databases, etc. We refer to the problem of selecting an appropriate set of materialized views as the view selection problem. Many different view selection methods have been proposed in the literature to address this issue. The present paper provides a survey of view selection methods. It defines a framework for highlighting the view selection problem by identifying the main dimensions that are the basis in the classification of view selection methods. Based on this classification, this study reviews most of the view selection methods by identifying respective potentials and limits.",
"XML has become the de facto standard for representing and interchanging data in web-based applications. And XML view, a virtual window for specified users, has been widely applied. In practical system, users encounter the so-called view update problem when they need update source data through the view. For a long time, the view update problem is an open question in database community. With the development of various data models, the corresponding view update problem has been widely researched. In this paper, we introduce the conception of view update problem. We survey and compare previous methods for resolving it. Especially, we emphasize the role of semantics. Focusing on the problem in XML context, we analyze it and propose a framework, which collects the semantic information at view definition time."
]
} |
1411.2499 | 2066882247 | The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. We show that knowledge base dynamics has an interesting connection with kernel change via hitting set and abduction. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and magic set that is focussed on the goal of minimality. Many different view update algorithms have been proposed in the literature to address this problem. The present paper provides a comparative study of view update algorithms in a rational approach. | Furthermore, and at first sight more related to our work, some work has been done on ontology systems and description logics (Qi and Yang @cite_35 , and Kogalovsky @cite_38 ). Finally, when we presented the connection between belief update and database update, we did not discuss complexity (see the works of Liberatore @cite_99 @cite_1 , Caroprese @cite_34 , Calvanese @cite_11 , and Cong @cite_82 ). | {
"cite_N": [
"@cite_99",
"@cite_35",
"@cite_38",
"@cite_82",
"@cite_1",
"@cite_34",
"@cite_11"
],
"mid": [
"1984302133",
"1521011401",
"2078954026",
"2117854600",
"1623257973",
"1496385246",
"2039781767"
],
"abstract": [
"Abstract Belief revision and belief update are two different forms of belief change, and they serve different purposes. In this paper we focus on belief update, the formalization of change in beliefs due to changes in the world. The complexity of the basic update (introduced by Winslett, 1990) has been determined by Eiter and Gottlob (1992). Since then, many other formalizations have been proposed to overcome the limitations and drawbacks of Winslett's update. In this paper we analyze the complexity of the proposals presented in the literature: the standard semantics by Winslett (1986), the minimal change with exception and the minimal change with maximal disjunctive information by Zhang and Foo (1996), the update with disjunctive function by Herzig (1996), the abduction-based update and the generalized update by Boutilier (1996). We relate some of these approaches to belief update to previous work on closed world reasoning.",
"This invention relates to elbow crutches and provides a crutch to enable a patient with leg disability to raise himself from a sitting position to a standing position. Basically the invention consists in an elbow crutch, the length of which is adjustable under the control of a manually operable lever. In particular, the crutch comprises two telescopically arranged tubular members and a spring urging one tubular member outwardly with respect to the other. A rack and pawl are provided to prevent relative movement between the two members, and the pawl is controlled by the manually operable lever. Normally a patient will use a pair of crutches in accordance with the invention, and will initially operate the lever of each crutch to reduce the length of the crutch to the minimum by pressing the crutch on the floor. He can then raise himself by supporting his weight on the two crutches alternately and operating the control lever of the crutch which is not supporting his weight to allow that crutch to extend.",
"Studies aimed at ensuring semantic access to databases have a long history and originated at early stages of database technology development. Unfortunately, they have not led yet to the creation of widely accepted industrial technologies. In the last decade, the activity of the W3C consortium in the field of Semantic Web and development of standards of the ontology description languages induced a new activity wave in developing tools for systems of semantic access to databases and a new class of database systems, the so-called ontology-based data access (OBDA) systems. In such systems, ontology is used as a conceptual schema of the subject domain and as a basis of the user interface for SQL database systems. Approaches proposed in recent years do not ensure \"final\" solution of the problem. Nevertheless, ontology description languages were created that make it possible to achieve an acceptable compromise between their expressiveness, which remains sufficient for many applications, and computational complexity of reasoning on ontologies and processing queries to data stored in large databases. Prerequisites have been created for appearance of industrial technologies for development of systems of the above-specified class. In the paper, a survey of recent basic results and developments in this field is presented.",
"This paper investigates three problems identified in [1] for annotation propagation, namely, the view side-effect, source side-effect, and annotation placement problems. Given annotations entered for a tuple or an attribute in a view, these problems ask what tuples or attributes in the source have to be annotated to produce the view annotations. As observed in [1], these problems are fundamental not only for data provenance but also for the management of view updates. For an annotation attached to a single existing tuple in a view, it has been shown that these problems are often intractable even for views defined in terms of simple SPJU queries [1]. We revisit these problems by considering several dichotomies: (1) views defined in various subclasses of SPJU, versus SPJU views under a practical key preserving condition; (2) annotations attached to existing tuples in a view versus annotations on tuples to be inserted into the view; and (3) a single-tuple annotation versus a group of annotations. We provide a complete picture of intractability and tractability for the three problems in all these settings. We show that key preserving views often simplify the propagation analysis. Indeed, some problems become tractable for certain key preserving views, as opposed to the intractability of their counterparts that are not key preserving. However, group annotations often make the analysis harder. In addition, the problems have quite diverse complexity when annotations are attached to existing tuples in a view and when they are entered for tuples to be inserted into the view.",
"A propositional knowledge base can be seen as a compact representation of a set of models. When a knowledge base T is updated with a formula P, the resulting set of models can be represented in two ways: either by a theory T' that is equivalent to T⋄P or by the pair 〈T,P〉. The second representation can be super-polynomially more compact than the first. In this paper, we prove that the compactness of this representation depends on the specific semantics of ⋄, e.g., Winslett's semantics is more compact than Ginsberg's.",
"This paper introduces and studies a declarative framework for updating views over indefinite databases. An indefinite database is a database with null values that are represented, following the standard database approach, by a single null constant. The paper formalizes views over such databases as indefinite deductive databases, and defines for them several classes of database repairs that realize view-update requests. Most notable is the class of constrained repairs. Constrained repairs change the database \"minimally\" and avoid making arbitrary commitments. They narrow down the space of alternative ways to fulfill the view-update request to those that are grounded, in a certain strong sense, in the database, the view and the view-update request.",
"View-based query answering is the problem of answering a query based only on the precomputed answers to a set of views. While this problem has been widely investigated in databases, it is largely unexplored in the context of Description Logic ontologies. Differently from traditional databases, Description Logics may express several forms of incomplete information, and this poses challenging problems in characterizing the semantics of views. In this paper, we first present a general framework for view-based query answering, where we address the above semantical problems by providing two notions of view-based query answering over ontologies, all based on the idea that the precomputed answers to views are the certain answers to the corresponding queries. We also relate such notions to privacy-aware access to ontologies. Then, we provide decidability results, algorithms, and data complexity characterizations for view-based query answering in several Description Logics, ranging from those with limited modeling capability to highly expressive ones."
]
} |
1411.2047 | 1883496636 | The amount of software running on mobile devices is constantly growing as consumers and industry purchase more battery powered devices. On the other hand, tools that provide developers with feedback on how their software changes affect battery life are not widely available. This work employs Green Mining, the study of the relationship between energy consumption and software changesets, and n-gram language models to evaluate if source code changeset perplexity correlates with change in energy consumption. A correlation between perplexity and change in energy consumption would permit the development of a tool that predicts the impact a code changeset may have on a software application's energy consumption. The case study results show that there is weak to no correlation between cross entropy and change in energy consumption. Therefore, future areas of investigation are proposed. | Green Mining practitioners have investigated software changesets for features that correlate with change in software energy consumption. One such work @cite_12 shows that lines of code in a changeset do not correlate with change in energy consumption. In another study @cite_6 , the object-oriented metrics Number of Children (NOC) and Depth of Inheritance Tree (DIT) were found to have a rank-correlation with mean energy consumption. Aggarwal @cite_1 show that a relationship can exist between changes in syscall profiles and energy profiles. There are still many features of software changesets to be investigated. | {
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_12"
],
"mid": [
"2250533127",
"2131704937",
"2127822338"
],
"abstract": [
"Battery is a critical resource for smartphones. Software developers as the builders and maintainers of applications, are responsible for updating and deploying energy efficient applications to end users. Unfortunately, the impact of software change on energy consumption is still unclear. Estimation based on software metrics has proved difficult. As energy consumption profiling requires special infrastructure, developers have difficulty assessing the impact of their actions on energy consumption. System calls are the interface between applications and the OS kernel and provide insight into how software utilizes hardware and software resources. As profiling system calls requires no specialized infrastructure, unlike energy consumption, it is much easier for the developers to track changes to system calls. Thus we relate software change to energy consumption by tracing the changes in an application's pattern of system call invocations. We find that significant changes to system call profiles often induce significant changes in energy consumption.",
"Power consumption is becoming more and more important with the increased popularity of smart-phones, tablets and laptops. The threat of reducing a customer's battery-life now hangs over the software developer who asks, \"will this next change be the one that causes my software to drain a customer's battery?\" One solution is to detect power consumption regressions by measuring the power usage of tests, but this is time-consuming and often noisy. An alternative is to rely on software metrics that allow us to estimate the impact that a change might have on power consumption thus relieving the developer from expensive testing. This paper presents a general methodology for investigating the impact of software change on power consumption, we relate power consumption to software changes, and then investigate the impact of static OO software metrics on power consumption. We demonstrated that software change can effect power consumption using the Firefox web-browser and the Azureus Vuze BitTorrent client. We found evidence of a potential relationship between some software metrics and power consumption. In conclusion, we explored the effect of software change on power consumption on two projects; and we provide an initial investigation on the impact of software metrics on power consumption.",
"Power consumption is increasingly becoming a concern for not only electrical engineers, but for software engineers as well, due to the increasing popularity of new power-limited contexts such as mobile-computing, smart-phones and cloud-computing. Software changes can alter software power consumption behaviour and can cause power performance regressions. By tracking software power consumption we can build models to provide suggestions to avoid power regressions. There is much research on software power consumption, but little focus on the relationship between software changes and power consumption. Most work measures the power consumption of a single software task; instead we seek to extend this work across the history (revisions) of a project. We develop a set of tests for a well established product and then run those tests across all versions of the product while recording the power usage of these tests. We provide and demonstrate a methodology that enables the analysis of power consumption performance for over 500 nightly builds of Firefox 3.6; we show that software change does induce changes in power consumption. This methodology and case study are a first step towards combining power measurement and mining software repositories research, thus enabling developers to avoid power regressions via power consumption awareness."
]
} |
1411.2045 | 2950255726 | The problem of f-divergence estimation is important in the fields of machine learning, information theory, and statistics. While several nonparametric divergence estimators exist, relatively few have known convergence properties. In particular, even for those estimators whose MSE convergence rates are known, the asymptotic distributions are unknown. We establish the asymptotic normality of a recently proposed ensemble estimator of f-divergence between two distributions from a finite number of samples. This estimator has an MSE convergence rate of O(1/T), is simple to implement, and performs well in high dimensions. This theory enables us to perform divergence-based inference tasks such as testing equality of pairs of distributions based on empirical samples. We experimentally validate our theoretical results and, as an illustration, use them to empirically bound the best achievable classification error. | Estimators for some @math -divergences already exist. For example, Póczos & Schneider @cite_1 and @cite_16 provided consistent @math -nn estimators for the Rényi- @math and KL divergences, respectively. Consistency has been proven for other mutual information and divergence estimators based on plug-in histogram schemes @cite_10 @cite_32 @cite_17 @cite_9 . @cite_28 provided an estimator for the Rényi- @math divergence but assumed that one of the densities was known. However, none of these works studies the convergence rates of their estimators, nor do they derive the asymptotic distributions. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2060677448",
"1974829031",
"121168560",
"2149268774",
"2150879893",
"2127234432",
"2171585891"
],
"abstract": [
"This article presents applications of entropic spanning graphs to imaging and feature clustering applications. Entropic spanning graphs span a set of feature vectors in such a way that the normalized spanning length of the graph converges to the entropy of the feature distribution as the number of random feature vectors increases. This property makes these graphs naturally suited to applications where entropy and information divergence are used as discriminants: texture classification, feature clustering, image indexing, and image registration. Among other areas, these problems arise in geographical information systems, digital libraries, medical information processing, video indexing, multisensor fusion, and content-based retrieval.",
"The Darbellay–Vajda partition scheme is a well-known method to estimate the information dependency. This estimator belongs to a class of data-dependent partition estimators. We would like to prove that, under some simple conditions, the Darbellay–Vajda partition estimator is a strongly consistent estimator of the information dependency of a bivariate random vector. This result is an extension of the work of Silva and Narayanan (2010a, 2010b), which gives some simple conditions to confirm that Gessaman's partition estimator and the tree-quantization partition estimator, other estimators in the class of data-dependent partition estimators, are strongly consistent.",
"We propose new nonparametric, consistent Renyi-α and Tsallis-α divergence estimators for continuous distributions. Given two independent and identically distributed samples, a “naive” approach would be to simply estimate the underlying densities and plug the estimated densities into the corresponding formulas. Our proposed estimators, in contrast, avoid density estimation completely, estimating the divergences directly using only simple k-nearest-neighbor statistics. We are nonetheless able to prove that the estimators are consistent under certain conditions. We also describe how to apply these estimators to mutual information and demonstrate their efficiency via numerical experiments.",
"We present a universal estimator of the divergence D(P||Q) for two arbitrary continuous distributions P and Q satisfying certain regularity conditions. This algorithm, which observes independent and identically distributed (i.i.d.) samples from both P and Q, is based on the estimation of the Radon-Nikodym derivative dP/dQ via a data-dependent partition of the observation space. Strong convergence of this estimator is proved with an empirically equivalent segmentation of the space. This basic estimator is further improved by adaptive partitioning schemes and by bias correction. The application of the algorithms to data with memory is also investigated. In the simulations, we compare our estimators with the direct plug-in estimator and estimators based on other partitioning approaches. Experimental results show that our methods achieve the best convergence performance in most of the tested cases.",
"A new universal estimator of divergence is presented for multidimensional continuous densities based on k-nearest-neighbor (k-NN) distances. Assuming independent and identically distributed (i.i.d.) samples, the new estimator is proved to be asymptotically unbiased and mean-square consistent. In experiments with high-dimensional data, the k-NN approach generally exhibits faster convergence than previous algorithms. It is also shown that the speed of convergence of the k-NN method can be further improved by an adaptive choice of k.",
"We demonstrate that it is possible to approximate the mutual information arbitrarily closely in probability by calculating the relative frequencies on appropriate partitions and achieving conditional independence on the rectangles of which the partitions are made. Empirical results, including a comparison with maximum-likelihood estimators, are presented.",
"This work studies the problem of information divergence estimation based on data-dependent partitions. A histogram-based data-dependent estimate is proposed, adopting a version of the Barron-type histogram-based estimate. The main result is the stipulation of sufficient conditions on the partition scheme to make the estimate strongly consistent. Furthermore, when the distributions are equipped with density functions in (R^d, B(R^d)), we obtain sufficient conditions that guarantee a density-free strongly consistent information divergence estimate. In this context, the result is presented for two emblematic partition schemes: the statistically equivalent blocks (Gessaman's data-driven partition) and data-dependent tree-structured vector quantization (TSVQ)."
]
} |
1411.2156 | 2951988149 | Ubiquity of Internet-connected and sensor-equipped portable devices sparked a new set of mobile computing applications that leverage the proliferating sensing capabilities of smart-phones. For many of these applications, accurate estimation of the user heading, as compared to the phone heading, is of paramount importance. This is of special importance for many crowd-sensing applications, where the phone can be carried in arbitrary positions and orientations relative to the user body. Current state-of-the-art systems focus mainly on estimating the phone orientation, require the phone to be placed in a particular position, require user intervention, and/or do not work accurately indoors, which limits their ubiquitous usability in different applications. In this paper we present Humaine, a novel system to reliably and accurately estimate the user orientation relative to the Earth coordinate system. Humaine requires no prior configuration or user intervention and works accurately indoors and outdoors for arbitrary cell phone positions and orientations relative to the user body. The system applies statistical analysis techniques to the inertial sensors widely available on today's cell phones to estimate both the phone and user orientation. Implementation of the system on different Android devices with 170 experiments performed at different indoor and outdoor testbeds shows that Humaine significantly outperforms the state-of-the-art in diverse scenarios, achieving a median accuracy of @math averaged over a wide variety of phone positions. This is @math better than the state-of-the-art. The accuracy is bounded by the error in the inertial sensor readings and can be enhanced with more accurate sensors and sensor fusion. | In @cite_8 , the authors used inertial sensors, a wearable camera, and an inertial head tracker. The forward direction is determined by testing whether the slope of the vertical acceleration at the peak of the forward acceleration is increasing.
This algorithm requires sensors to be attached to the torso for correct detection of the acceleration patterns, limiting its applicability to other phone positions. Also, using a wearable camera imposes further limitations on the applicability of the technique and on the environment (e.g. lighting conditions). | {
"cite_N": [
"@cite_8"
],
"mid": [
"1482049143"
],
"abstract": [
"In this paper, we present a wearable augmented reality (AR) system with personal positioning based on walking locomotion analysis that allows a user to freely move around indoors and outdoors. The user is equipped with self-contained sensors, a wearable camera, an inertial head tracker and display. The system is based on the sensor fusion of estimates for relative displacement caused by human walking locomotion and estimates for absolute position and orientation within a Kalman filtering framework. The former is based on intensive analysis of human walking behavior using self-contained sensors. The latter is based on image matching of video frames from a wearable camera with an image database that was prepared beforehand."
]
} |
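The slope test described above fits in a few lines. This toy version is our own construction, not the authors' implementation; it assumes uniformly sampled vertical and forward acceleration signals.

```python
def forward_sign(vert_acc, fwd_acc):
    """Toy version of the disambiguation test sketched in @cite_8: locate
    the peak of the forward acceleration and report +1 if the
    vertical-acceleration slope there is increasing (forward), else -1."""
    i = max(range(1, len(fwd_acc) - 1), key=lambda j: fwd_acc[j])
    slope = vert_acc[i + 1] - vert_acc[i - 1]  # central difference
    return 1 if slope > 0 else -1
```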
1411.2156 | 2951988149 | Ubiquity of Internet-connected and sensor-equipped portable devices sparked a new set of mobile computing applications that leverage the proliferating sensing capabilities of smart-phones. For many of these applications, accurate estimation of the user heading, as compared to the phone heading, is of paramount importance. This is of special importance for many crowd-sensing applications, where the phone can be carried in arbitrary positions and orientations relative to the user body. Current state-of-the-art systems focus mainly on estimating the phone orientation, require the phone to be placed in a particular position, require user intervention, and/or do not work accurately indoors, which limits their ubiquitous usability in different applications. In this paper we present Humaine, a novel system to reliably and accurately estimate the user orientation relative to the Earth coordinate system. Humaine requires no prior configuration or user intervention and works accurately indoors and outdoors for arbitrary cell phone positions and orientations relative to the user body. The system applies statistical analysis techniques to the inertial sensors widely available on today's cell phones to estimate both the phone and user orientation. Implementation of the system on different Android devices with 170 experiments performed at different indoor and outdoor testbeds shows that Humaine significantly outperforms the state-of-the-art in diverse scenarios, achieving a median accuracy of @math averaged over a wide variety of phone positions. This is @math better than the state-of-the-art. The accuracy is bounded by the error in the inertial sensor readings and can be enhanced with more accurate sensors and sensor fusion. | Recently, researchers have focused on using standard cell phone sensors to detect the user heading direction. In @cite_28 , the system .
The system leverages the periodicity of the leg movement during walking to identify a point during each step where the relative orientation of the phone to the user's body is the same as in the initial standing state. The system uses a particle filter to mitigate the magnetic field noise effect. However, this particle filter requires a map of the building, which may not be ubiquitously available especially for crowd-sensing applications. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2100045669"
],
"abstract": [
"This paper addresses reliable and accurate indoor localization using inertial sensors commonly found on commodity smartphones. We believe indoor positioning is an important primitive that can enable many ubiquitous computing applications. To tackle the challenges of drifting in estimation, sensitivity to phone position, as well as variability in user walking profiles, we have developed algorithms for reliable detection of steps and heading directions, and accurate estimation and personalization of step length. We've built an end-to-end localization system integrating these modules and an indoor floor map, without the need for infrastructure assistance. We demonstrated for the first time a meter-level indoor positioning system that is infrastructure free, phone position independent, user adaptive, and easy to deploy. We have conducted extensive experiments on users with smartphone devices, with over 50 subjects walking over an aggregate distance of over 40 kilometers. Evaluation results showed our system can achieve a mean accuracy of 1.5m for the in-hand case and 2m for the in-pocket case in a 31m×15m testing area."
]
} |
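Several of the systems compared in these rows are evaluated against a PCA baseline: project the acceleration onto the horizontal plane and take the principal component as the walking axis. A minimal closed-form sketch for the 2-D case (our own illustration; note the inherent 180-degree forward/backward ambiguity, which is exactly what the step-phase tricks above resolve):

```python
import math

def pca_heading(ax, ay):
    """Walking-axis estimate (degrees, mod 180) as the principal component
    of horizontal acceleration samples; closed form for a 2x2 covariance."""
    n = len(ax)
    mx, my = sum(ax) / n, sum(ay) / n
    sxx = sum((a - mx) ** 2 for a in ax) / n
    syy = sum((b - my) ** 2 for b in ay) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(ax, ay)) / n
    # principal eigenvector angle of [[sxx, sxy], [sxy, syy]]
    return math.degrees(0.5 * math.atan2(2 * sxy, sxx - syy)) % 180.0
```

For acceleration oscillating along a 30-degree axis this recovers 30 degrees exactly, but it cannot distinguish 30 from 210, hence the additional sign tests in the papers.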
1411.2156 | 2951988149 | Ubiquity of Internet-connected and sensor-equipped portable devices sparked a new set of mobile computing applications that leverage the proliferating sensing capabilities of smart-phones. For many of these applications, accurate estimation of the user heading, as compared to the phone heading, is of paramount importance. This is of special importance for many crowd-sensing applications, where the phone can be carried in arbitrary positions and orientations relative to the user body. Current state-of-the-art focus mainly on estimating the phone orientation, require the phone to be placed in a particular position, require user intervention, and or do not work accurately indoors; which limits their ubiquitous usability in different applications. In this paper we present Humaine, a novel system to reliably and accurately estimate the user orientation relative to the Earth coordinate system. Humaine requires no prior-configuration nor user intervention and works accurately indoors and outdoors for arbitrary cell phone positions and orientations relative to the user body. The system applies statistical analysis techniques to the inertial sensors widely available on today's cell phones to estimate both the phone and user orientation. Implementation of the system on different Android devices with 170 experiments performed at different indoor and outdoor testbeds shows that Humaine significantly outperforms the state-of-the-art in diverse scenarios, achieving a median accuracy of @math averaged over a wide variety of phone positions. This is @math better than the-state-of-the-art. The accuracy is bounded by the error in the inertial sensors readings and can be enhanced with more accurate sensors and sensor fusion. 
| The uDirect system @cite_7 @cite_10 employs a similar technique to estimate the user direction; it identifies a point midway between the user's detected heel-strike and toe-off moments as the point where the device orientation is close to the device orientation in the standing mode. These systems, however, require a model for the acceleration pattern within a step for each phone position; a model for the phone placed in the pants pocket was presented in @cite_7 @cite_10 . Deriving the model for a new position is not straightforward, and the acceleration pattern for other positions may not be as clear as in the case of the presented pants pocket. Furthermore, the magnetic field noise, which affects the acceleration pattern used in heading estimation, degrades uDirect accuracy indoors as we quantify in . | {
"cite_N": [
"@cite_10",
"@cite_7"
],
"mid": [
"2114182580",
"2116274755"
],
"abstract": [
"A novel method for a mobile phone centric observation of a user’s facing direction is presented. To estimate this direction, our proposed technique exploits the acceleration pattern that can be measured by a smartphone as the user is walking. For an accurate analysis of the acceleration pattern, the proposed approach benefits from a new trigonometric interpolation scheme. Our algorithm is independent of the initial orientation of the device and is adaptable to various wearing positions on a user’s body, which gives the user a larger degree of freedom. A detailed description of the algorithm, which has been customized for a trouser pocket is presented. In addition, complementary hints for adaptation of the algorithm to other wearing positions along with an example of chest pocket position are provided. We have evaluated a prototype implementation of our algorithm on a smartphone, through several field experiments. It has been observed that our algorithm outperforms the conventional GPS and PCA-based techniques in terms of accuracy, reliability and energy consumption. The results also show that our approach has been able to handle the sudden variations of the user’s direction. We have further incorporated our algorithm into a dead-reckoning application as an example of its real-world utility.",
"In this paper we present the uDirect algorithm as a novel approach for mobile phone centric observation of a user's facing direction, through which the device and user orientations relative to earth coordinate are estimated. While the device orientation estimation is based on accelerometer and magnetometer measurements in standing mode, the unique behavior of measured acceleration during stance phase of a human's walking cycle is used for detecting user direction. Furthermore, the algorithm is independent of initial orientation of the device which gives the user higher space of freedom for long term observations. As the algorithm only relies on embedded accelerometer and magnetometer sensors of the mobile phone, it is not susceptible to shadowing effect as GPS. In addition, by performing independent estimations during each step of walking the model is robust to error accumulation. Evaluating the algorithm with 180 data samples from 10 participates has empirically confirmed the assumptions of our analytical model about the unique characteristics of the human stance phase for direction estimation. Moreover, our initial inspection has shown a system based on our algorithm outperforms conventional use of GPS and PCA analysis based techniques for walking distances more than 2 steps."
]
} |
1411.1646 | 2953305740 | Domain specific (dis-)similarity or proximity measures, used e.g. in alignment algorithms of sequence data, are popular to analyze complex data objects and to cover domain specific data properties. Without an underlying vector space these data are given as pairwise (dis-)similarities only. The few available methods for such data focus widely on similarities and do not scale to large data sets. Kernel methods are very effective for metric similarity matrices, also at large scale, but costly transformations are necessary starting with non-metric (dis-)similarities. We propose an integrative combination of Nystroem approximation, potential double centering and eigenvalue correction to obtain valid kernel matrices at linear costs in the number of samples. By the proposed approach, effective kernel approaches become accessible. Experiments with several larger (dis-)similarity data sets show that the proposed method achieves much better runtime performance than the standard strategy while keeping competitive model accuracy. The main contribution is an efficient and accurate technique to convert (potentially non-metric) large scale dissimilarity matrices into approximated positive semi-definite kernel matrices at linear costs. | Another strategy is to use the more general theory of learning with similarity functions proposed in @cite_55 , which can be used to identify descriptive or discriminative models based on an available similarity function under some conditions @cite_29 . A practical approach of the last type for classification problems was provided in @cite_6 . The model is defined on a fixed randomly chosen set of landmarks per class and a transfer function. Thereby the landmarks are a small set of columns (or rows) of a kernel matrix which are used to formulate the decision function. The weights of the decision function are then optimized by standard approaches.
The results are, however, in general substantially worse than those provided in @cite_40 , from which the datasets are taken. | {
"cite_N": [
"@cite_40",
"@cite_55",
"@cite_29",
"@cite_6"
],
"mid": [
"",
"1987091651",
"2952715980",
"2952366910"
],
"abstract": [
"",
"Kernel functions have become an extremely popular tool in machine learning, with an attractive theory as well. This theory views a kernel as implicitly mapping data points into a possibly very high dimensional space, and describes a kernel function as being good for a given learning problem if data is separable by a large margin in that implicit space. However, while quite elegant, this theory does not directly correspond to one's intuition of a good kernel as a good similarity function. Furthermore, it may be difficult for a domain expert to use the theory to help design an appropriate kernel for the learning task at hand since the implicit mapping may not be easy to calculate. Finally, the requirement of positive semi-definiteness may rule out the most natural pairwise similarity functions for the given problem domain.In this work we develop an alternative, more general theory of learning with similarity functions (i.e., sufficient conditions for a similarity function to allow one to learn well) that does not require reference to implicit spaces, and does not require the function to be positive semi-definite (or even symmetric). Our results also generalize the standard theory in the sense that any good kernel function under the usual definition can be shown to also be a good similarity function under our definition (though with some loss in the parameters). In this way, we provide the first steps towards a theory of kernels that describes the effectiveness of a given kernel function in terms of natural similarity-based properties.",
"We address the problem of general supervised learning when data can only be accessed through an (indefinite) similarity function between data points. Existing work on learning with indefinite kernels has concentrated solely on binary/multi-class classification problems. We propose a model that is generic enough to handle any supervised learning task and also subsumes the model previously proposed for classification. We give a \"goodness\" criterion for similarity functions w.r.t. a given supervised learning task and then adapt a well-known landmarking technique to provide efficient algorithms for supervised learning using \"good\" similarity functions. We demonstrate the effectiveness of our model on three important supervised learning problems: a) real-valued regression, b) ordinal regression and c) ranking, where we show that our method guarantees bounded generalization error. Furthermore, for the case of real-valued regression, we give a natural goodness definition that, when used in conjunction with a recent result in sparse vector recovery, guarantees a sparse predictor with bounded generalization error. Finally, we report results of our learning algorithms on regression and ordinal regression tasks using non-PSD similarity functions and demonstrate the effectiveness of our algorithms, especially that of the sparse landmark selection algorithm that achieves significantly higher accuracies than the baseline methods while offering reduced computational costs.",
"We consider the problem of classification using similarity distance functions over data. Specifically, we propose a framework for defining the goodness of a (dis)similarity function with respect to a given learning task and propose algorithms that have guaranteed generalization properties when working with such good functions. Our framework unifies and generalizes the frameworks proposed by [Balcan-Blum ICML 2006] and [ ICML 2007]. An attractive feature of our framework is its adaptability to data - we do not promote a fixed notion of goodness but rather let data dictate it. We show, by giving theoretical guarantees that the goodness criterion best suited to a problem can itself be learned which makes our approach applicable to a variety of domains and problems. We propose a landmarking-based approach to obtaining a classifier from such learned goodness criteria. We then provide a novel diversity based heuristic to perform task-driven selection of landmark points instead of random selection. We demonstrate the effectiveness of our goodness criteria learning method as well as the landmark selection heuristic on a variety of similarity-based learning datasets and benchmark UCI datasets on which our method consistently outperforms existing approaches by a significant margin."
]
} |
1411.1646 | 2953305740 | Domain specific (dis-)similarity or proximity measures, used e.g. in alignment algorithms of sequence data, are popular to analyze complex data objects and to cover domain specific data properties. Without an underlying vector space these data are given as pairwise (dis-)similarities only. The few available methods for such data focus widely on similarities and do not scale to large data sets. Kernel methods are very effective for metric similarity matrices, also at large scale, but costly transformations are necessary starting with non-metric (dis-)similarities. We propose an integrative combination of Nystroem approximation, potential double centering and eigenvalue correction to obtain valid kernel matrices at linear costs in the number of samples. By the proposed approach, effective kernel approaches become accessible. Experiments with several larger (dis-)similarity data sets show that the proposed method achieves much better runtime performance than the standard strategy while keeping competitive model accuracy. The main contribution is an efficient and accurate technique to convert (potentially non-metric) large scale dissimilarity matrices into approximated positive semi-definite kernel matrices at linear costs. | Especially for metric dissimilarities, the approach keeps the known guarantees, like generalization bounds (see e.g. @cite_25 ). For non-psd data we give a convergence proof, but the corresponding bounds are still open, yet our experiments are promising. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2160840682"
],
"abstract": [
"A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n^3), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n × n Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form G̃_k = C W_k^+ C^T, where C is a matrix consisting of a small number c of columns of G and W_k is the best rank-k approximation to W, the matrix formed by the intersection between those c columns of G and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let ||·||_2 and ||·||_F denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let G_k be the best rank-k approximation to G. We prove that by choosing O(k/ε^4) columns, ||G − C W_k^+ C^T||_ξ ≤ ||G − G_k||_ξ + ε Σ_{i=1}^n G_{ii}^2, both in expectation and with high probability, for both ξ = 2, F, and for all k: 0 ≤ k ≤ rank(W). This approximation can be computed using O(n) additional space and time, after making two passes over the data from external storage. The relationships between this algorithm, other related matrix decompositions, and the Nystrom method from integral equation theory are discussed."
]
} |
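The construction quoted above, G ≈ C W^+ C^T from a few sampled kernel columns, is easy to demonstrate. The sketch below is our own illustration with an RBF kernel of our choosing and a plain inverse of W instead of the rank-k pseudo-inverse; it is exact on landmark rows and columns and, for PSD kernels, never overshoots the diagonal.

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian kernel on scalars (illustrative choice)."""
    return math.exp(-gamma * (x - y) ** 2)

def invert(M):
    """Gauss-Jordan inverse of a small nonsingular matrix."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def nystroem(points, landmark_idx, kernel=rbf):
    """Nystroem approximation G_hat = C W^{-1} C^T built from the kernel
    columns at the chosen landmarks; only len(points) x c kernel entries
    are ever evaluated, which is the source of the linear cost."""
    c = len(landmark_idx)
    C = [[kernel(x, points[j]) for j in landmark_idx] for x in points]
    W = [[kernel(points[i], points[j]) for j in landmark_idx]
         for i in landmark_idx]
    Winv = invert(W)
    CW = [[sum(C[i][a] * Winv[a][b] for a in range(c)) for b in range(c)]
          for i in range(len(points))]
    return [[sum(CW[i][a] * C[j][a] for a in range(c))
             for j in range(len(points))] for i in range(len(points))]
```

In practice the landmarks are sampled (uniformly or, as in the quoted abstract, from a data-dependent distribution) and W is regularized or pseudo-inverted for stability.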
1411.1455 | 1793812012 | In recent years, there has been much research in the Ranked Retrieval model in structured databases, especially those in web databases. With this model, a search query returns top-k tuples according to not just exact matches of selection conditions, but a suitable ranking function. This paper studies a novel problem on the privacy implications of database ranking. The motivation is a novel yet serious privacy leakage we found on real-world web databases which is caused by the ranking function design. Many such databases feature private attributes - e.g., a social network allows users to specify certain attributes as only visible to him/herself, but not to others. While these websites generally respect the privacy settings by not directly displaying private attribute values in search query answers, many of them nevertheless take into account such private attributes in the ranking function design. The conventional belief might be that tuple ranks alone are not enough to reveal the private attribute values. Our investigation, however, shows that this is not the case in reality. To address the problem, we introduce a taxonomy of the problem space with two dimensions, (1) the type of query interface and (2) the capability of adversaries. For each subspace, we develop a novel technique which either guarantees the successful inference of private attributes, or does so for a significant portion of real-world tuples. We demonstrate the effectiveness and efficiency of our techniques through theoretical analysis, extensive experiments over real-world datasets, as well as successful online attacks over websites with tens to hundreds of millions of users - e.g., Amazon, Goodreads and Renren.com. | Database Ranking: The area of ranking has been extensively studied in the context of deterministic @cite_20 @cite_14 , probabilistic @cite_27 and incomplete @cite_10 data.
Processing top- @math queries when the ranking score is a combination of scores of individual attributes was studied in @cite_31 @cite_12 . A popular ranking function is nearest neighbor @cite_6 , where the tuples are ordered based on the distance between tuple @math and the given query @math . Other categorizations such as monotone, generic or no ranking (such as Skyline queries) have also been studied @cite_20 . Recently, there have been studies on learning the rank of a tuple @cite_11 or the ranking function design @cite_3 @cite_32 through a top- @math static ranking interface. | {
"cite_N": [
"@cite_14",
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_27",
"@cite_31",
"@cite_10",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"",
"2110822809",
"",
"2041398293",
"2666600683",
"2165211504",
"",
"2009688537",
"1964232526"
],
"abstract": [
"",
"",
"Many ranking models have been proposed in information retrieval, and recently machine learning techniques have also been applied to ranking model construction. Most of the existing methods do not take into consideration the fact that significant differences exist between queries, and only resort to a single function in ranking of documents. In this paper, we argue that it is necessary to employ different ranking models for different queries and onduct what we call query-dependent ranking. As the first such attempt, we propose a K-Nearest Neighbor (KNN) method for query-dependent ranking. We first consider an online method which creates a ranking model for a given query by using the labeled neighbors of the query in the query feature space and then rank the documents with respect to the query using the created model. Next, we give two offline approximations of the method, which create the ranking models in advance to enhance the efficiency of ranking. And we prove a theory which indicates that the approximations are accurate in terms of difference in loss of prediction, if the learning algorithm used is stable with respect to minor changes in training examples. Our experimental results show that the proposed online and offline methods both outperform the baseline method of using a single ranking function.",
"",
"The rapid growth of transactional data brought, soon enough, into attention the need of its further exploitation. In this paper, we investigate the problem of securing sensitive knowledge from being exposed in patterns extracted during association rule mining. Instead of hiding the produced rules directly, we decide to hide the sensitive frequent itemsets that may lead to the production of these rules. As a first step, we introduce the notion of distance between two databases and a measure for quantifying it. By trying to minimize the distance between the original database and its sanitized version (that can safely be released), we propose a novel, exact algorithm for association rule hiding and evaluate it on real world datasets demonstrating its effectiveness towards solving the problem.",
"Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). Each object is assigned an overall grade, that is obtained by combining the attribute grades using a fixed monotone aggregation function, or combining rule, such as min or average. To determine the top k objects, that is, k objects with the highest overall grades, the naive algorithm must access every object in the database, to find its grade under each attribute. Fagin has given an algorithm (\"Fagin's Algorithm\", or FA) that is much more efficient. For some monotone aggregation functions, FA is optimal with high probability in the worst case. We analyze an elegant and remarkably simple algorithm (\"the threshold algorithm\", or TA) that is optimal in a much stronger sense than FA. We show that TA is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability worst-case sense, but over every database. Unlike FA, which requires large buffers (whose size may grow unboundedly as the database size grows), TA requires only a small, constant-size buffer. TA allows early stopping, which yields, in a precise sense, an approximate version of the top k answers. We distinguish two types of access: sorted access (where the middleware system obtains the grade of an object in some sorted list by proceeding through the list sequentially from the top), and random access (where the middleware system requests the grade of object in a list, and obtains it in one step). We consider the scenarios where random access is either impossible, or expensive relative to sorted access, and provide algorithms that are essentially optimal for these cases as well.",
"We discuss, compare and relate some old and some new models for incomplete and probabilistic databases. We characterize the expressive power of c-tables over infinite domains and we introduce a new kind of result, algebraic completion, for studying less expressive models. By viewing probabilistic models as incompleteness models with additional probability information, we define completeness and closure under query languages of general probabilistic database models and we introduce a new such model, probabilistic c-tables, that is shown to be complete and closed under the relational algebra.",
"",
"Efficient processing of top-k queries is a crucial requirement in many interactive environments that involve massive amounts of data. In particular, efficient top-k processing in domains such as the Web, multimedia search, and distributed systems has shown a great impact on performance. In this survey, we describe and classify top-k processing techniques in relational databases. We discuss different design dimensions in the current techniques including query models, data access methods, implementation levels, data and query certainty, and supported scoring functions. We show the implications of each dimension on the design of the underlying techniques. We also discuss top-k queries in XML domain, and show their connections to relational approaches.",
"Many web databases are only accessible through a proprietary search interface which allows users to form a query by entering the desired values for a few attributes. After receiving a query, the system returns the top-k matching tuples according to a pre-determined ranking function. Since the rank of a tuple largely determines the attention it receives from website users, ranking information for any tuple - not just the top-ranked ones - is often of significant interest to third parties such as sellers, customers, market researchers and investors. In this paper, we define a novel problem of rank discovery over hidden web databases. We introduce a taxonomy of ranking functions, and show that different types of ranking functions require fundamentally different approaches for rank discovery. Our technical contributions include principled and efficient randomized algorithms for estimating the rank of a given tuple, as well as negative results which demonstrate the inefficiency of any deterministic algorithm. We show extensive experimental results over real-world databases, including an online experiment at Amazon.com, which illustrates the effectiveness of our proposed techniques."
]
} |
1411.1455 | 1793812012 | In recent years, there has been much research in Ranked Retrieval model in structured databases, especially those in web databases. With this model, a search query returns top-k tuples according to not just exact matches of selection conditions, but a suitable ranking function. This paper studies a novel problem on the privacy implications of database ranking. The motivation is a novel yet serious privacy leakage we found on real-world web databases which is caused by the ranking function design. Many such databases feature private attributes - e.g., a social network allows users to specify certain attributes as only visible to him/herself, but not to others. While these websites generally respect the privacy settings by not directly displaying private attribute values in search query answers, many of them nevertheless take into account such private attributes in the ranking function design. The conventional belief might be that tuple ranks alone are not enough to reveal the private attribute values. Our investigation, however, shows that this is not the case in reality. To address the problem, we introduce a taxonomy of the problem space with two dimensions, (1) the type of query interface and (2) the capability of adversaries. For each subspace, we develop a novel technique which either guarantees the successful inference of private attributes, or does so for a significant portion of real-world tuples. We demonstrate the effectiveness and efficiency of our techniques through theoretical analysis, extensive experiments over real-world datasets, as well as successful online attacks over websites with tens to hundreds of millions of users - e.g., Amazon, Goodreads and Renren.com. | Inference Control: Prior work on privacy inference @cite_9 studied the problem of inferring individual tuple values @cite_33 @cite_16 and the existence of a tuple in a database @cite_17 from aggregates such as SUM, MIN, MAX, etc.
The field of inference control @cite_9 @cite_19 @cite_18 seeks to prevent such attacks through query auditing, by controlling the number of tuples that match a query, or by modifying query responses using perturbation, distortion, etc. @cite_30 . Researchers have also proposed multiple privacy-preserving aggregate query processing techniques @cite_5 @cite_37 . Recently, @cite_1 has shown that it is possible to infer the location of a user in a Location-Based Social Network (LBSN) (which could be considered a private attribute) if the ranking function returns the distance between the query and the victim tuple. However, we do not assume the availability of such information, as most websites do not display the score of a tuple for a query. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_37",
"@cite_33",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_17"
],
"mid": [
"2011095877",
"22424597",
"2517104773",
"",
"2058970046",
"2088318379",
"2113427031",
"1996842421",
"",
"2139864694"
],
"abstract": [
"A statistical database (SDB) may be defined as an ordinary database with the capability of providing statistical information to user queries. The security problem for the SDB is to limit the use of the SDB so(that only statistical information is available and no sequence of queries is sufficient to infer protected information about any individual. When such information is obtained, the SDB is said to be compromised.",
"Inference control in databases, also known as Statistical Disclosure Control (SDC), is about protecting data so they can be published without revealing confidential information that can be linked to specific individuals among those to which the data correspond. This is an important application in several areas, such as official statistics, health statistics, e-commerce (sharing of consumer data), etc. Since data protection ultimately means data modification, the challenge for SDC is to achieve protection with minimum loss of the accuracy sought by database users. In this chapter, we survey the current state of the art in SDC methods for protecting individual data (microdata). We discuss several information loss and disclosure risk measures and analyze several ways of combining them to assess the performance of the various methods. Last but not least, topics which need more research in the area are identified and possible directions hinted.",
"We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.",
"",
"Access control models protect sensitive data from unauthorized disclosure via direct accesses, however, they fail to prevent indirect accesses. Indirect data disclosure via inference channels occurs when sensitive information can be inferred from non-sensitive data and metadata. Inference channels are often low-bandwidth and complex; nevertheless, detection and removal of inference channels is necessary to guarantee data security. This paper presents a survey of the current and emerging research in data inference control and emphasizes the importance of targeting this so often overlooked problem during database security design.",
"Location-based social networks (LBSNs) feature friend discovery by location proximity that has attracted hundreds of millions of users world-wide. While leading LBSN providers claim the well-protection of their users' location privacy, for the first time we show through real world attacks that these claims do not hold. In our identified attacks, a malicious individual with the capability of no more than a regular LBSN user can easily break most LBSNs by manipulating location information fed to LBSN client apps and running them as location oracles. We further develop an automated user location tracking system and test it on leading LBSNs including Wechat, Skout, and Momo. We demonstrate its effectiveness and efficiency via a 3 week real-world experiment on 30 volunteers and show that we could geo-locate any target with high accuracy and readily recover his her top 5 locations. Finally, we also develop a framework that explores a grid reference system and location classifications to mitigate the attacks. Our result serves as a critical security reminder of the current LBSNs pertaining to a vast number of users.",
"This paper considers the problem of providing security to statistical databases against disclosure of confidential information. Security-control methods suggested in the literature are classified into four general approaches: conceptual, query restriction, data perturbation, and output perturbation. Criteria for evaluating the performance of the various security-control methods are identified. Security-control methods that are based on each of the four approaches are discussed, together with their performance with respect to the identified evaluation criteria. A detailed comparative analysis of the most promising methods for protecting dynamic-online statistical databases is also presented. To date no single security-control method prevents both exact and partial disclosures. There are, however, a few perturbation-based methods that prevent exact disclosure and enable the database administrator to exercise \"statistical disclosure control.\" Some of these methods, however introduce bias into query responses or suffer from the 0 1 query-set-size problem (i.e., partial disclosure is possible in case of null query set or a query set of size 1). We recommend directing future research efforts toward developing new methods that prevent exact disclosure and provide statistical-disclosure control, while at the same time do not suffer from the bias problem and the 0 1 query-set-size problem. Furthermore, efforts directed toward developing a bias-correction mechanism and solving the general problem of small query-set-size would help salvage a few of the current perturbation-based methods.",
"This paper focuses on privacy risks in health databases that arise in assistive environments, where humans interact with the environment and this information is captured, assimilated and events of interest are extracted. The stakeholders of such an environment can range from caregivers to doctors and supporting family. The environment also includes objects the person interacts with, such as, wireless devices that generate data about these interactions. The data streams generated by such an environment are massive. Such databases are usually considered hidden, i.e., are only accessible online via restrictive front-end web interfaces. Security issues specific to such hidden databases, however, have been largely overlooked by the research community, possibly due to the false sense of security provided by the restrictive access to such databases. We argue that an urgent challenge facing such databases is the disclosure of sensitive aggregates enabled by recent studies on the sampling of hidden databases through its public web interface. To protect sensitive aggregates, we enunciate the key design principles, propose a three-component design, and suggest a number of possible techniques that may protect sensitive aggregates while maintaining the service quality for normal search users. Our hope is that this paper sheds lights on a fruitful direction of future research in security issues related to hidden web databases.",
"",
"Advances in information technology, and its use in research, are increasing both the need for anonymized data and the risks of poor anonymization. We present a metric, δ-presence, that clearly links the quality of anonymization to the risk posed by inadequate anonymization. We show that existing anonymization techniques are inappropriate for situations where δ-presence is a good metric (specifically, where knowing an individual is in the database poses a privacy risk), and present algorithms for effectively anonymizing to meet δ-presence. The algorithms are evaluated in the context of a real-world scenario, demonstrating practical applicability of the approach."
]
} |
1411.1490 | 1882226547 | It has been a long-standing goal in machine learning, as well as in AI more generally, to develop lifelong learning systems that learn many different tasks over time, and reuse insights from tasks learned, “learning to learn” as they do so. In this work we pose and provide efficient algorithms for several natural theoretical formulations of this goal. Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm. Our aim is to learn new internal representations as the algorithm learns new target functions, that capture this commonality and allow subsequent learning tasks to be solved more efficiently and from less data. We develop efficient algorithms for two very different kinds of commonalities that target functions might share: one based on learning common low-dimensional and unions of low-dimensional subspaces and one based on learning nonlinear Boolean combinations of features. Our algorithms for learning Boolean feature combinations additionally have a dual interpretation, and can be viewed as giving an efficient procedure for constructing near-optimal sparse Boolean autoencoders under a natural “anchor-set” assumption. | Most related work in multi-task or transfer learning considers the case that all target functions are present simultaneously or that target functions are drawn from some easily learnable distribution. Baxter @cite_23 @cite_11 developed some of the earliest foundations for transfer learning, by providing sample complexity results for achieving low average error in such settings. Other related sample complexity results appear in @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_23",
"@cite_11"
],
"mid": [
"2036043322",
"2143419558",
"2162888803"
],
"abstract": [
"The approach of learning of multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underline these task.",
"A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or equivalently a low-dimensional-representation (LDR) for an environment of related tasks.",
"A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task."
]
} |
1411.1490 | 1882226547 | It has been a long-standing goal in machine learning, as well as in AI more generally, to develop lifelong learning systems that learn many different tasks over time, and reuse insights from tasks learned, “learning to learn” as they do so. In this work we pose and provide efficient algorithms for several natural theoretical formulations of this goal. Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm. Our aim is to learn new internal representations as the algorithm learns new target functions, that capture this commonality and allow subsequent learning tasks to be solved more efficiently and from less data. We develop efficient algorithms for two very different kinds of commonalities that target functions might share: one based on learning common low-dimensional and unions of low-dimensional subspaces and one based on learning nonlinear Boolean combinations of features. Our algorithms for learning Boolean feature combinations additionally have a dual interpretation, and can be viewed as giving an efficient procedure for constructing near-optimal sparse Boolean autoencoders under a natural “anchor-set” assumption. | Recent work of @cite_24 @cite_2 considers the problem of learning multiple linear separators that share a common low-dimensional subspace in the batch setting where all tasks are given up front. They specifically provide guarantees for a natural ERM algorithm with trace norm regularization. There has also been work on applying the Group Lasso method to batch multi-task learning which solves a specific multi-task optimization problem @cite_18 . By contrast with these results, our setting is more demanding since we aim to achieve small error on all tasks and to do so online without keeping all training data from past learning tasks in memory. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_2"
],
"mid": [
"2963917643",
"2739284222",
"1942758450"
],
"abstract": [
"Trace norm regularization is a popular method of multitask learning. We give excess risk bounds with explicit dependence on the number of tasks, the number of examples per task and properties of the data distribution. The bounds are independent of the dimension of the input space, which may be innite as in the case of reproducing kernel Hilbert spaces. A byproduct of the proof are bounds on the expected norm of sums of random positive semidenite matrices with subexponential moments.",
"The Group-Lasso is a well-known tool for joint regularization in machine learning methods. While the l1,2 and the l1,∞ version have been studied in detail and efficient algorithms exist, there are still open questions regarding other l1,p variants. We characterize conditions for solutions of the l1,p Group-Lasso for all p-norms with 1 ≤ p ≤ ∞, and we present a unified active set algorithm. For all p-norms, a highly efficient projected gradient algorithm is presented. This new algorithm enables us to compare the prediction performance of many variants of the Group-Lasso in a multi-task learning setting, where the aim is to solve many learning problems in parallel which are coupled via the Group-Lasso constraint. We conduct large-scale experiments on synthetic data and on two real-world data sets. In accordance with theoretical characterizations of the different norms we observe that the weak-coupling norms with p between 1.5 and 2 consistently outperform the strong-coupling norms with p ≫ 2.",
"In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods."
]
} |
1411.1490 | 1882226547 | It has been a long-standing goal in machine learning, as well as in AI more generally, to develop lifelong learning systems that learn many different tasks over time, and reuse insights from tasks learned, “learning to learn” as they do so. In this work we pose and provide efficient algorithms for several natural theoretical formulations of this goal. Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm. Our aim is to learn new internal representations as the algorithm learns new target functions, that capture this commonality and allow subsequent learning tasks to be solved more efficiently and from less data. We develop efficient algorithms for two very different kinds of commonalities that target functions might share: one based on learning common low-dimensional and unions of low-dimensional subspaces and one based on learning nonlinear Boolean combinations of features. Our algorithms for learning Boolean feature combinations additionally have a dual interpretation, and can be viewed as giving an efficient procedure for constructing near-optimal sparse Boolean autoencoders under a natural “anchor-set” assumption. | @cite_10 considers multi-task learning where explicit known relationships among tasks are exploited for faster learning. In their setting each learning problem is an online problem but the collection of learning problems are all occurring simultaneously. Discussion in @cite_7 hints toward the type of the algorithms we analyze in , but without formal analysis about how the error accumulation could harm the sample complexity (which, as we will see, is one of the central challenges in this setting). | {
"cite_N": [
"@cite_10",
"@cite_7"
],
"mid": [
"2116772053",
"2122308921"
],
"abstract": [
"We introduce new Perceptron-based algorithms for the online multitask binary classification problem. Under suitable regularity conditions, our algorithms are shown to improve on their baselines by a factor proportional to the number of tasks. We achieve these improvements using various types of regularization that bias our algorithms towards specific notions of task relatedness. More specifically, similarity among tasks is either measured in terms of the geometric closeness of the task reference vectors or as a function of the dimension of their spanned subspace. In addition to adapting to the online setting a mix of known techniques, such as the multitask kernels of , our analysis also introduces a matrix-based multitask extension of the p-norm Perceptron, which is used to implement spectral co-regularization. Experiments on real-world data sets complement and support our theoretical findings.",
"An architecture is described for designing systems that acquire and ma nipulate large amounts of unsystematized, or so-called commonsense, knowledge. Its aim is to exploit to the full those aspects of computational learning that are known to offer powerful solutions in the acquisition and maintenance of robust knowledge bases. The architecture makes explicit the requirements on the basic computational tasks that are to be performed and is designed to make this computationally tractable even for very large databases. The main claims are that (i) the basic learning and deduction tasks are provably tractable and (ii) tractable learning offers viable approaches to a range of issues that have been previously identified as problematic for artificial intelligence systems that are programmed. Among the issues that learning offers to resolve are robustness to inconsistencies, robustness to incomplete information and resolving among alternatives. Attribute-efficient learning algorithms, which allow learning from few examples in large dimensional systems, are fundamental to the approach. Underpinning the overall architecture is a new principled approach to manipulating relations in learning systems. This approach, of independently quantified arguments, allows propositional learning algorithms to be applied systematically to learning relational concepts in polynomial time and in modular fashion."
]
} |
1411.1490 | 1882226547 | It has been a long-standing goal in machine learning, as well as in AI more generally, to develop lifelong learning systems that learn many different tasks over time, and reuse insights from tasks learned, “learning to learn” as they do so. In this work we pose and provide efficient algorithms for several natural theoretical formulations of this goal. Specifically, we consider the problem of learning many different target functions over time, that share certain commonalities that are initially unknown to the learning algorithm. Our aim is to learn new internal representations as the algorithm learns new target functions, that capture this commonality and allow subsequent learning tasks to be solved more efficiently and from less data. We develop efficient algorithms for two very different kinds of commonalities that target functions might share: one based on learning common low-dimensional and unions of low-dimensional subspaces and one based on learning nonlinear Boolean combinations of features. Our algorithms for learning Boolean feature combinations additionally have a dual interpretation, and can be viewed as giving an efficient procedure for constructing near-optimal sparse Boolean autoencoders under a natural “anchor-set” assumption. | The problem of trying to learn invariants or other commonalities when faced with a series of learning tasks arriving over time has a long history in applied machine learning (e.g., @cite_15 @cite_27 ). Our work is the first to give provable efficiency guarantees for learning multi-layer representations in this life-long learning setting. | {
"cite_N": [
"@cite_27",
"@cite_15"
],
"mid": [
"1991564165",
"1513681384"
],
"abstract": [
"Learning provides a useful tool for the automatic design of autonomous robots. Recent research on learning robot control has predominantly focussed on learning single tasks that were studied in isolation. If robots encounter a multitude of control learning tasks over their entire lifetime there is an opportunity to transfer knowledge between them. In order to do so, robots may learn the invariants and the regularities of their individual tasks and environments. This task-independent knowledge can be employed to bias generalization when learning control, which reduces the need for real-world experimentation. We argue that knowledge transfer is essential if robots are to learn control with moderate learning times in complex scenarios. Two approaches to lifelong robot learning which both capture invariant knowledge about the robot and its environments are presented. Both approaches have been evaluated using a HERO-2000 mobile robot. Learning tasks included navigation in unknown indoor environments and a simple find-and-fetch task.",
"From the Publisher: Lifelong learning addresses situations in which a learner faces a series of different learning tasks providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess."
]
} |
1411.1091 | 2950124505 | Convolutional neural nets (convnets) trained from massive labeled datasets have substantially improved the state-of-the-art in image classification and object detection. However, visual understanding requires establishing correspondence on a finer level than object category. Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass alignment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011. | Image alignment is a key step in many computer vision tasks, including face verification, motion analysis, stereo matching, and object recognition. Alignment results in correspondence across different images by removing intraclass variability and canonicalizing pose. Alignment methods exist on a supervision spectrum from requiring manually labeled fiducial points or landmarks, to requiring class labels, to fully unsupervised joint alignment and clustering models. Congealing @cite_33 is an unsupervised joint alignment method based on an entropy objective. Deep congealing @cite_32 builds on this idea by replacing hand-engineered features with unsupervised feature learning from multiple resolutions. Inspired by optical flow, SIFT flow @cite_2 matches densely sampled SIFT features for correspondence and has been applied to motion prediction and motion transfer. In Section , we apply SIFT flow using deep features for aligning different instances of the same class. | {
"cite_N": [
"@cite_32",
"@cite_33",
"@cite_2"
],
"mid": [
"2157558673",
"2158844819",
""
],
"abstract": [
"Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification. Such alignment reduces undesired variability due to factors such as pose, while only requiring weak supervision in the form of poorly aligned examples. However, prior work on unsupervised alignment of complex, real-world images has required the careful selection of feature representation based on hand-crafted image descriptors, in order to achieve an appropriate, smooth optimization landscape. In this paper, we instead propose a novel combination of unsupervised joint alignment with unsupervised feature learning. Specifically, we incorporate deep learning into the congealing alignment framework. Through deep learning, we obtain features that can represent the image at differing resolutions based on network depth, and that are tuned to the statistics of the specific data being aligned. In addition, we modify the learning algorithm for the restricted Boltzmann machine by incorporating a group sparsity penalty, leading to a topographic organization of the learned filters and improving subsequent alignment results. We apply our method to the Labeled Faces in the Wild database (LFW). Using the aligned images produced by our proposed unsupervised algorithm, we achieve higher accuracy in face verification compared to prior work in both unsupervised and supervised alignment. We also match the accuracy for the best available commercial method.",
"Many recognition algorithms depend on careful positioning of an object into a canonical pose, so the position of features relative to a fixed coordinate system can be examined. Currently, this positioning is done either manually or by training a class-specialized learning algorithm with samples of the class that have been hand-labeled with parts or poses. In this paper, we describe a novel method to achieve this positioning using poorly aligned examples of a class with no additional labeling. Given a set of unaligned examplars of a class, such as faces, we automatically build an alignment mechanism, without any additional labeling of parts or poses in the data set. Using this alignment mechanism, new members of the class, such as faces resulting from a face detector, can be precisely aligned for the recognition process. Our alignment method improves performance on a face recognition task, both over unaligned images and over images aligned with a face alignment algorithm specifically developed for and trained on hand-labeled face images. We also demonstrate its use on an entirely different class of objects (cars), again without providing any information about parts or pose to the learning algorithm.",
""
]
} |
1411.0921 | 2949908031 | Static mapping is the assignment of parallel processes to the processing elements (PEs) of a parallel system, where the assignment does not change during the application's lifetime. In our scenario we model an application's computations and their dependencies by an application graph. This graph is first partitioned into (nearly) equally sized blocks. These blocks need to communicate at block boundaries. To assign the processes to PEs, our goal is to compute a communication-efficient bijective mapping between the blocks and the PEs. This approach of partitioning followed by bijective mapping has many degrees of freedom. Thus, users and developers of parallel applications need to know more about which choices work for which application graphs and which parallel architectures. To this end, we not only develop new mapping algorithms (derived from known greedy methods). We also perform extensive experiments involving different classes of application graphs (meshes and complex networks), architectures of parallel computers (grids and tori), as well as different partitioners and mapping algorithms. Surprisingly, the quality of the partitions, unless very poor, has little influence on the quality of the mapping. More importantly, one of our new mapping algorithms always yields the best results in terms of the quality measure maximum congestion when the application graphs are complex networks. In case of meshes as application graphs, this mapping algorithm always leads in terms of maximum congestion AND maximum dilation, another common quality measure. | One can apply a wide range of optimization techniques to the topology mapping problem. Hoefler and Snir @cite_24 employ (among others) the Reverse Cuthill-McKee (RCM) algorithm, originally devised for minimizing the bandwidth of a sparse matrix @cite_13 . If both @math and @math are sparse, the simultaneous optimization of both graph layouts can lead to good mapping results @cite_0 . | {
"cite_N": [
"@cite_24",
"@cite_0",
"@cite_13"
],
"mid": [
"1992432622",
"",
"2095420020"
],
"abstract": [
"The steadily increasing number of nodes in high-performance computing systems and the technology and power constraints lead to sparse network topologies. Efficient mapping of application communication patterns to the network topology gains importance as systems grow to petascale and beyond. Such mapping is supported in parallel programming frameworks such as MPI, but is often not well implemented. We show that the topology mapping problem is NP-complete and analyze and compare different practical topology mapping heuristics. We demonstrate an efficient and fast new heuristic which is based on graph similarity and show its utility with application communication patterns on real topologies. Our mapping strategies support heterogeneous networks and show significant reduction of congestion on torus, fat-tree, and the PERCS network topologies, for irregular communication patterns. We also demonstrate that the benefit of topology mapping grows with the network size and show how our algorithms can be used in a practical setting to optimize communication performance. Our efficient topology mapping strategies are shown to reduce network congestion by up to 80 , reduce average dilation by up to 50 , and improve benchmarked communication performance by 18 .",
"",
"The finite element displacement method of analyzing structures involves the solution of large systems of linear algebraic equations with sparse, structured, symmetric coefficient matrices. There is a direct correspondence between the structure of the coefficient matrix, called the stiffness matrix in this case, and the structure of the spatial network delineating the element layout. For the efficient solution of these systems of equations, it is desirable to have an automatic nodal numbering (or renumbering) scheme to ensure that the corresponding coefficient matrix will have a narrow bandwidth. This is the problem considered by R. Rosen 1 . A direct method of obtaining such a numbering scheme is presented. In addition several methods are reviewed and compared."
]
} |
1411.0921 | 2949908031 | Static mapping is the assignment of parallel processes to the processing elements (PEs) of a parallel system, where the assignment does not change during the application's lifetime. In our scenario we model an application's computations and their dependencies by an application graph. This graph is first partitioned into (nearly) equally sized blocks. These blocks need to communicate at block boundaries. To assign the processes to PEs, our goal is to compute a communication-efficient bijective mapping between the blocks and the PEs. This approach of partitioning followed by bijective mapping has many degrees of freedom. Thus, users and developers of parallel applications need to know more about which choices work for which application graphs and which parallel architectures. To this end, we not only develop new mapping algorithms (derived from known greedy methods). We also perform extensive experiments involving different classes of application graphs (meshes and complex networks), architectures of parallel computers (grids and tori), as well as different partitioners and mapping algorithms. Surprisingly, the quality of the partitions, unless very poor, has little influence on the quality of the mapping. More importantly, one of our new mapping algorithms always yields the best results in terms of the quality measure maximum congestion when the application graphs are complex networks. In case of meshes as application graphs, this mapping algorithm always leads in terms of maximum congestion AND maximum dilation, another common quality measure. | Many metaheuristics have been used to solve the mapping problem. Uçar et al. @cite_9 implement a large variety of methods within a clustering approach, among them genetic algorithms, simulated annealing, tabu search, and particle swarm optimization. The authors require, however, that the processor graph is homogeneous, i.e., @math depends only on whether @math or not.
Our approach is more general than theirs in that we allow @math to take different values for @math (see Equation ). | {
"cite_N": [
"@cite_9"
],
"mid": [
"2151089374"
],
"abstract": [
"The problem of task assignment in heterogeneous computing systems has been studied for many years with many variations. We consider the version in which communicating tasks are to be assigned to heterogeneous processors with identical communication links to minimize the sum of the total execution and communication costs. Our contributions are three fold: a task clustering method which takes the execution times of the tasks into account; two metrics to determine the order in which tasks are assigned to the processors; a refinement heuristic which improves a given assignment. We use these three methods to obtain a family of task assignment algorithms including multilevel ones that apply clustering and refinement heuristics repeatedly. We have implemented eight existing algorithms to test the proposed methods. Our refinement algorithm improves the solutions of the existing algorithms by up to 15 and the proposed algorithms obtain better solutions than these refined solutions."
]
} |
1411.1147 | 2952444811 | We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observable data using a feature-rich conditional random field. Then a reconstruction of the input is (re)generated, conditional on the latent structure, using models for which maximum likelihood estimation has a closed-form. Our autoencoder formulation enables efficient learning without making unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate insightful connections to traditional autoencoders, posterior regularization and multi-view learning. We show competitive results with instantiations of the model for two canonical NLP tasks: part-of-speech induction and bitext word alignment, and show that training our model can be substantially more efficient than comparable feature-rich baselines. | Our framework borrows its general structure, Fig. (left), as well as its name, from autoencoders. The goal of neural autoencoders has been to learn feature representations that improve generalization in otherwise supervised learning problems @cite_29 @cite_26 @cite_2 . In contrast, the goal of CRF autoencoders is to learn specific regularities of interest. This is possible in CRF autoencoders due to the interdependencies among variables in the hidden structure and the manually specified feature templates which capture the relationship between observations and their hidden structures. It is not clear how neural autoencoders could be used to learn the latent structures that CRF autoencoders learn, without providing supervised training examples. presented a related approach for discriminative graphical model learning, including features and latent variables, based on backpropagation, which could be used to instantiate the CRF autoencoder. | {
"cite_N": [
"@cite_29",
"@cite_26",
"@cite_2"
],
"mid": [
"2025768430",
"2117130368",
""
],
"abstract": [
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
""
]
} |
1411.0052 | 163356031 | Network visualization allows a quick glance at how nodes (or actors) are connected by edges (or ties). A conventional network diagram of "contact tree" maps out a root and branches that represent the structure of nodes and edges, often without further specifying leaves or fruits that would have grown from small branches. By furnishing such a network structure with leaves and fruits, we reveal details about "contacts" in our ContactTrees that underline ties and relationships. Our elegant design employs a bottom-up approach that resembles a recent attempt to understand subjective well-being by means of a series of emotions. Such a bottom-up approach to social-network studies decomposes each tie into a series of interactions or contacts, which help deepen our understanding of the complexity embedded in a network structure. Unlike previous network visualizations, ContactTrees can highlight how relationships form and change based upon interactions among actors, and how relationships and networks vary by contact attributes. Based on a botanical tree metaphor, the design is easy to construct and the resulting tree-like visualization can display many properties at both tie and contact levels, a key ingredient missing from conventional techniques of network visualization. We first demonstrate ContactTrees using a dataset consisting of three waves of 3-month contact diaries over the 2004-2012 period, then compare ContactTrees with alternative tools and discuss how this tool can be applied to other types of datasets. | Most approaches to visualizing relationships are based on graphs, where nodes represent persons and edges represent the relations among them @cite_31 . Such approaches are closely related to the domain of graph drawing, which focuses on algorithms that help embed graphs in readable ways (see @cite_25 @cite_0 for an introduction). 
Although the sizes of most social networks generate highly cluttered drawings, researchers of information visualization have developed many techniques to simplify representations (see @cite_16 @cite_2 for an overview). Some of the most powerful techniques involve clustering and navigation, such as @cite_37 , @cite_12 , @cite_50 , @cite_34 , @cite_14 , and edge bundling @cite_4 , or hybrid drawing methods like , where some communities of the network are displayed as matrices @cite_32 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_32",
"@cite_34",
"@cite_0",
"@cite_50",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"2111781385",
"1748521661",
"2117088188",
"2111347866",
"2122814890",
"",
"2114989451",
"2158453355",
"348384746",
"2147468287",
"2102664288",
""
],
"abstract": [
"We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depends on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. TopoLayout frequently improves the results in terms of speed and visual quality on these data sets",
"In this paper, we present a new approach to exploring dynamic graphs. We have developed a new clustering algorithm for dynamic graphs which finds an ideal clustering for each time-step and links the clusters together. The resulting time-varying clusters are then used to define two visual representations. The first view is an overview that shows how clusters evolve over time and provides an interface to find and select interesting time-steps. The second view consists of a node link diagram of a selected time-step which uses the clustering to efficiently define the layout. By using the time-dependant clustering, we ensure the stability of our visualization and preserve user mental map by minimizing node motion, while simultaneously producing an ideal layout for each time step. Also, as the clustering is computed ahead of time, the second view updates in linear time which allows for interactivity even for graphs with upwards of tens of thousands of nodes.",
"Graphs depicted as node-link diagrams are widely used to show relationships between entities. However, nodelink diagrams comprised of a large number of nodes and edges often suffer from visual clutter. The use of edge bundling remedies this and reveals high-level edge patterns. Previous methods require the graph to contain a hierarchy for this, or they construct a control mesh to guide the edge bundling process, which often results in bundles that show considerable variation in curvature along the overall bundle direction. We present a new edge bundling method that uses a self-organizing approach to bundling in which edges are modeled as flexible springs that can attract each other. In contrast to previous methods, no hierarchy is used and no control mesh. The resulting bundled graphs show significant clutter reduction and clearly visible high-level edge patterns. Curvature variation is furthermore minimized, resulting in smooth bundles that are easy to follow. Finally, we present a rendering technique that can be used to emphasize the bundling.",
"The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.",
"Network data frequently arises in a wide variety of fields, and node-link diagrams are a very natural and intuitive representation of such data. In order for a node-link diagram to be effective, the nodes must be arranged well on the screen. While many graph layout algorithms exist for this purpose, they often have limitations such as high computational complexity or node colocation. This paper proposes a new approach to graph layout through the use of space filling curves which is very fast and guarantees that there will be no nodes that are colocated. The resulting layout is also aesthetic and satisfies several criteria for graph layout effectiveness.",
"",
"Many real world graphs have small world characteristics, that is, they have a small diameter compared to the number of nodes and exhibit a local cluster structure. Examples are social networks, software structures, bibliographic references and biological neural nets. Their high connectivity makes both finding a pleasing layout and a suitable clustering hard. In this paper we present a method to create scalable, interactive visualizations of small world graphs, allowing the user to inspect local clusters while maintaining a global overview of the entire structure. The visualization method uses a combination of both semantical and geometrical distortions, while the layout is generated by a spring embedder algorithm using recently developed force model. We use a cross referenced database of 500 artists as a running example",
"The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques cover techniques that had been introduced until 2000 or concentrate only on graph layouts published until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as timevarying graphs. Also, in accordance with ever growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process. We also present main open research challenges in this field.",
"The use of visual images is common in many branches of science. And reviewers often suggest that such images are important for progress in the various fields (Koestler, 1964; Arnheim, 1970; Taylor, 1971; Tukey, 1972; Klovdahl, 1981; Tufte, 1983; Belien and Leenders). The historian Alfred Crosby (1997) has gone much further. He has proposed that visualization is one of only two factors that are responsible for the explosive development of all of modern science. The other is measurement.",
"This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective.",
"From the Publisher: This book is designed to describe fundamental algorithmic techniques for constructing drawings of graphs. Suitable as a book or reference manual, its chapters offer an accurate, accessible reflection of the rapidly expanding field of graph drawing.",
""
]
} |
1411.0052 | 163356031 | Network visualization allows a quick glance at how nodes (or actors) are connected by edges (or ties). A conventional network diagram of "contact tree" maps out a root and branches that represent the structure of nodes and edges, often without further specifying leaves or fruits that would have grown from small branches. By furnishing such a network structure with leaves and fruits, we reveal details about "contacts" in our ContactTrees that underline ties and relationships. Our elegant design employs a bottom-up approach that resembles a recent attempt to understand subjective well-being by means of a series of emotions. Such a bottom-up approach to social-network studies decomposes each tie into a series of interactions or contacts, which help deepen our understanding of the complexity embedded in a network structure. Unlike previous network visualizations, ContactTrees can highlight how relationships form and change based upon interactions among actors, and how relationships and networks vary by contact attributes. Based on a botanical tree metaphor, the design is easy to construct and the resulting tree-like visualization can display many properties at both tie and contact levels, a key ingredient missing from conventional techniques of network visualization. We first demonstrate ContactTrees using a dataset consisting of three waves of 3-month contact diaries over the 2004-2012 period, then compare ContactTrees with alternative tools and discuss how this tool can be applied to other types of datasets. | Navigating through networks from local views has also been addressed. For example, methods presented in @cite_49 and @cite_51 are based on tree layouts allowing users to explore a network from a given node. Van Ham and Perer also proposed a large graph visualization technique @cite_38 based on the computation of degrees of interest, in order to guide the user during the navigation. | {
"cite_N": [
"@cite_38",
"@cite_51",
"@cite_49"
],
"mid": [
"2123322679",
"2132174862",
"2099166093"
],
"abstract": [
"A common goal in graph visualization research is the design of novel techniques for displaying an overview of an entire graph. However, there are many situations where such an overview is not relevant or practical for users, as analyzing the global structure may not be related to the main task of the users that have semi-specific information needs. Furthermore, users accessing large graph databases through an online connection or users running on less powerful (mobile) hardware simply do not have the resources needed to compute these overviews. In this paper, we advocate an interaction model that allows users to remotely browse the immediate context graph around a specific node of interest. We show how Furnas' original degree of interest function can be adapted from trees to graphs and how we can use this metric to extract useful contextual subgraphs, control the complexity of the generated visualization and direct users to interesting datapoints in the context. We demonstrate the effectiveness of our approach with an exploration of a dense online database containing over 3 million legal citations.",
"Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food Webs, ontologies and social networks difficult to understand and interact with. We propose a new interactive visual analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of \"plant a seed and watch it grow.\" It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus",
"We describe a new animation technique for supporting interactive exploration of a graph. We use the well-known radial tree layout method, in which the view is determined by the selection of a focus node. Our main contribution is a method for animating the transition to a new layout when a new focus node is selected. In order to keep the transition easy to follow, the animation linearly interpolates the polar coordinates of the nodes, while enforcing ordering and orientation constraints. We apply this technique to visualizations of social networks and of the Gnutella file-sharing network, and discuss the results from our informal usability tests."
]
} |
1411.0052 | 163356031 | Network visualization allows a quick glance at how nodes (or actors) are connected by edges (or ties). A conventional network diagram of "contact tree" maps out a root and branches that represent the structure of nodes and edges, often without further specifying leaves or fruits that would have grown from small branches. By furnishing such a network structure with leaves and fruits, we reveal details about "contacts" in our ContactTrees that underline ties and relationships. Our elegant design employs a bottom-up approach that resembles a recent attempt to understand subjective well-being by means of a series of emotions. Such a bottom-up approach to social-network studies decomposes each tie into a series of interactions or contacts, which help deepen our understanding of the complexity embedded in a network structure. Unlike previous network visualizations, ContactTrees can highlight how relationships form and change based upon interactions among actors, and how relationships and networks vary by contact attributes. Based on a botanical tree metaphor, the design is easy to construct and the resulting tree-like visualization can display many properties at both tie and contact levels, a key ingredient missing from conventional techniques of network visualization. We first demonstrate ContactTrees using a dataset consisting of three waves of 3-month contact diaries over the 2004-2012 period, then compare ContactTrees with alternative tools and discuss how this tool can be applied to other types of datasets. | More specific techniques have also been proposed. Jeffrey Heer and Danah Boyd have designed and implemented a graph visualization tool for online social networks @cite_27 . Baur et al. propose a software tool that includes graph visualizations and many network analysis metrics and techniques @cite_41 . Fisher and Dourish @cite_33 developed applications based on collaboration network visualization to help coordinate and manage these collaborations.
@cite_44 is an interface for visualizing groups of one's personal contacts. A similar approach has been proposed in @cite_7 . In this tool, users can navigate through overlapping groups of their friends. The visualization of more structured relationships like genealogies also has been addressed in @cite_1 @cite_21 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_41",
"@cite_21",
"@cite_1",
"@cite_44",
"@cite_27"
],
"mid": [
"2141466899",
"2137382277",
"1982789685",
"",
"2116904179",
"1487602974",
"2071667608",
"1546650522"
],
"abstract": [
"Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.",
"Everyday work frequently involves coordinating and collaborating with others, but the structure of collaboration is largely invisible to conventional desktop applications. We are exploring ways to support everyday collaboration by allowing applications access to the social, organizational, and temporal settings within which work is conducted. In this paper, we present two generations of systems supporting everyday collaboration, focusing on ways to recover and represent the temporal and social structures of online activity.",
"As people accumulate hundreds of \"friends\" in social media, a flat list of connections becomes unmanageable. Interfaces agnostic to social structure hinder the nuanced sharing of personal data such as photos, status updates, news feeds, and comments. To address this problem, we propose social topologies, a set of potentially overlapping and nested social groups, that represent the structure and content of a person's social network as a first-class object. We contribute an algorithm for creating social topologies by mining communication history and identifying likely groups based on co-occurrence patterns. We use our algorithm to populate a browser interface that supports creation and editing of social groups via direct manipulation. A user study confirms that our approach models subjects' social topologies well, and that our interface enables intuitive browsing and management of a personal social landscape.",
"",
"GeneaQuilts is a new visualization technique for representing large genealogies of up to several thousand individuals. The visualization takes the form of a diagonally-filled matrix, where rows are individuals and columns are nuclear families. After identifying the major tasks performed in genealogical research and the limits of current software, we present an interactive genealogy exploration system based on GeneaQuilts. The system includes an overview, a timeline, search and filtering components, and a new interaction technique called Bring & Slide that allows fluid navigation in very large genealogies. We report on preliminary feedback from domain experts and show how our system supports a number of their tasks.",
"The general problem of visualizing \"family trees\", or genealogical graphs, in 2D, is considered. A graph theoretic analysis is given, which identifies why genealogical graphs can be difficult to draw. This motivates some novel graphical representations, including one based on a dual tree, a subgraph formed by the union of two trees. Dual trees can be drawn in various styles, including an indented outline style, and allow users to browse general multitrees in addition to genealogical graphs, by transitioning between different dual tree views. A software prototype for such browsing is described, that supports smoothly animated transitions, automatic camera framing, rotation of subtrees, and a novel interaction technique for expanding or collapsing subtrees to any depth with a single mouse drag",
"Modern work is a highly social process, offering many cues for people to organize communication and access information. Shared physical workplaces provide natural support for tasks such as (a) social reminding about communication commitments and keeping track of collaborators and friends, and (b) social data mining of local expertise for advice and information. However, many people now collaborate remotely using tools such as email and voicemail. Our field studies show that these tools do not provide the social cues needed for group work processes. In part, this is because the tools are organized around messages, rather than people. In response to this problem, we created ContactMap, a system that makes people the primary unit of interaction. ContactMap provides a structured social desktop representation of users' important contacts that directly supports social reminding and social data mining. We conducted an empirical evaluation of ContactMap, comparing it with traditional email systems, on tasks suggested by our fieldwork. Users performed better with ContactMap and preferred ContactMap for the majority of these tasks. We discuss future enhancements of our system and the implications of these results for future communication interfaces and for theories of mediated communication.",
"Recent years have witnessed the dramatic popularity of online social networking services, in which millions of members publicly articulate mutual \"friendship\" relations. Guided by ethnographic research of these online communities, we have designed and implemented a visualization system for playful end-user exploration and navigation of large scale online social networks. Our design builds upon familiar node link network layouts to contribute customized techniques for exploring connectivity in large graph structures, supporting visual search and analysis, and automatically identifying and visualizing community structures. Both public installation and controlled studies of the system provide evidence of the system's usability, capacity for facilitating discovery, and potential for fun and engaged social activity"
]
} |
1411.0052 | 163356031 | Network visualization allows a quick glance at how nodes (or actors) are connected by edges (or ties). A conventional network diagram of "contact tree" maps out a root and branches that represent the structure of nodes and edges, often without further specifying leaves or fruits that would have grown from small branches. By furnishing such a network structure with leaves and fruits, we reveal details about "contacts" in our ContactTrees that underline ties and relationships. Our elegant design employs a bottom-up approach that resembles a recent attempt to understand subjective well-being by means of a series of emotions. Such a bottom-up approach to social-network studies decomposes each tie into a series of interactions or contacts, which help deepen our understanding of the complexity embedded in a network structure. Unlike previous network visualizations, ContactTrees can highlight how relationships form and change based upon interactions among actors, and how relationships and networks vary by contact attributes. Based on a botanical tree metaphor, the design is easy to construct and the resulting tree-like visualization can display many properties at both tie and contact levels, a key ingredient missing from conventional techniques of network visualization. We first demonstrate ContactTrees using a dataset consisting of three waves of 3-month contact diaries over the 2004-2012 period, then compare ContactTrees with alternative tools and discuss how this tool can be applied to other types of datasets. | This technique is inspiring but not sufficient for our purpose. First, not only do we focus on how network members are directly connected to a focal person, we also aim to map various properties of these members. Second, in addition to such relationships and properties, we want to further distinguish the attributes of social interactions between each network member and the focal person, contact by contact. 
There are three main approaches for the visualization of trees. According to the paradigm of node-link diagrams, persons are represented by small shapes and relations by lines. A good introduction to the techniques is given by two books on graph drawing @cite_25 @cite_0 and the website http://treevis.net @cite_43 . Persons can also be represented by areas and relations by the positioning of these areas. This is the case of @cite_47 , @cite_23 and @cite_48 . The third approach is to visualize tree elements as nested areas. Two kinds of methods have been proposed to create such maps: (1) dividing the plane recursively (a detailed overview by Ben Shneiderman, updated by Catherine Plaisant, can be found at @cite_8 ); (2) positioning leaves along space-filling curves @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_48",
"@cite_0",
"@cite_43",
"@cite_23",
"@cite_47",
"@cite_25"
],
"mid": [
"1989683591",
"70321236",
"2106441507",
"",
"199486525",
"190320872",
"2135708569",
"2102664288"
],
"abstract": [
"The emergence of very large hierarchies that result from the increase in available data raises many problems of visualization and navigation. On data sets of such scale, classical graph drawing methods do not take advantage of certain human cognitive skills such as shape recognition. These cognitive skills could make it easier to remember the global structure of the data. In this paper, we propose a method that is based on the use of nested irregular shapes. We name it GosperMap as we rely on the use of a Gosper Curve to generate these shapes. By employing human perception mechanisms that were developed by handling, for example, cartographic maps, this technique facilitates the visualization and navigation of a hierarchy. An algorithm has been designed to preserve region containment according to the hierarchy and to set the leaves' sizes proportionally to a property, in such a way that the size of nonleaf regions corresponds to the sum of their children's sizes. Moreover, the input ordering of the hierarchy's nodes is preserved, i.e., the areas that represent two consecutive children of a node in the hierarchy are adjacent to one another. This property is especially useful because it guarantees some stability in our algorithm. We illustrate our technique by providing visualization examples of the repartition of tax money in the US over time. Furthermore, we validate the use of the GosperMap in a professional documentation context and show the stability and ease of memorization for this type of map.",
"During 1990, in response to the common problem of a filled hard disk, I became obsessed with the idea of producing a compact visualization of directory tree structures. Since the 80 Megabyte hard disk in the HCIL was shared by 14 users it was difficult to determine how and where space was used. Finding large files that could be deleted, or even determining which users consumed the largest shares of disk space were difficult tasks.",
"Radial, space-filling visualizations can be useful for depicting information hierarchies, but they suffer from one major problem. As the hierarchy grows in size, many items become small, peripheral slices that are difficult to distinguish. We have developed three visualization interaction techniques that provide flexible browsing of the display. The techniques allow viewers to examine the small items in detail while providing context within the entire information hierarchy. Additionally, smooth transitions between views help users maintain orientation within the complete information space.",
"",
"",
"This paper presents work in progress on a new technique for visu- alising and manipulating large hierarchies. The information slices approach compactly visualises hierarchical structures using a series of semi-circular discs. The technique is described in the context of our early experience with a prototype file system visualiser based on information slices. Over the last few years, the emerging field of information visuali- sation has resulted in a numerous techniques for helping visualise and make sense of large information spaces (10). Among these are several techniques for visualising and interacting with large hierar- chies which go beyond the traditional approach taken in 2d scrolling browsers such as the Windows Explorer. Treemaps (6, 9) are space-filling visualisations of hierarchies based on successive horizontal and vertical subdivision of screen rectangles. The area of each rectangle is proportional to some at- tribute of the underlying hierarchy such as the (total) size of each subtree. Xdu (4) is a utility for the X window system which displays a graphical disk usage for Unix file systems. Rectangles are stacked from left to right as the directory tree is descended. The current di- rectory is represented by the leftmost rectangle, which is the entire height of the window. Subdirectories are represented by neighbour- ing rectangles in the next column, whose height are proportional to the size of each subdirectory. The hyperbolic browser (7) uses a focus and context technique based on hyperbolic geometry. A hierarchy is laid out uniformly on the hyperbolic plane and then mapped to the unit disc for display on screen. Nodes in the centre of the disc are largest and nodes are assigned progressively less space towards the perimeter of the disc. Cheops (3) is based on multiple re-use of overlaid triangles in the display. Working top-down, the selection of a node (triangle) at a particular level designates that node's children are to be represented",
"Abstract An icicle plot is a method for presenting a hierarchical clustering. Compared with other methods of presentation, it is far easier in an icicle plot to read off which objects belong to which clusters, and which objects join or drop out from a cluster as we move up and down the levels of the hierarchy, though these benefits only appear when enough objects are being clustered. Icicle plots are described, and their benefits are illustrated using a clustering of 48 objects.",
"From the Publisher: This book is designed to describe fundamental algorithmic techniques for constructing drawings of graphs. Suitable as a book or reference manual, its chapters offer an accurate, accessible reflection of the rapidly expanding field of graph drawing."
]
} |
1411.0052 | 163356031 | Network visualization allows a quick glance at how nodes (or actors) are connected by edges (or ties). A conventional network diagram of "contact tree" maps out a root and branches that represent the structure of nodes and edges, often without further specifying leaves or fruits that would have grown from small branches. By furnishing such a network structure with leaves and fruits, we reveal details about "contacts" in our ContactTrees that underline ties and relationships. Our elegant design employs a bottom-up approach that resembles a recent attempt to understand subjective well-being by means of a series of emotions. Such a bottom-up approach to social-network studies decomposes each tie into a series of interactions or contacts, which help deepen our understanding of the complexity embedded in a network structure. Unlike previous network visualizations, ContactTrees can highlight how relationships form and change based upon interactions among actors, and how relationships and networks vary by contact attributes. Based on a botanical tree metaphor, the design is easy to construct and the resulting tree-like visualization can display many properties at both tie and contact levels, a key ingredient missing from conventional techniques of network visualization. We first demonstrate ContactTrees using a dataset consisting of three waves of 3-month contact diaries over the 2004-2012 period, then compare ContactTrees with alternative tools and discuss how this tool can be applied to other types of datasets. | Following the seminal papers of Ulam @cite_39 and Honda @cite_26 , computer modelling of trees has been an active area of research. The Previous Work section of @cite_24 gives a good overview of previous results. These methods attempt to characterize the way real-world botanical trees grow. Therefore, they do not reflect a structure defined a priori.
Moreover, most of them incorporate random settings, which make them incompatible with our purpose: Our visualization must incorporate a pre-defined structure and be based on a deterministic algorithm to facilitate comparisons. | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_39"
],
"mid": [
"2010403194",
"1963897690",
"124245106"
],
"abstract": [
"We present a method for generating realistic models of temperate-climate trees and shrubs. This method is based on the biological hypothesis that the form of a developing tree emerges from a self-organizing process dominated by the competition of buds and branches for light or space, and regulated by internal signaling mechanisms. Simulations of this process robustly generate a wide range of realistic trees and bushes. The generated forms can be controlled with a variety of interactive techniques, including procedural brushes, sketching, and editing operations such as pruning and bending of branches. We illustrate the usefulness and versatility of the proposed method with diverse tree models, forest scenes, animations of tree development, and examples of combined interactive-procedural tree modeling.",
"Abstract An attempt was made to describe the multifarious form of erect trees by a few parameters. Trees were approximated by the tree-like body which was made up of repeated bifurcations of branches. Three-dimensional positions of the end-points of the branches could be calculated using suitable but tentative solid geometrical assumptions which included some parameters: branching angle and relative ratio of the branch lengths. The branching angle and the relative ratio of the branch lengths were well demonstrated to have great effects upon the whole form of the tree-like body. The whole form of actual trees, therefore, was speculated to be affected also by their branching angle and relative ratio of their branch lengths. This attempt had a relationship to the problems of the morphogenetic process in living organisms and, in general, of the pattern-generation by generation rules.",
""
]
} |
1411.0557 | 2951408080 | Community detection has become an extremely active area of research in recent years, with researchers proposing various new metrics and algorithms to address the problem. Recently, the Weighted Community Clustering (WCC) metric was proposed as a novel way to judge the quality of a community partitioning based on the distribution of triangles in the graph, and was demonstrated to yield superior results over other commonly used metrics like modularity. The same authors later presented a parallel algorithm for optimizing WCC on large graphs. In this paper, we propose a new distributed, vertex-centric algorithm for community detection using the WCC metric. Results are presented that demonstrate the algorithm's performance and scalability on up to 32 worker machines and real graphs of up to 1.8 billion vertices. The algorithm scales best with the largest graphs, and to our knowledge, it is the first distributed algorithm for optimizing the WCC metric. | There also exist several proposals based on random walks. The intuition is that in a random walk, the probability of remaining inside of a community is higher than going outside, due to the higher density of internal edges. This strategy is the main idea exploited in Walktrap @cite_15 . Another algorithm based on random walks that is highly adopted in the literature is Infomap @cite_20 , which searches for a codification for describing random walks based on communities. The codification that requires the least amount of memory (attains the highest compression rates) is selected. According to the comparison performed by @cite_24 , Infomap stands as one of the best community detection algorithms in the literature. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_20"
],
"mid": [
"1995996823",
"2033590892",
"2164998314"
],
"abstract": [
"Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems.",
"In a representative embodiment of the invention described herein, a well logging system for investigating subsurface formations is controlled by a general purpose computer programmed for real-time operation. The system is cooperatively arranged to provide for all aspects of a well logging operation, such as data acquisition and processing, tool control, information or data storage, and data presentation as a well logging tool is moved through a wellbore. The computer controlling the system is programmed to provide for data acquisition and tool control commands in direct response to asynchronous real-time external events. Such real-time external events may occur, for example, as a result of movement of the logging tool over a selected depth interval, or in response to requests or commands directed to the system by the well logging engineer by means of keyboard input.",
"To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences."
]
} |
1411.0557 | 2951408080 | Community detection has become an extremely active area of research in recent years, with researchers proposing various new metrics and algorithms to address the problem. Recently, the Weighted Community Clustering (WCC) metric was proposed as a novel way to judge the quality of a community partitioning based on the distribution of triangles in the graph, and was demonstrated to yield superior results over other commonly used metrics like modularity. The same authors later presented a parallel algorithm for optimizing WCC on large graphs. In this paper, we propose a new distributed, vertex-centric algorithm for community detection using the WCC metric. Results are presented that demonstrate the algorithm's performance and scalability on up to 32 worker machines and real graphs of up to 1.8 billion vertices. The algorithm scales best with the largest graphs, and to our knowledge, it is the first distributed algorithm for optimizing the WCC metric. | Most of the work regarding the exploitation of parallelism for community detection has the form of multithreaded algorithms for SMP machines. In @cite_14 , the authors propose a parallel version of the Louvain method, which achieves a speedup of 16x using 32 threads. Similarly, in @cite_11 the authors propose an agglomerative modularity optimization algorithm for the Cray XMT and Intel-based machines, capable of analyzing a graph with 100 million nodes and 3.3 billion edges in 500 seconds. Finally, in @cite_1 the authors propose a parallel version of Infomap, called RelaxMap, which relaxes concurrency assumptions of the original method, achieving a parallel efficiency of about 70%. There has been little work regarding distributed algorithms for community detection. One family of algorithms that fit well into the vertex-centric model are those based on label propagation @cite_19 @cite_18 .
In label propagation, each vertex is initialized with a unique label, and then, they define rules that simulate the spread of these labels in the network similarly to infections. Label propagation has the advantage of being asymptotically efficient, but no theoretical guarantees are given regarding the quality of the results, especially in networks where communities are not well-defined. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_1",
"@cite_19",
"@cite_11"
],
"mid": [
"1774824195",
"2949478393",
"",
"2132202037",
"1981871860"
],
"abstract": [
"Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy . The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and real-world networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.",
"Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real world graphs derived from multiple application domains (e.g., internet, citation, biological). Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in comparable number or fewer iterations, while providing absolute speedups of up to 16x using 32 threads.",
"",
"Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.",
"The volume of existing graph-structured data requires improved parallel tools and algorithms. Finding communities, smaller sub graphs densely connected within the sub graph than to the rest of the graph, plays a role both in developing new parallel algorithms as well as opening smaller portions of the data to current analysis tools. We improve performance of our parallel community detection algorithm by 20 on the massively multithreaded Cray XMT, evaluate its performance on the next-generation Cray XMT2, and extend its reach to Intel-based platforms with OpenMP. To our knowledge, not only is this the first massively parallel community detection algorithm but also the only such algorithm that achieves excellent performance and good parallel scalability across all these platforms. Our implementation analyzes a moderate sized graph with 105 million vertices and 3.3 billion edges in around 500 seconds on a four processor, 80-logical-core Intel-based system and 1100 seconds on a 64-processor Cray XMT2."
]
} |
1411.0275 | 1582814713 | In this paper, we examine the evolution of the impact of older scholarly articles. We attempt to answer four questions. First, how often are older articles cited and how has this changed over time. Second, how does the impact of older articles vary across different research fields. Third, is the change in the impact of older articles accelerating or slowing down. Fourth, are these trends different for much older articles. To answer these questions, we studied citations from articles published in 1990-2013. We computed the fraction of citations to older articles from articles published each year as the measure of impact. We considered articles that were published at least 10 years before the citing article as older articles. We computed these numbers for 261 subject categories and 9 broad areas of research. Finally, we repeated the computation for two other definitions of older articles, 15 years and older and 20 years and older. There are three conclusions from our study. First, the impact of older articles has grown substantially over 1990-2013. In 2013, 36 of citations were to articles that are at least 10 years old; this fraction has grown 28 since 1990. The fraction of older citations increased over 1990-2013 for 7 out of 9 broad areas and 231 out of 261 subject categories. Second, the increase over the second half (2002-2013) was double the increase in the first half (1990-2001). Third, the trend of a growing impact of older articles also holds for even older articles. In 2013, 21 of citations were to articles >= 15 years old with an increase of 30 since 1990 and 13 of citations were to articles >= 20 years old with an increase of 36 . Now that finding and reading relevant older articles is about as easy as finding and reading recently published articles, significant advances aren't getting lost on the shelves and are influencing work worldwide for years after. 
| Exploring the notion of obsolescence from a usage perspective, rather than a citation perspective, Sandison @cite_3 found that, after an initial period, the usage of older issues of Physics journals at MIT didn't decrease with age. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2023507139"
],
"abstract": [
"Chen's data for the raw frequency of use of 138 physics journals in the science library at M I T are re‐examined and converted to densities of use‐per‐metre of shelf. Other units of size for obtaining densities, and their measurement, are discussed. There is no evidence for synchronous obsolescence in the 1955 to 1968 volumes of these journals: instead there is some statistically significant evidence of greater density of use with greater age. Similar evidence elsewhere is cited. The ranking order for heaviness of use is also radically altered by converting raw frequencies to densities of use. It is suggested that, for comparing the relative values of different journals, or age groups, in library use or citation studies, analyses of raw frequencies are valueless, and indeed potentially dangerously misleading, until they are converted to allow for the numbers of available items in each group examined."
]
} |
1411.0275 | 1582814713 | In this paper, we examine the evolution of the impact of older scholarly articles. We attempt to answer four questions. First, how often are older articles cited and how has this changed over time. Second, how does the impact of older articles vary across different research fields. Third, is the change in the impact of older articles accelerating or slowing down. Fourth, are these trends different for much older articles. To answer these questions, we studied citations from articles published in 1990-2013. We computed the fraction of citations to older articles from articles published each year as the measure of impact. We considered articles that were published at least 10 years before the citing article as older articles. We computed these numbers for 261 subject categories and 9 broad areas of research. Finally, we repeated the computation for two other definitions of older articles, 15 years and older and 20 years and older. There are three conclusions from our study. First, the impact of older articles has grown substantially over 1990-2013. In 2013, 36% of citations were to articles that are at least 10 years old; this fraction has grown 28% since 1990. The fraction of older citations increased over 1990-2013 for 7 out of 9 broad areas and 231 out of 261 subject categories. Second, the increase over the second half (2002-2013) was double the increase in the first half (1990-2001). Third, the trend of a growing impact of older articles also holds for even older articles. In 2013, 21% of citations were to articles >= 15 years old with an increase of 30% since 1990 and 13% of citations were to articles >= 20 years old with an increase of 36%. Now that finding and reading relevant older articles is about as easy as finding and reading recently published articles, significant advances aren't getting lost on the shelves and are influencing work worldwide for years after. 
| In an early paper exploring the potential impact of online access on scholarly communication, Odlyzko @cite_2 reported that, after an initial period, frequency of access to online articles from several collections did not vary with the age of the articles. Based on this, he speculated that easy online access to digitized collections, as they become available, would lead to much wider usage of older materials. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2162763177"
],
"abstract": [
"Traditional journals, even those available electronically, are changing slowly. However, there is rapid evolution in scholarly communication. Usage is moving to electronic formats. In some areas, it appears that electronic versions of papers are being read about as often as the printed journal versions. Although there are serious difficulties in comparing figures from different media, the growth rates in usage of electronic scholarly information are sufficiently high that if they continue for a few years, there will be no doubt that print versions will be eclipsed. Further, much of the electronic information that is accessed is outside the formal scholarly publication process. There is also vigorous growth in forms of electronic communication that take advantage of the unique capabilities of the web, and which simply do not fit into the traditional journal publishing format. This paper presents some statistics on usage of print and electronic information. It also discusses some preliminary evidence about the changing patterns of usage. It appears that much of the online usage comes from new readers (esoteric research papers assigned in undergraduate classes, for example) and often from places that do not have access to print journals. Also, the reactions to even slight barriers to usage suggest that even high-quality scholarly papers are not irreplaceable. Readers are faced with a ‘river of knowledge’ that allows them to select among a multitude of sources, and to find near substitutes when necessary. To stay relevant, scholars, publishers and librarians will have to make even greater efforts to make their material easily accessible."
]
} |
1411.0275 | 1582814713 | In this paper, we examine the evolution of the impact of older scholarly articles. We attempt to answer four questions. First, how often are older articles cited and how has this changed over time. Second, how does the impact of older articles vary across different research fields. Third, is the change in the impact of older articles accelerating or slowing down. Fourth, are these trends different for much older articles. To answer these questions, we studied citations from articles published in 1990-2013. We computed the fraction of citations to older articles from articles published each year as the measure of impact. We considered articles that were published at least 10 years before the citing article as older articles. We computed these numbers for 261 subject categories and 9 broad areas of research. Finally, we repeated the computation for two other definitions of older articles, 15 years and older and 20 years and older. There are three conclusions from our study. First, the impact of older articles has grown substantially over 1990-2013. In 2013, 36% of citations were to articles that are at least 10 years old; this fraction has grown 28% since 1990. The fraction of older citations increased over 1990-2013 for 7 out of 9 broad areas and 231 out of 261 subject categories. Second, the increase over the second half (2002-2013) was double the increase in the first half (1990-2001). Third, the trend of a growing impact of older articles also holds for even older articles. In 2013, 21% of citations were to articles >= 15 years old with an increase of 30% since 1990 and 13% of citations were to articles >= 20 years old with an increase of 36%. Now that finding and reading relevant older articles is about as easy as finding and reading recently published articles, significant advances aren't getting lost on the shelves and are influencing work worldwide for years after. 
| More recently, Evans @cite_8 studied the impact of online availability of journal articles on the age of citations. Based on an analysis of citation indices from Thomson Reuters and a database of online availability of journal articles from Information Today Inc., he concluded that as more journal issues came online, the articles referenced tended to be more recent. He speculated that the shift from browsing print collections to searching online collections facilitated avoidance of older literature. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2125660293"
],
"abstract": [
"Online journals promise to serve more information to more dispersed audiences and are more efficiently searched and recalled. But because they are used differently than print—scientists and scholars tend to search electronically and follow hyperlinks rather than browse or peruse—electronically available journals may portend an ironic change for science. Using a database of 34 million articles, their citations (1945 to 2005), and online availability (1998 to 2005), I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles. The forced browsing of print archives may have stretched scientists and scholars to anchor findings deeply into past and present scholarship. Searching online is more efficient and following hyperlinks quickly puts researchers in touch with prevailing opinion, but this may accelerate consensus and narrow the range of findings and ideas built upon."
]
} |
1411.0275 | 1582814713 | In this paper, we examine the evolution of the impact of older scholarly articles. We attempt to answer four questions. First, how often are older articles cited and how has this changed over time. Second, how does the impact of older articles vary across different research fields. Third, is the change in the impact of older articles accelerating or slowing down. Fourth, are these trends different for much older articles. To answer these questions, we studied citations from articles published in 1990-2013. We computed the fraction of citations to older articles from articles published each year as the measure of impact. We considered articles that were published at least 10 years before the citing article as older articles. We computed these numbers for 261 subject categories and 9 broad areas of research. Finally, we repeated the computation for two other definitions of older articles, 15 years and older and 20 years and older. There are three conclusions from our study. First, the impact of older articles has grown substantially over 1990-2013. In 2013, 36% of citations were to articles that are at least 10 years old; this fraction has grown 28% since 1990. The fraction of older citations increased over 1990-2013 for 7 out of 9 broad areas and 231 out of 261 subject categories. Second, the increase over the second half (2002-2013) was double the increase in the first half (1990-2001). Third, the trend of a growing impact of older articles also holds for even older articles. In 2013, 21% of citations were to articles >= 15 years old with an increase of 30% since 1990 and 13% of citations were to articles >= 20 years old with an increase of 36%. Now that finding and reading relevant older articles is about as easy as finding and reading recently published articles, significant advances aren't getting lost on the shelves and are influencing work worldwide for years after. 
| These results are also contradicted by two other studies that were published around the same time as @cite_8 and that took different analysis approaches. Huntington et al. @cite_5 studied article usage patterns based on web access logs for OhioLink's journal collections. They found that there were two stages in the access history of scholarly articles. The first stage spanned the first 8 to 9 years from the publication date. Usage often declined over this period, the decline being sharpest in the first 2 to 3 years (by a third over the first year and by about 60% overall); the second stage was characterized by a stable level of usage. Analyzing HTTP Referer headers for journal article requests, they found that users arriving from search services were far more likely to view older articles than users arriving from a browse environment. They speculated that this difference occurred due to the relevance ranking approach used by web search services. | {
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"1970614016",
"2125660293"
],
"abstract": [
"The article presents the early findings of an exploratory deep log analysis of journal usage on OhioLINK, conducted as part of the MaxData project, funded by the U.S. Institute of Museum and Library Services. OhioLINK, the original “Big Deal,” provides a single digital platform of nearly 6,000 full-text journals for more than 600,000 people; for the purposes of the analysis, the raw logs were obtained from OhioLINK for the period June 2004 to December 2004. During this period approximately 1,215,000 items were viewed on campus in October 2004 and 1,894,000 items viewed off campus between June and December 2004. This article provides an analysis of the age of material that users consulted. From a methodological point of view OhioLINK offered an attractive platform to conduct age of publication usage studies because it is one of the oldest e-journal libraries and thus offered a relatively long archive and stable platform to conduct the studies. The project sought to determine whether the subject, the search approach adopted, and the type of journal item viewed (contents page, abstract, full-text article, etc.) was a factor in regard to the age of articles used. © 2006 Wiley Periodicals, Inc.",
"Online journals promise to serve more information to more dispersed audiences and are more efficiently searched and recalled. But because they are used differently than print—scientists and scholars tend to search electronically and follow hyperlinks rather than browse or peruse—electronically available journals may portend an ironic change for science. Using a database of 34 million articles, their citations (1945 to 2005), and online availability (1998 to 2005), I show that as more journal issues came online, the articles referenced tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles. The forced browsing of print archives may have stretched scientists and scholars to anchor findings deeply into past and present scholarship. Searching online is more efficient and following hyperlinks quickly puts researchers in touch with prevailing opinion, but this may accelerate consensus and narrow the range of findings and ideas built upon."
]
} |
1410.8844 | 2202095630 | Many scientific-software projects test their codes inadequately, or not at all. Despite its well-known benefits, adopting routine testing is often not easy. Development teams may have doubts about establishing effective test procedures, writing test software, or handling the ever-growing complexity of test cases. They may need to run (and test) on restrictive HPC platforms. They almost certainly face time and budget pressures that can keep testing languishing near the bottom of their to-do lists. This paper presents DDTS, a framework for building test suite applications, designed to fit scientific-software projects' requirements. DDTS aims to simplify introduction of rigorous testing, and to ease growing pains as needs mature. It decomposes the testing problem into practical, intuitive phases; makes configuration and extension easy; is portable and suitable to HPC platforms; and exploits parallelism. DDTS is currently used for automated regression and developer pre-commit testing for several scientific-software projects with disparate testing requirements. | DDTS follows a system-testing approach. Unlike unit testing, which focuses on software's basic units -- functions and subroutines -- system testing is concerned with the behavior of the program-under-test as a whole. While unit testing is a powerful and desirable technique, it can in practice be difficult to apply to legacy science codes @cite_3 (e.g. due to programming habits common in languages like Fortran, such as long subroutines that mutate global data), and in cases where it is difficult to establish ``passing'' criteria for floating-point implementations of some complex algorithms. The detailed knowledge of a routine required to write an effective unit test may only be available to domain experts, limiting collaboration with other developers (e.g. software engineers) in building up test suites. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2079139063"
],
"abstract": [
"Over the past 30 years, most climate models have grown from relatively simple representations of a few atmospheric processes to complex multidisciplinary systems. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Verification processes for model implementations rely almost exclusively on some combination of detailed analyses of output from full climate simulations and system-level regression tests. Besides being costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects they can detect, isolate, and diagnose. Mitigating these weaknesses of coarse-grained testing with finer-grained unit tests has been perceived as cumbersome and counterproductive. Recent advances in commercial software tools and methodologies have led to a renaissance of systematic fine-grained testing. This opens new possibilities for testing climate-modeling-software methodologies."
]
} |
1410.8776 | 2949263540 | In a smart grid environment, we study coalition formation of prosumers that aim at entering the energy market. It is paramount for the grid operation that the energy producers are able to sustain the grid demand in terms of stability and minimum production requirement. We design an algorithm that seeks to form coalitions that will meet both of these requirements: a minimum energy level for the coalitions and a steady production level, which leads to finding uncorrelated sources of energy to form a coalition. We propose an algorithm that uses graph tools such as correlation graphs or clique percolation to form coalitions that meet such complex constraints. We validate the algorithm against a random procedure and show that it not only performs better in terms of social welfare for the power grid, but also that it is more robust against unforeseen production variations due to changing weather conditions, for instance. | Traditionally, forming coalitions in a pool of agents can be done either in a centralized way, where a single central unit is responsible for all the computations, or in a distributed way, where agents have only local knowledge and take actions accordingly. It is common to represent the situation and assess the stability of the solution using game-theoretic tools. Some papers @cite_9 @cite_6 then focus on finding an optimal coalition structure, given a pool of autonomous, self-interested agents, using distributed merge-and-split algorithms. | {
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"2140666908",
"2072248223"
],
"abstract": [
"Cooperation in wireless networks allows single antenna devices to improve their performance by forming virtual multiple antenna systems. However, performing a distributed and fair cooperation constitutes a major challenge. In this work, we model cooperation in wireless networks through a game theoretical algorithm derived from a novel concept from coalitional game theory. A simple and distributed merge-and-split algorithm is constructed to form coalition groups among single antenna devices and to allow them to maximize their utilities in terms of rate while accounting for the cost of cooperation in terms of power. The proposed algorithm enables the users to self-organize into independent disjoint coalitions and the resulting clustered network structure is characterized through novel stability notions. In addition, we prove the convergence of the algorithm and we investigate how the network structure changes when different fairness criteria are chosen for apportioning the coalition worth among its members. Simulation results show that the proposed algorithm can improve the individual user's payoff up to 40.42 as well as efficiently cope with the mobility of the distributed users.",
"The power consumption schemes of consumers is an important issue in energy management process in smart grid. The non-cooperative methods which are always considered cannot achieve the maximized performance for consumers and networks. In this paper, we propose a cooperative power consumption scheme for consumers based on coalition formation game, which is suitable for the general electricity markets in smart grid. The advantage is that it can utilize the cooperative relationships among each other for payment savings and meanwhile take the social welfare into consideration. It is realized according to the pricing model used by power provider, the welfare function of the consumer coalitions, as well as the coalition formation algorithm based on the modified Pareto order which are proposed in this paper. Simulation results show that a stable consumers' partition can be formed in the concerned area and the higher utility for consumers and social welfare can be obtained comparing with the non-cooperative methods."
]
} |
1410.8776 | 2949263540 | In a smart grid environment, we study coalition formation of prosumers that aim at entering the energy market. It is paramount for the grid operation that the energy producers are able to sustain the grid demand in terms of stability and minimum production requirement. We design an algorithm that seeks to form coalitions that will meet both of these requirements: a minimum energy level for the coalitions and a steady production level, which leads to finding uncorrelated sources of energy to form a coalition. We propose an algorithm that uses graph tools such as correlation graphs or clique percolation to form coalitions that meet such complex constraints. We validate the algorithm against a random procedure and show that it not only performs better in terms of social welfare for the power grid, but also that it is more robust against unforeseen production variations due to changing weather conditions, for instance. | On a narrower scale, @cite_7 study, in a game-theoretic setting, the formation of virtual power plants (VPPs) composed of multiple self-interested DERs. Two requirements for the formation of virtual power plants are considered: the reliability of supply and the minimization of the number of entities the grid has to deal with. From this, @cite_7 builds a pricing mechanism that encourages VPPs to report true estimates of their aggregated production and penalizes prediction errors. A redistribution scheme from the VPP to the DERs is also constructed such that the payoff allocation lies in the core of the game, meaning that no DER has an incentive to leave the coalition. | {
"cite_N": [
"@cite_7"
],
"mid": [
"156819711"
],
"abstract": [
"The creation of Virtual Power Plants (VPPs) has been suggested in recent years as the means for achieving the cost-efficient integration of the many distributed energy resources (DERs) that are starting to emerge in the electricity network. In this work, we contribute to the development of VPPs by offering a game-theoretic perspective to the problem. Specifically, we design cooperatives (or \"cooperative VPPs\"---CVPPs) of rational autonomous DER-agents representing small-to-medium size renewable electricity producers, which coalesce to profitably sell their energy to the electricity grid. By so doing, we help to counter the fact that individual DERs are often excluded from the wholesale energy market due to their perceived inefficiency and unreliability. We discuss the issues surrounding the emergence of such cooperatives, and propose a pricing mechanism with certain desirable properties. Specifically, our mechanism guarantees that CVPPs have the incentive to truthfully report to the grid accurate estimates of their electricity production, and that larger rather than smaller CVPPs form; this promotes CVPP efficiency and reliability. In addition, we propose a scheme to allocate payments within the cooperative, and show that, given this scheme and the pricing mechanism, the allocation is in the core and, as such, no subset of members has a financial incentive to break away from the CVPP. Moreover, we develop an analytical tool for quantifying the uncertainty about DER production estimates, and distinguishing among different types of errors regarding such estimates. We then utilize this tool to devise protocols to manage CVPP membership. Finally, we demonstrate these ideas through a simulation that uses real-world data."
]
} |
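As an illustration of the kind of incentive scheme discussed in the row above, consider a toy payment rule that rewards delivered energy and penalizes the gap between the reported estimate and actual production. This is a simplified sketch, not the exact mechanism of the cited work; `price` and `penalty` are made-up parameters.

```python
def vpp_payment(reported_kwh, produced_kwh, price=1.0, penalty=0.5):
    """Pay for delivered energy minus a penalty proportional to the
    absolute prediction error, so systematic over- or under-reporting
    reduces the VPP's payoff (illustrative rule, not the paper's)."""
    return price * produced_kwh - penalty * abs(reported_kwh - produced_kwh)

accurate = vpp_payment(100.0, 100.0)   # no error: full payment of 100.0
inflated = vpp_payment(120.0, 100.0)   # over-reporting by 20 kWh costs 10.0
```

Under a symmetric error penalty like this, inflating or deflating the estimate can only lower the payment for the same delivered energy, which is the intuition behind truthful-reporting mechanisms.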
1410.8776 | 2949263540 | In a smart grid environment, we study coalition formation of prosumers that aim at entering the energy market. It is paramount for the grid operation that the energy producers are able to sustain the grid demand in terms of stability and minimum production requirement. We design an algorithm that seeks to form coalitions that will meet both of these requirements: a minimum energy level for the coalitions and a steady production level, which leads to finding uncorrelated sources of energy to form a coalition. We propose an algorithm that uses graph tools such as correlation graphs or clique percolation to form coalitions that meet such complex constraints. We validate the algorithm against a random procedure and show that it not only performs better in terms of social welfare for the power grid, but also that it is more robust against unforeseen production variations due to changing weather conditions, for instance. | Presented in this way, the time-series clustering task seems very close to graph community detection. Communities in networks are indeed often seen as groups of nodes exhibiting a high internal density of links as well as a low density across communities, and several topology-oriented techniques for finding communities are present in the literature ( @cite_12 @cite_13 @cite_18 ). For our purpose, where decorrelation is the closeness notion, such algorithms tend to have some difficulties because only a few inclusions of very high correlations can strongly affect the stability of a coalition. However, local cliques, where all nodes are linked, provide uncorrelated groups of nodes. There exist greedy heuristics based on cliques in the literature, such as clique percolation @cite_17 , which realizes local optimizations of a fitness function and results in overlapping community structures. 
Detection of overlapping communities is actually a very active field of research, especially in social networks where a person might belong to several communities. | {
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"1971421925",
"2217748804",
"2118608338"
],
"abstract": [
"",
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"We consider three distinct and well-studied problems concerning network structure: community detection by modularity maximization, community detection by statistical inference, and normalized-cut graph partitioning. Each of these problems can be tackled using spectral algorithms that make use of the eigenvectors of matrix representations of the network. We show that with certain choices of the free parameters appearing in these spectral algorithms the algorithms for all three problems are, in fact, identical, and hence that, at least within the spectral approximations used here, there is no difference between the modularity- and inference-based community detection methods, or between either and graph partitioning.",
"Many networks in nature, society and technology are characterized by a mesoscopic level of organization, with groups of nodes forming tightly connected units, called communities or modules, that are only weakly linked to each other. Uncovering this community structure is one of the most important problems in the field of complex networks. Networks often show a hierarchical organization, with communities embedded within other communities; moreover, nodes can be shared between different communities. Here, we present the first algorithm that finds both overlapping communities and the hierarchical structure. The method is based on the local optimization of a fitness function. Community structure is revealed by peaks in the fitness histogram. The resolution can be tuned by a parameter enabling different hierarchical levels of organization to be investigated. Tests on real and artificial networks give excellent results."
]
} |
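The correlation-graph idea in the row above can be sketched with a toy example (names and the threshold `tau` are illustrative assumptions, not the paper's implementation): producers whose pairwise correlations stay below a threshold are linked, so any clique in this graph is a mutually decorrelated candidate coalition. The greedy clique growth below is a crude stand-in for a clique-percolation step.

```python
import numpy as np
from itertools import combinations

def decorrelation_graph(series, tau=0.5):
    """Edge (i, j) iff |corr(i, j)| < tau, so any clique is a group of
    pairwise weakly-correlated producers -- a candidate stable coalition."""
    corr = np.corrcoef(series)
    n = len(series)
    return {(i, j) for i, j in combinations(range(n), 2)
            if abs(corr[i, j]) < tau}

def greedy_clique(edges, n):
    """Greedily grow one clique by scanning nodes in order; kept minimal
    for illustration (real clique percolation is more involved)."""
    clique = []
    for v in range(n):
        if all((min(u, v), max(u, v)) in edges for u in clique):
            clique.append(v)
    return clique

# Four toy production profiles: 0 and 1 are perfectly correlated, while
# the distinct Fourier modes 0/2/3 are mutually uncorrelated over a period.
t = np.arange(50)
series = np.stack([np.sin(2 * np.pi * t / 50),
                   2 * np.sin(2 * np.pi * t / 50),
                   np.sin(4 * np.pi * t / 50),
                   np.cos(2 * np.pi * t / 50)])
edges = decorrelation_graph(series)
coalition = greedy_clique(edges, 4)   # excludes the correlated pair (0, 1)
```

Here the resulting coalition is {0, 2, 3}: producer 1 is rejected because its profile duplicates producer 0's, which would make the group's aggregate output unstable.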
1410.8747 | 2137606168 | Botnets represent a global problem and are responsible for causing large financial and operational damage to their victims. They are implemented with evasion in mind, and aim at hiding their architecture and authors, making them difficult to detect in general. These kinds of networks are mainly used for identity theft, virtual extortion, spam campaigns and malware dissemination. Botnets have a great potential in warfare and terrorist activities, making it of utmost importance to take action against them. We present CONDENSER, a method for identifying data generated by botnet activity. We start by selecting the appropriate features from several data feeds, namely DNS non-existent domain responses and live communication packages directed to command and control servers that we previously sinkholed. Using machine learning algorithms and a graph-based representation of data then allows one to identify botnet activity, helps identify anomalous traffic and quickly detect new botnets, and improves the tracking of known botnets. Our main contributions are threefold: first, the use of a machine learning classifier for classifying domain names as being generated by domain generation algorithms (DGA); second, a clustering algorithm using the set of selected features that groups network communication with similar patterns; third, a graph-based knowledge representation framework where we store processed data, allowing us to perform queries. | The Davies-Bouldin @cite_7 index -- represented as @math -- identifies clusters that are compact and distant from each other. In the equation, the diameter for @math is obtained, where @math represents the number of points belonging to @math . Symbol @math corresponds to the centroid of @math and @math is a point belonging to @math . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2051224630"
],
"abstract": [
"A measure is presented which indicates the similarity of clusters which are assumed to have a data density which is a decreasing function of distance from a vector characteristic of the cluster. The measure can be used to infer the appropriateness of data partitions and can therefore be used to compare relative appropriateness of various divisions of the data. The measure does not depend on either the number of clusters analyzed nor the method of partitioning of the data and can be used to guide a cluster seeking algorithm."
]
} |
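A minimal numpy sketch of the Davies-Bouldin computation described in the row above (variable names are ours; lower values indicate compact, well-separated clusters):

```python
import numpy as np

def davies_bouldin(X, labels):
    """DB index: average, over clusters, of the worst-case ratio
    (S_i + S_j) / d(A_i, A_j), where S_i is the mean distance of the
    points of cluster i to its centroid A_i."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # S[k]: cluster diameter = mean distance to the cluster centroid
    S = np.array([np.linalg.norm(X[labels == c] - centroids[k], axis=1).mean()
                  for k, c in enumerate(clusters)])
    K = len(clusters)
    R = [max((S[i] + S[j]) / np.linalg.norm(centroids[i] - centroids[j])
             for j in range(K) if j != i)
         for i in range(K)]
    return sum(R) / K

# Two tight, well-separated clusters yield a small index.
X = np.array([[0.0, 0.0], [0.0, 0.1], [10.0, 10.0], [10.0, 10.1]])
labels = np.array([0, 0, 1, 1])
db = davies_bouldin(X, labels)
```

Because the index compares within-cluster diameters to between-centroid distances, it can be used to pick the partition (or number of clusters) that minimizes it.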
1410.8747 | 2137606168 | Botnets represent a global problem and are responsible for causing large financial and operational damage to their victims. They are implemented with evasion in mind, and aim at hiding their architecture and authors, making them difficult to detect in general. These kinds of networks are mainly used for identity theft, virtual extortion, spam campaigns and malware dissemination. Botnets have a great potential in warfare and terrorist activities, making it of utmost importance to take action against them. We present CONDENSER, a method for identifying data generated by botnet activity. We start by selecting the appropriate features from several data feeds, namely DNS non-existent domain responses and live communication packages directed to command and control servers that we previously sinkholed. Using machine learning algorithms and a graph-based representation of data then allows one to identify botnet activity, helps identify anomalous traffic and quickly detect new botnets, and improves the tracking of known botnets. Our main contributions are threefold: first, the use of a machine learning classifier for classifying domain names as being generated by domain generation algorithms (DGA); second, a clustering algorithm using the set of selected features that groups network communication with similar patterns; third, a graph-based knowledge representation framework where we store processed data, allowing us to perform queries. | The Silhouette @cite_0 index -- represented as @math -- identifies the average membership of each point to all @math clusters, where @math is the total number of existing samples in the dataset, @math is the average distance between point @math and all points of its cluster, and @math is the minimum average dissimilarity between point @math and all the formed clusters. In this metric, the partition with the largest @math value is considered optimal. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2129869052"
],
"abstract": [
"1. Introduction. 2. Partitioning Around Medoids (Program PAM). 3. Clustering large Applications (Program CLARA). 4. Fuzzy Analysis. 5. Agglomerative Nesting (Program AGNES). 6. Divisive Analysis (Program DIANA). 7. Monothetic Analysis (Program MONA). Appendix 1. Implementation and Structure of the Programs. Appendix 2. Running the Programs. Appendix 3. Adapting the Programs to Your Needs. Appendix 4. The Program CLUSPLOT. References. Author Index. Subject Index."
]
} |
1410.8675 | 2949433648 | Region-specific linear models are widely used in practical applications because of their non-linear but highly interpretable model representations. One of the key challenges in their use is non-convexity in simultaneous optimization of regions and region-specific models. This paper proposes novel convex region-specific linear models, which we refer to as partition-wise linear models. Our key ideas are 1) assigning linear models not to regions but to partitions (region-specifiers) and representing region-specific linear models by linear combinations of partition-specific models, and 2) optimizing regions via partition selection from a large number of given partition candidates by means of convex structured regularizations. In addition to providing initialization-free globally-optimal solutions, our convex formulation makes it possible to derive a generalization bound and to use such advanced optimization techniques as proximal methods and decomposition of the proximal maps for sparsity-inducing regularizations. Experimental results demonstrate that our partition-wise linear models perform better than or are at least competitive with state-of-the-art region-specific or locally linear models. | Fast Local Kernel Support Vector Machines (FaLK-SVMs) represent state-of-the-art locally linear models. FaLK-SVMs produce test-point-specific weight vectors by learning local predictive models from the neighborhoods of individual test points @cite_13 . It aims to reduce prediction time cost by pre-processing for nearest-neighbor calculations and local model sharing, at the cost of initialization-independency. Another advanced locally linear model is that of Locally Linear Support Vector Machines (LLSVMs) @cite_18 . LLSVMs assign linear SVMs to multiple anchor points produced by manifold learning @cite_19 @cite_20 and construct test-point-specific linear predictors according to the weights of anchor points with respect to individual test points. 
When the manifold learning procedure is initialization-independent, LLSVMs become initial-value-independent because of the convexity of the optimization problem. Similarly, clustered SVMs (CSVMs) @cite_1 assume given data clusters and learn multiple SVMs for individual clusters simultaneously. Although CSVMs are convex and generalization bound analysis has been provided, they cannot optimize regions (clusters). | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_19",
"@cite_13",
"@cite_20"
],
"mid": [
"2189508540",
"2621615146",
"2134563198",
"2096613134",
""
],
"abstract": [
"Linear support vector machines (SVMs) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based SVMs are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel SVMs. We generalise this model to locally finite dimensional kernel SVM.",
"In many problems of machine learning, the data are distributed nonlinearly. One way to address this kind of data is training a nonlinear classifier such as kernel support vector machine (kernel SVM). However, the computational burden of kernel SVM limits its application to large scale datasets. In this paper, we propose a Clustered Support Vector Machine (CSVM), which tackles the data in a divide and conquer manner. More specifically, CSVM groups the data into several clusters, followed which it trains a linear support vector machine in each cluster to separate the data locally. Meanwhile, CSVM has an additional global regularization, which requires the weight vector of each local linear SVM aligning with a global weight vector. The global regularization leverages the information from one cluster to another, and avoids over-fitting in each cluster. We derive a data-dependent generalization error bound for CSVM, which explains the advantage of CSVM over linear SVM. Experiments on several benchmark datasets show that the proposed method outperforms linear SVM and some other related locally linear classifiers. It is also comparable to a fine-tuned kernel SVM in terms of prediction performance, while it is more efficient than kernel SVM.",
"This paper introduces a new method for semi-supervised learning on high dimensional nonlinear manifolds, which includes a phase of unsupervised basis learning and a phase of supervised function learning. The learned bases provide a set of anchor points to form a local coordinate system, such that each data point x on the manifold can be locally approximated by a linear combination of its nearby anchor points, and the linear weights become its local coordinate coding. We show that a high dimensional nonlinear function can be approximated by a global linear function with respect to this coding scheme, and the approximation quality is ensured by the locality of such coding. The method turns a difficult nonlinear learning problem into a simple global linear learning problem, which overcomes some drawbacks of traditional local learning methods.",
"A computationally efficient approach to local learning with kernel methods is presented. The Fast Local Kernel Support Vector Machine (FaLK-SVM) trains a set of local SVMs on redundant neighbourhoods in the training set and an appropriate model for each query point is selected at testing time according to a proximity strategy. Supported by a recent result by Zakai and Ritov (2009) relating consistency and localizability, our approach achieves high classification accuracies by dividing the separation function in local optimisation problems that can be handled very efficiently from the computational viewpoint. The introduction of a fast local model selection further speeds-up the learning process. Learning and complexity bounds are derived for FaLK-SVM, and the empirical evaluation of the approach (with data sets up to 3 million points) showed that it is much faster and more accurate and scalable than state-of-the-art accurate and approximated SVM solvers at least for non high-dimensional data sets. More generally, we show that locality can be an important factor to sensibly speed-up learning approaches and kernel methods, differently from other recent techniques that tend to dismiss local information in order to improve scalability.",
""
]
} |
1410.8506 | 2190142144 | Given a permutation π, we say an index i is a peak if πi−1 < πi > πi+1. Let P(π) denote the set of peaks of π. Given any set S of positive integers, define P(S; n) = {π ∈ S_n : P(π) = S}. Billey–Burdzy–Sagan showed that for all fixed subsets of positive integers S and sufficiently large n, #P(S; n) = pS(n)2^(n−|S|−1) for some polynomial pS(x) depending on S. They conjectured that the coefficients of pS(x) expanded in a binomial coefficient basis centered at max (S) are all positive. We show that this is a consequence of a stronger conjecture that bounds the modulus of the roots of pS(x). Furthermore, we give an efficient explicit formula for peak polynomials in the binomial basis centered at 0, which we use to identify many integer roots of peak polynomials along with certain inequalities and identities. | Another new result in @cite_7 shows that the number of permutations with the same peak set for signed permutations can be enumerated using the peak polynomial @math for unsigned permutations. We present an alternate proof that can be used to reduce many signed permutation statistic problems to unsigned permutation statistic problems. We denote the group of signed permutations as @math . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2240013452"
],
"abstract": [
"A signed permutation π = π1π2⋯πn in the hyperoctahedral group B_n is a word such that each πi ∈ {−n, …, −1, 1, …, n} and {|π1|, |π2|, …, |πn|} = {1, 2, …, n}. An index i is a peak of π if πi−1 < πi > πi+1, and P_B(π) denotes the set of all peaks of π. Given any set S, we define P_B(S, n) to be the set of signed permutations π ∈ B_n with P_B(π) = S. In this paper we are interested in the cardinality of the set P_B(S, n). In 2012, Billey, Burdzy and Sagan investigated the analogous problem for permutations in the symmetric group, S_n. In this paper we extend their results to the hyperoctahedral group; in particular we show that #P_B(S, n) = p(n)2^(2n−|S|−1), where p(n) is the same polynomial found by Billey, Burdzy and Sagan, which leads to the explicit computation of interesting special cases of the polynomial p(n). In addition we have extended these results to the case where we add π0 = 0 at the beginning of the permutations, which gives rise to the possibility of a peak at position 1, for both the symmetric and the hyperoctahedral groups."
]
} |
1410.8479 | 2338808711 | Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. | Specifically, we will compare our results in Proposition and Corollary to the previously known linear convergence rate results in @cite_29 @cite_34 @cite_14 @cite_25 @cite_0 and the linear convergence rate @cite_30 that appeared online during the submission procedure of this paper. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_29",
"@cite_0",
"@cite_34",
"@cite_25"
],
"mid": [
"1922736325",
"2037603831",
"",
"2019569173",
"1549918636",
"2951090613"
],
"abstract": [
"We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.",
"The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of l 2 -regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature.",
"",
"Splitting algorithms for the sum of two monotone operators.We study two splitting algorithms for (stationary and evolution) problems involving the sum of two monotone operators. These algorithms ar...",
"The formulation min_{x,y} f(x) + g(y), subject to Ax + By = b, where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous gradient. On this kind of problem, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of f/g-decoupled subproblems. However, its effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1/k) and O(1/k^2) were recently established in the literature, though the O(1/k) rates do not require strong convexity. This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B. The result applies to various generalizations of ADM that allow the subproblems to be solved faster and less exactly in certain manners. The derived rate of convergence also provides some theoretical guidance for optimizing the ADM parameters. In addition, this paper makes meaningful extensions to the existing global convergence theory of ADM generalizations.",
"We propose a new approach for analyzing convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. By proving the equivalence between the Douglas-Rachford splitting method and a scaled gradient method applied to the DRE, results from smooth unconstrained optimization are employed to analyze convergence properties of DRS, to tune the method and to derive an accelerated version of it."
]
} |
1410.8479 | 2338808711 | Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. | In [Theorem 6] Davis_Yin_2014 , dual Douglas-Rachford splitting with @math is shown to converge linearly at least with rate @math . For @math they recover the bound in @cite_0 . In Figure , the former (better) of these rate bounds is plotted. We see that it is more conservative than the one in Corollary . | {
"cite_N": [
"@cite_0"
],
"mid": [
"2019569173"
],
"abstract": [
"Splitting algorithms for the sum of two monotone operators.We study two splitting algorithms for (stationary and evolution) problems involving the sum of two monotone operators. These algorithms ar..."
]
} |
1410.8479 | 2338808711 | Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. | In @cite_25 , the authors show that if the @math parameter is small enough and if @math is a quadratic function, then Douglas-Rachford splitting is equivalent to a gradient method applied to a smooth convex function named the Douglas-Rachford envelope. Convergence rate estimates then follow from the gradient method rate estimates. Also, accelerated variants of Douglas-Rachford splitting are proposed, based on fast gradient methods. In Figure , we compare to the rate bound of the fast Douglas-Rachford splitting in [Theorem 6] Panos_acc_DR_2014 when applied to the dual. This rate bound is better than the rate bound for standard Douglas-Rachford splitting in [Theorem 4] Panos_acc_DR_2014 . We note that Corollary gives better rate bounds. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2951090613"
],
"abstract": [
"We propose a new approach for analyzing convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. By proving the equivalence between the Douglas-Rachford splitting method and a scaled gradient method applied to the DRE, results from smooth unconstrained optimization are employed to analyze convergence properties of DRS, to tune the method and to derive an accelerated version of it."
]
} |
1410.8479 | 2338808711 | Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. | The convergence rate bound provided in @cite_14 coincides with the bound provided in Corollary . The rate bound in @cite_14 holds for ADMM applied to Euclidean quadratic problems with linear inequality constraints. We generalize these results, using a different machinery, to arbitrary real Hilbert spaces (also infinite dimensional), to both Douglas-Rachford splitting and ADMM, to general smooth and strongly convex functions @math , and, perhaps most importantly, to any proper, closed, and convex function @math . | {
"cite_N": [
"@cite_14"
],
"mid": [
"2037603831"
],
"abstract": [
"The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of l 2 -regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature."
]
} |
1410.8479 | 2338808711 | Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. | Finally, we compare our rate bound to the rate bound in @cite_30 . Figure shows that our bound is tighter. As opposed to all the other rate bounds in this comparison, the rate bound in @cite_30 is not explicit. Rather, a sweep over different rate bound factors is needed. For each guess, a small semi-definite program is solved to assess whether the algorithm is guaranteed to converge with that rate. The quantization level of this sweep is the cause of the steps in the rate curve in Figure . | {
"cite_N": [
"@cite_30"
],
"mid": [
"1922736325"
],
"abstract": [
"We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence."
]
} |
1410.8668 | 2950075826 | Social media texts are significant information sources for several application areas including trend analysis, event monitoring, and opinion mining. Unfortunately, existing solutions for tasks such as named entity recognition that perform well on formal texts usually perform poorly when applied to social media texts. In this paper, we report on experiments that have the purpose of improving named entity recognition on Turkish tweets, using two different annotated data sets. In these experiments, starting with a baseline named entity recognition system, we adapt its recognition rules and resources to better fit Twitter language by relaxing its capitalization constraint and by diacritics-based expansion of its lexical resources, and we employ a simplistic normalization scheme on tweets to observe the effects of these on the overall named entity recognition performance on Turkish tweets. The evaluation results of the system with these different settings are provided with discussions of these results. | There are several recent studies presenting approaches for NER on microblog texts, especially on tweets in English. Among these studies, in @cite_1 , a NER system tailored to tweets, called T-NER, is presented which employs Conditional Random Fields (CRF) for named entity segmentation and labelled topic modelling for subsequent classification, using Freebase dictionaries. A hybrid approach to NER on tweets is presented in @cite_14 where k-Nearest Neighbor and CRF based classifiers are sequentially applied. In @cite_13 , a factor graph based approach is proposed that jointly performs NER and named entity normalization on tweets. An unsupervised approach that performs only named entity extraction on tweets using resources like Wikipedia is described in @cite_8 . 
A clustering-based approach for NER on microtexts is presented in @cite_4 , a lightweight filter based approach for NER on tweets is described in @cite_11 , and a series of NER experiments on targeted tweets in Polish is presented in @cite_9 . Finally, an adaptation of the ANNIE component of GATE framework to microblog texts, called TwitIE, is described in @cite_19 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_19",
"@cite_13",
"@cite_11"
],
"mid": [
"2119759918",
"2013563212",
"2046240631",
"2250439311",
"2153848201",
"1521125545",
"2100581574",
"2096744070"
],
"abstract": [
"The challenges of Named Entities Recognition (NER) for tweets lie in the insufficient information in a tweet and the unavailability of training data. We propose to combine a K-Nearest Neighbors (KNN) classifier with a linear Conditional Random Fields (CRF) model under a semi-supervised learning framework to tackle these challenges. The KNN based classifier conducts pre-labeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet. The semi-supervised learning plus the gazetteers alleviate the lack of training data. Extensive experiments show the advantages of our method over the baselines as well as the effectiveness of KNN and semi-supervised learning.",
"Named entity recognition (NER) methods have been regarded as an efficient strategy to extract relevant entities for answering a given query. The aim of this work is to exploit the conventional NER methods for analyzing a large set of microtexts of which lengths are short. Particularly, the microtexts are streaming on online social media, e.g., Twitter. To do so, this paper proposes three properties of contextual association among the microtexts to discover contextual clusters of the microtexts, which can be expected to improve the performance of NER tasks. As a case study, we have applied the proposed NER system to Twitter. Experimental results demonstrate the feasibility of the proposed method (around 90.3% of precision) for extracting relevant information in online social network applications.",
"Many private and or public organizations have been reported to create and monitor targeted Twitter streams to collect and understand users' opinions about the organizations. Targeted Twitter stream is usually constructed by filtering tweets with user-defined selection criteria e.g. tweets published by users from a selected region, or tweets that match one or more predefined keywords. Targeted Twitter stream is then monitored to collect and understand users' opinions about the organizations. There is an emerging need for early crisis detection and response with such target stream. Such applications require a good named entity recognition (NER) system for Twitter, which is able to automatically discover emerging named entities that is potentially linked to the crisis. In this paper, we present a novel 2-step unsupervised NER system for targeted Twitter stream, called TwiNER. In the first step, it leverages on the global context obtained from Wikipedia and Web N-Gram corpus to partition tweets into valid segments (phrases) using a dynamic programming algorithm. Each such tweet segment is a candidate named entity. It is observed that the named entities in the targeted stream usually exhibit a gregarious property, due to the way the targeted stream is constructed. In the second step, TwiNER constructs a random walk model to exploit the gregarious property in the local context derived from the Twitter stream. The highly-ranked segments have a higher chance of being true named entities. We evaluated TwiNER on two sets of real-life tweets simulating two targeted streams. Evaluated using labeled ground truth, TwiNER achieves comparable performance as with conventional approaches in both streams. Various settings of TwiNER have also been examined to verify our global context + local context combo idea.",
"This paper reports on some experiments aiming at tuning a rule-based NER system designed for detecting names in Polish online news to the processing of targeted Twitter streams. In particular, one explores whether the performance of the baseline NER system can be improved through the incremental application of knowledge-poor methods for name matching and guessing. We study various settings and combinations of the methods and present evaluation results on five corpora gathered from Twitter, centred around major events and known individuals.",
"People tweet more than 100 Million times daily, yielding a noisy, informal, but sometimes informative corpus of 140-character messages that mirrors the zeitgeist in an unprecedented manner. The performance of standard NLP tools is severely degraded on tweets. This paper addresses this issue by re-building the NLP pipeline beginning with part-of-speech tagging, through chunking, to named-entity recognition. Our novel T-ner system doubles F1 score compared with the Stanford NER system. T-ner leverages the redundancy inherent in tweets to achieve this performance, using LabeledLDA to exploit Freebase dictionaries as a source of distant supervision. LabeledLDA outperforms co-training, increasing F1 by 25 over ten common entity types. Our NLP tools are available at: http: github.com aritter twitter_nlp",
"Twitter is the largest source of microblog text, responsible for gigabytes of human discourse every day. Processing microblog text is difficult: the genre is noisy, documents have little context, and utterances are very short. As such, conventional NLP tools fail when faced with tweets and other microblog text. We present TwitIE, an open-source NLP pipeline customised to microblog text at every stage. Additionally, it includes Twitter-specific data import and metadata handling. This paper introduces each stage of the TwitIE pipeline, which is a modification of the GATE ANNIE open-source pipeline for news text. An evaluation against some state-of-the-art systems is also presented.",
"Tweets represent a critical source of fresh information, in which named entities occur frequently with rich variations. We study the problem of named entity normalization (NEN) for tweets. Two main challenges are the errors propagated from named entity recognition (NER) and the dearth of information in a single tweet. We propose a novel graphical model to simultaneously conduct NER and NEN on multiple tweets to address these challenges. Particularly, our model introduces a binary random variable for each pair of words with the same lemma across similar tweets, whose value indicates whether the two related words are mentions of the same entity. We evaluate our method on a manually annotated data set, and show that our method outperforms the baseline that handles these two tasks separately, boosting the F1 from 80.2 to 83.6 for NER, and the Accuracy from 79.4 to 82.6 for NEN, respectively.",
"Microblog platforms such as Twitter are being increasingly adopted by Web users, yielding an important source of data for web search and mining applications. Tasks such as Named Entity Recognition are at the core of many of these applications, but the effectiveness of existing tools is seriously compromised when applied to Twitter data, since messages are terse, poorly worded and posted in many different languages. Also, Twitter follows a streaming paradigm, imposing that entities must be recognized in real-time. In view of these challenges and the inappropriateness of existing tools, we propose a novel approach for Named Entity Recognition on Twitter data called FS-NER (Filter-Stream Named Entity Recognition). FS-NER is characterized by the use of filters that process unlabeled Twitter messages, being much more practical than existing supervised CRF-based approaches. Such filters can be combined either in sequence or in parallel in a flexible way. Moreover, because these filters are not language dependent, FS-NER can be applied to different languages without requiring a laborious adaptation. Through a systematic evaluation using three Twitter collections and considering seven types of entity, we show that FS-NER performs 3 better than a CRF-based baseline, besides being orders of magnitude faster and much more practical."
]
} |
1410.8668 | 2950075826 | Social media texts are significant information sources for several application areas including trend analysis, event monitoring, and opinion mining. Unfortunately, existing solutions for tasks such as named entity recognition that perform well on formal texts usually perform poorly when applied to social media texts. In this paper, we report on experiments that have the purpose of improving named entity recognition on Turkish tweets, using two different annotated data sets. In these experiments, starting with a baseline named entity recognition system, we adapt its recognition rules and resources to better fit Twitter language by relaxing its capitalization constraint and by diacritics-based expansion of its lexical resources, and we employ a simplistic normalization scheme on tweets to observe the effects of these on the overall named entity recognition performance on Turkish tweets. The evaluation results of the system with these different settings are provided with discussions of these results. | Considering NER research on Turkish texts, various approaches have been employed so far including those based on using Hidden Markov Models (HMM) @cite_12 , on manually engineered recognition rules @cite_17 @cite_3 , on rule learning @cite_18 , and on CRFs @cite_16 @cite_10 . All of these approaches have been proposed for news texts and the CRF-based approach @cite_10 is reported to outperform the previous proposals with a balanced F-Measure of about 91%. To the best of our knowledge, there are only two studies on NER from Turkish tweets. In @cite_7 , the CRF-based NER system @cite_10 is evaluated on informal text types and is reported to achieve an F-Measure of 19% | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_3",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2111082276",
"1982611008",
"2059387088",
"2145075080",
"2402817887",
"2147777361",
"1530375956"
],
"abstract": [
"Named entity recognition NER is one of the basic tasks in automatic extraction of information from natural language texts. In this paper, we describe an automatic rule learning method that exploits different features of the input text to identify the named entities located in the natural language texts. Moreover, we explore the use of morphological features for extracting named entities from Turkish texts. We believe that the developed system can also be used for other agglutinative languages. The paper also provides a comprehensive overview of the field by reviewing the NER research literature. We conducted our experiments on the TurkIE dataset, a corpus of articles collected from different Turkish newspapers. Our method achieved an average F-score of 91.08 on the dataset. The results of the comparative experiments demonstrate that the developed technique is successfully applicable to the task of automatic NER and exploiting morphological features can significantly improve the NER from Turkish, an agglutinative language.",
"Named Entity Recognition (NER) is a well-studied area in natural language processing (NLP) and the reported results in the literature are generally very high (> 95%) for most of the languages. Today, the focus area of most practical natural language applications (i.e. web mining, sentiment analysis, machine translation) is real natural language data such as Web2.0 or speech data. Nevertheless, the NER task is rarely investigated on this type of data which differs severely from formal written text. In this paper, we present 3 new Turkish data sets from different domains (on this focused area; namely from Twitter, a Speech-to-Text Interface and a Hardware Forum) annotated specifically for NER and report our first results on them. We believe, the paper draws light to the difficulty of these new domains for NER and the possible future work.",
"Highlights: • First hybrid named entity recognizer for Turkish addressing the porting problem. • The recognizer achieves considerably better results compared to its rule based predecessor. • The proposed recognizer is successfully applied to video texts for automatic video indexing. Named entity recognition is an important subfield of the broader research area of information extraction from textual data. Yet, named entity recognition research conducted on Turkish texts is still rare as compared to related research carried out on other languages such as English, Spanish, Chinese, and Japanese. In this study, we present a hybrid named entity recognizer for Turkish, which is based on a manually engineered rule based recognizer that we have proposed. Since rule based systems for specific domains require their knowledge sources to be manually revised when ported to other domains, we enrich our rule based recognizer and turn it into a hybrid recognizer so that it learns from annotated data when available and improves its knowledge sources accordingly. The hybrid recognizer is originally engineered for generic news texts, but with its learning capability, it is improved to be applicable to that of financial news texts, historical texts, and child stories as well, without human intervention. Both the hybrid recognizer and its rule based predecessor are evaluated on the same corpora and the hybrid recognizer achieves better results as compared to its predecessor. The proposed hybrid named entity recognizer is significant since it is the first hybrid recognizer proposal for Turkish addressing the above porting problem considering that Turkish possesses different structural properties compared to widely studied languages such as English and there is very limited information extraction research conducted on Turkish texts. Moreover, the employment of the proposed hybrid recognizer for semantic video indexing is shown as a case study on Turkish news videos. 
The genuine textual and video corpora utilized throughout the paper are compiled and annotated by the authors due to the lack of publicly available annotated corpora for information extraction research on Turkish texts.",
"Turkish is an agglutinative language with complex morphological structures, therefore using only word forms is not enough for many computational tasks. In this paper we analyze the effect of morphology in a Named Entity Recognition system for Turkish. We start with the standard word-level representation and incrementally explore the effect of capturing syntactic and contextual properties of tokens. Furthermore, we also explore a new representation in which roots and morphological features are represented as separate tokens instead of representing only words as tokens. Using syntactic and contextual properties with the new representation provides a 7.6% relative improvement over the baseline.",
"This paper reports the highest results (95% in MUC and 92% in CoNLL metric) in the literature for Turkish named entity recognition; more specifically for the task of detecting person, location and organization entities in general news texts. We give an in depth analysis of the previous reported results and make comparisons with them whenever possible. We use conditional random fields (CRFs) as our statistical model. The paper presents initial explorations on the usage of rich morphological structure of the Turkish language as features to CRFs together with the use of some basic and generative gazetteers.",
"This paper presents the results of a study on information extraction from unrestricted Turkish text using statistical language processing methods. In languages like English, there is a very small number of possible word forms with a given root word. However, languages like Turkish have very productive agglutinative morphology. Thus, it is an issue to build statistical models for specific tasks using the surface forms of the words, mainly because of the data sparseness problem. In order to alleviate this problem, we used additional syntactic information, i.e. the morphological structure of the words. We have successfully applied statistical methods using both the lexical and morphological information to sentence segmentation, topic segmentation, and name tagging tasks. For sentence segmentation, we have modeled the final inflectional groups of the words and combined it with the lexical model, and decreased the error rate to 4.34%, which is 21% better than the result obtained using only the surface forms of the words. For topic segmentation, stems of the words (especially nouns) have been found to be more effective than using the surface forms of the words and we have achieved a 10.90% segmentation error rate on our test set according to the weighted TDT-2 segmentation cost metric. This is 32% better than the word-based baseline model. For name tagging, we used four different information sources to model names. Our first information source is based on the surface forms of the words. Then we combined the contextual cues with the lexical model, and obtained some improvement. After this, we modeled the morphological analyses of the words, and finally we modeled the tag sequence, and reached an F-Measure of 91.56%, according to the MUC evaluation criteria. Our results are important in the sense that, using linguistic information, i.e. 
morphological analyses of the words, and a corpus large enough to train a statistical model significantly improves these basic information extraction tasks for Turkish.",
"Named entity recognition (NER) is one of the main information extraction tasks and research on NER from Turkish texts is known to be rare. In this study, we present a rule-based NER system for Turkish which employs a set of lexical resources and pattern bases for the extraction of named entities including the names of people, locations, organizations together with time/date and money/percentage expressions. The domain of the system is news texts and it does not utilize important clues of capitalization and punctuation since they may be missing in texts obtained from the Web or the output of automatic speech recognition tools. The evaluation of the system is performed on news texts along with other genres encompassing child stories and historical texts, but as expected in case of manually engineered rule-based systems, it suffers from performance degradation on these latter genres of texts since they are distinct from the target domain of news texts. Furthermore, the system is evaluated on transcriptions of news videos leading to satisfactory results which is an important step towards the employment of NER during automatic semantic annotation of videos in Turkish. The current study is significant for its being the first rule-based approach to the NER task on Turkish texts with its evaluation on diverse text types."
]
} |
1410.8205 | 2949542618 | We investigate the problem of constructing planar drawings with few bends for two related problems, the partially embedded graph problem---to extend a straight-line planar drawing of a subgraph to a planar drawing of the whole graph---and the simultaneous planarity problem---to find planar drawings of two graphs that coincide on shared vertices and edges. In both cases we show that if the required planar drawings exist, then there are planar drawings with a linear number of bends per edge and, in the case of simultaneous planarity, a constant number of crossings between every pair of edges. Our proofs provide efficient algorithms if the combinatorial embedding of the drawing is given. Our result on partially embedded graph drawing generalizes a classic result by Pach and Wenger which shows that any planar graph can be drawn with a linear number of bends per edge if the location of each vertex is fixed. | The decision version of simultaneous planarity generalizes partially embedded planarity: given an instance @math of the latter problem, we can augment @math to a drawing of a @math -connected graph @math and let @math . Then @math and @math are simultaneously planar if and only if @math has a planar embedding extending @math . In the other direction, the algorithm @cite_7 for testing planarity of partially embedded graphs solves the special case of the simultaneous planarity problem in which the embedding of the common graph @math is fixed (which happens, e.g., if @math or one of the two graphs is @math -connected). | {
"cite_N": [
"@cite_7"
],
"mid": [
"2097498760"
],
"abstract": [
"We study the following problem: Given a planar graph G and a planar drawing (embedding) of a subgraph of G, can such a drawing be extended to a planar drawing of the entire graph G? This problem fits the paradigm of extending a partial solution to a complete one, which has been studied before in many different settings. Unlike many cases, in which the presence of a partial solution in the input makes hard an otherwise easy problem, we show that the planarity question remains polynomial-time solvable. Our algorithm is based on several combinatorial lemmata which show that the planarity of partially embedded graphs meets the \"on-cas\" behaviour -- obvious necessary conditions for planarity are also sufficient. These conditions are expressed in terms of the interplay between (a) rotation schemes and containment relationships between cycles and (b) the decomposition of a graph into its connected, biconnected, and triconnected components. This implies that no dynamic programming is needed for a decision algorithm and that the elements of the decomposition can be processed independently. Further, by equipping the components of the decomposition with suitable data structures and by carefully splitting the problem into simpler subproblems, we improve our algorithm to reach linear-time complexity. Finally, we consider several generalizations of the problem, e.g. minimizing the number of edges of the partial embedding that need to be rerouted to extend it, and argue that they are NP-hard. Also, we show how our algorithm can be applied to solve related Graph Drawing problems."
]
} |
1410.8205 | 2949542618 | We investigate the problem of constructing planar drawings with few bends for two related problems, the partially embedded graph problem---to extend a straight-line planar drawing of a subgraph to a planar drawing of the whole graph---and the simultaneous planarity problem---to find planar drawings of two graphs that coincide on shared vertices and edges. In both cases we show that if the required planar drawings exist, then there are planar drawings with a linear number of bends per edge and, in the case of simultaneous planarity, a constant number of crossings between every pair of edges. Our proofs provide efficient algorithms if the combinatorial embedding of the drawing is given. Our result on partially embedded graph drawing generalizes a classic result by Pach and Wenger which shows that any planar graph can be drawn with a linear number of bends per edge if the location of each vertex is fixed. | Several optimization versions of partially embedded planarity and simultaneous planarity are -hard. Patrignani showed that testing whether there is a straight-line drawing of a planar graph @math extending a given drawing of a subgraph of @math is -complete @cite_9 , so bend minimization in partial embedding extensions is -complete; Patrignani's result holds even if a combinatorial embedding of @math is given. Patrignani does not explicitly claim -completeness in the case in which the embedding of @math is fixed, but that can be concluded by checking his construction; only the variable gadget, pictured in his Figure 3, needs minor adjustments. Bend minimization in simultaneous planar drawings is -hard, since it is -hard to decide whether there is a straight-line simultaneous drawing @cite_24 . Crossing minimization in simultaneous planar drawings is also -hard, as follows from an -hardness result on by Cabello and Mohar @cite_14 ; see Theorem in for a slightly stronger result. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_14"
],
"mid": [
"1483285511",
"2040465345",
"2964235807"
],
"abstract": [
"We consider the following problem known as simultaneous geometric graph embedding (SGE). Given a set of planar graphs on a shared vertex set, decide whether the vertices can be placed in the plane in such a way that for each graph the straight-line drawing is planar. We partially settle an open problem of Erten and Kobourov [5] by showing that even for two graphs the problem is NP-hard. We also show that the problem of computing the rectilinear crossing number of a graph can be reduced to a simultaneous geometric graph embedding problem; this implies that placing SGE in NP will be hard, since the corresponding question for rectilinear crossing number is a long-standing open problem. However, rather like rectilinear crossing number, SGE can be decided in PSPACE.",
"We investigate the computational complexity of the following problem. Given a planar graph in which some vertices have already been placed in the plane, place the remaining vertices to form a planar straight-line drawing of the whole graph. We show that this extensibility problem, proposed in the 2003 \"Selected Open Problems in Graph Drawing\" [1], is NP-hard.",
"A graph is near-planar if it can be obtained from a planar graph by adding an edge. We show the surprising fact that it is NP-hard to compute the crossing number of near-planar graphs. A graph is 1-planar if it has a drawing where every edge is crossed by at most one other edge. We show that it is NP-hard to decide whether a given near-planar graph is 1-planar. The main idea in both reductions is to consider the problem of simultaneously drawing two planar graphs inside a disk, with some of its vertices fixed at the boundary of the disk. This leads to the concept of anchored embedding, which is of independent interest. As an interesting consequence we obtain a new, geometric proof of NP-completeness of the crossing number problem, even when restricted to cubic graphs. This resolves a question of Hliněný."
]
} |
1410.8205 | 2949542618 | We investigate the problem of constructing planar drawings with few bends for two related problems, the partially embedded graph problem---to extend a straight-line planar drawing of a subgraph to a planar drawing of the whole graph---and the simultaneous planarity problem---to find planar drawings of two graphs that coincide on shared vertices and edges. In both cases we show that if the required planar drawings exist, then there are planar drawings with a linear number of bends per edge and, in the case of simultaneous planarity, a constant number of crossings between every pair of edges. Our proofs provide efficient algorithms if the combinatorial embedding of the drawing is given. Our result on partially embedded graph drawing generalizes a classic result by Pach and Wenger which shows that any planar graph can be drawn with a linear number of bends per edge if the location of each vertex is fixed. | Di Giacomo et al. @cite_1 studied the special case of PEG in which the @math -vertex graph @math to be drawn is a tree. They showed that, given a drawing @math of a subtree @math of @math , a drawing of @math extending @math can be computed in @math time so that each edge of @math has at most @math bends. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2048913517"
],
"abstract": [
"Given a graph G with n vertices and a set S of n points in the plane, a point-set embedding of G on S is a planar drawing such that each vertex of G is mapped to a distinct point of S. A geometric point-set embedding is a point-set embedding with no edge bends. This paper studies the following problem: The input is a set S of n points, a planar graph G with n vertices, and a geometric point-set embedding of a subgraph G′ ⊆ G on a subset of S. The desired output is a point-set embedding of G on S that includes the given partial drawing of G′. We concentrate on trees and show how to compute the output in O(n^2 log n) time in a real-RAM model and with at most n-k edges with at most 1+2⌈k/2⌉ bends, where k is the number of vertices of the given subdrawing. We also prove that there are instances of the problem which require at least k-3 bends on n-k edges."
]
} |
1410.8205 | 2949542618 | We investigate the problem of constructing planar drawings with few bends for two related problems, the partially embedded graph problem---to extend a straight-line planar drawing of a subgraph to a planar drawing of the whole graph---and the simultaneous planarity problem---to find planar drawings of two graphs that coincide on shared vertices and edges. In both cases we show that if the required planar drawings exist, then there are planar drawings with a linear number of bends per edge and, in the case of simultaneous planarity, a constant number of crossings between every pair of edges. Our proofs provide efficient algorithms if the combinatorial embedding of the drawing is given. Our result on partially embedded graph drawing generalizes a classic result by Pach and Wenger which shows that any planar graph can be drawn with a linear number of bends per edge if the location of each vertex is fixed. | Concerning PEG, Pach and Wenger @cite_12 proved the following result: given an @math -vertex planar graph @math with fixed vertex locations, a planar drawing of @math in which each edge has at most @math bends can be constructed in @math time. They also proved that such a bound is asymptotically tight in the worst case. Regarding the constant, @cite_3 improved the bound to @math bends per edge. Biedl and Floderus @cite_18 considered the more general problem of drawing an @math -vertex planar graph on fixed vertex locations where the drawing is constrained to lie inside a @math -vertex polygon. They show that there is a drawing with @math bends per edge. | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_12"
],
"mid": [
"98352059",
"2063120809",
""
],
"abstract": [
"In this paper, we study the problem of drawing a given planar graph such that vertices are at pre-specified points and the entire drawing is inside a given polygon. We give a method that shows that for an n-vertex graph and a k-sided polygon, Θ(kn²) bends are always sufficient. We also give an example of a graph where Θ(kn²) bends are necessary for such a drawing.",
"Let G be a planar graph with n vertices and with a partition of the vertex set into subsets V_0,...,V_{k-1} for some positive integer 1 ≤ k ≤ n. Let S be a set of n distinct points in the plane with a partition into subsets S_0,...,S_{k-1} with |V_i| = |S_i| (0 ≤ i ≤ k-1). This paper studies the problem of computing a planar polyline drawing of G, such that each vertex of V_i is mapped to a distinct point of S_i. Lower and upper bounds on the number of bends per edge are proved for any 1 ≤ k ≤ n. In the special case k=n, we improve the upper and lower bounds presented in a paper by Pach and Wenger [J. Pach, R. Wenger, Embedding planar graphs at fixed vertex locations, Graphs and Combinatorics 17 (2001) 717-728]. The upper bound is based on an algorithm for computing a topological book embedding of a planar graph, such that the vertices follow a given left-to-right order and the number of crossings between every edge and the spine is asymptotically optimal, which can be regarded as a result of independent interest.",
""
]
} |
1410.8205 | 2949542618 | We investigate the problem of constructing planar drawings with few bends for two related problems, the partially embedded graph problem---to extend a straight-line planar drawing of a subgraph to a planar drawing of the whole graph---and the simultaneous planarity problem---to find planar drawings of two graphs that coincide on shared vertices and edges. In both cases we show that if the required planar drawings exist, then there are planar drawings with a linear number of bends per edge and, in the case of simultaneous planarity, a constant number of crossings between every pair of edges. Our proofs provide efficient algorithms if the combinatorial embedding of the drawing is given. Our result on partially embedded graph drawing generalizes a classic result by Pach and Wenger which shows that any planar graph can be drawn with a linear number of bends per edge if the location of each vertex is fixed. | Concerning SEFE, Di Giacomo and Liotta @cite_10 and independently Kammer @cite_21 proved the following result: given two planar graphs @math and @math sharing some vertices and no edge with a total number of @math vertices, there exists an @math -time algorithm to construct a simultaneous planar drawing of @math and @math on a grid of size @math , where each edge has at most @math bends, hence there are at most @math crossings between any edge of @math and any edge of @math . This improves upon a previous result of Erten and Kobourov @cite_8 . The algorithms in @cite_10 @cite_8 @cite_21 make use of a drawing technique introduced by Kaufmann and Wiese @cite_17 . | {
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_17",
"@cite_8"
],
"mid": [
"1575929063",
"2108683956",
"2099520181",
"2108115761"
],
"abstract": [
"The simultaneous embedding problem is, given two planar graphs G1=(V,E1) and G2=(V,E2), to find planar embeddings ϕ(G1) and ϕ(G2) such that each vertex v∈V is mapped to the same point in ϕ(G1) and in ϕ(G2). This article presents a linear-time algorithm for the simultaneous embedding problem such that edges are drawn as polygonal chains with at most two bends and all vertices and all bends of the edges are placed on a grid of polynomial size. An extension of this problem with so-called fixed edges is also considered. A further linear-time algorithm of this article solves the following problem: Given a planar graph G and a set of distinct points, find a planar embedding for G that maps each vertex to one of the given points. The solution presented also uses at most two bends per edge and a grid whose size is polynomial in the size of the grid that includes all given points. An example shows two bends per edge to be optimal.",
"Let G1 and G2 be two planar graphs having some vertices in common. A simultaneous embedding of G1 and G2 is a pair of crossing-free drawings of G1 and G2 such that each vertex in common is represented by the same point in both drawings. In this paper we show that an outerplanar graph and a simple path can be simultaneously embedded with fixed edges such that the edges in common are straight-line segments while the other edges of the outerplanar graph can have at most one bend per edge. We then exploit the technique for outerplanar graphs and paths to study simultaneous embeddings of other pairs of graphs. Namely, we study simultaneous embedding with fixed edges of: (i) two outerplanar graphs sharing a forest of paths and (ii) an outerplanar graph and a cycle.",
"A new and distinct cultivar of Impatiens plant named Merengue, characterized by its large, light pink flowers with dark red-purple eye at base of pet als; compact growth habit, excellent self-branching, dark green leaves infused with purple, pleasantly contrasting foliage and flower colors, floriferous habit, and its suitability for use both as a bedding plant and in its hanging basket culture.",
""
]
} |
1410.7659 | 2054050596 | In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics. The Glauber dynamics is a Markov chain that sequentially updates individual nodes (variables) in a graphical model and it is frequently used to sample from the stationary distribution (to which it converges given sufficient time). Additionally, the Glauber dynamics is a natural dynamical model in a variety of settings. This work deviates from the standard formulation of graphical model learning in the literature, where one assumes access to i.i.d. samples from the distribution. Much of the research on graphical model learning has been directed towards finding algorithms with low computational cost. As the main result of this work, we establish that the problem of reconstructing binary pairwise graphical models is computationally tractable when we observe the Glauber dynamics. Specifically, we show that a binary pairwise graphical model on @math nodes with maximum degree @math can be learned in time @math , for a function @math , using nearly the information-theoretic minimum number of samples. | Several works have studied the problem of learning the graph underlying a random process for various processes. These include learning from epidemic cascades @cite_1 @cite_0 @cite_34 and learning from delay measurements @cite_38 . Another line of research asks to find the source of infection of an epidemic by observing the current state, where the graph is known @cite_36 @cite_42 . | {
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_42",
"@cite_1",
"@cite_0",
"@cite_34"
],
"mid": [
"2568950526",
"2111772797",
"",
"2092418988",
"2952347589",
"2949064044"
],
"abstract": [
"We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013",
"We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like.",
"",
"We consider the problem of finding the graph on which an epidemic spreads, given only the times when each node gets infected. While this is a problem of central importance in several contexts -- offline and online social networks, e-commerce, epidemiology -- there has been very little work, analytical or empirical, on finding the graph. Clearly, it is impossible to do so from just one epidemic; our interest is in learning the graph from a small number of independent epidemics. For the classic and popular \"independent cascade\" epidemics, we analytically establish sufficient conditions on the number of epidemics for both the global maximum-likelihood (ML) estimator, and a natural greedy algorithm to succeed with high probability. Both results are based on a key observation: the global graph learning problem decouples into n local problems -- one for each node. For a node of degree d, we show that its neighborhood can be reliably found once it has been infected O(d² log n) times (for ML on general graphs) or O(d log n) times (for greedy on trees). We also provide a corresponding information-theoretic lower bound of Ω(d log n); thus our bounds are essentially tight. Furthermore, if we are given side-information in the form of a super-graph of the actual graph (as is often the case), then the number of epidemic samples required -- in all cases -- becomes independent of the network size n.",
"Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.",
"In many real-world scenarios, it is nearly impossible to collect explicit social network data. In such cases, whole networks must be inferred from underlying observations. Here, we formulate the problem of inferring latent social networks based on network diffusion or disease propagation data. We consider contagions propagating over the edges of an unobserved social network, where we only observe the times when nodes became infected, but not who infected them. Given such node infection times, we then identify the optimal network that best explains the observed data. We present a maximum likelihood approach based on convex programming with a l1-like penalty term that encourages sparsity. Experiments on real and synthetic data reveal that our method near-perfectly recovers the underlying network structure as well as the parameters of the contagion propagation model. Moreover, our approach scales well as it can infer optimal networks of thousands of nodes in a matter of minutes."
]
} |
1410.7659 | 2054050596 | In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics. The Glauber dynamics is a Markov chain that sequentially updates individual nodes (variables) in a graphical model and it is frequently used to sample from the stationary distribution (to which it converges given sufficient time). Additionally, the Glauber dynamics is a natural dynamical model in a variety of settings. This work deviates from the standard formulation of graphical model learning in the literature, where one assumes access to i.i.d. samples from the distribution. Much of the research on graphical model learning has been directed towards finding algorithms with low computational cost. As the main result of this work, we establish that the problem of reconstructing binary pairwise graphical models is computationally tractable when we observe the Glauber dynamics. Specifically, we show that a binary pairwise graphical model on @math nodes with maximum degree @math can be learned in time @math , for a function @math , using nearly the information-theoretic minimum number of samples. | More broadly, a number of papers in the learning theory community have considered learning functions (or concepts) from examples generated by Markov chains, including @cite_41 @cite_15 @cite_20 @cite_22 . The present paper is similar in spirit to that of @cite_20 showing that it is relatively easy to learn DNF formulas from examples generated according to a random walk as compared to i.i.d. samples. | {
"cite_N": [
"@cite_41",
"@cite_15",
"@cite_22",
"@cite_20"
],
"mid": [
"2033493462",
"2004036579",
"",
"2119243017"
],
"abstract": [
"An \"Occam algorithm\" learning model maintains a tentative hypothesis consistent with past observations and, when a new observation is inconsistent with the current hypothesis, updates to the next-simplest hypothesis consistent with all observations. In previous work, observations were assumed to be stochastically independent. This paper initiates study of such models under weaker Markovian assumptions on the observations. In the special case where the sequence of hypotheses satisfies a monotonicity condition, it is shown that the number of mistakes in classifying the first t observations is O(√t log 1/π_i), where π_i is the stationary probability of the initial state, i, of the Markov chain.",
"In this paper we consider an approach to passive learning. In contrast to the classical PAC model we do not assume that the examples are independently drawn according to an underlying distribution, but that they are generated by a time-driven process. We define deterministic and probabilistic learning models of this sort and investigate the relationships between them and with other models. The fact that successive examples are related can often be used to gain additional information similar to the information gained by membership queries. We show that this can be used to design on-line prediction algorithms. In particular, we present efficient algorithms for exactly identifying Boolean threshold functions, 2-term RSE, and 2-term-DNF, when the examples are generated by a random walk on {0,1}^n.",
"",
"We consider a model of learning Boolean functions from examples generated by a uniform random walk on {0, 1}^n. We give a polynomial time algorithm for learning decision trees and DNF formulas in this model. This is the first efficient algorithm for learning these classes in a natural passive learning model where the learner has no influence over the choice of examples used for learning."
]
} |
1410.7414 | 2952089254 | We analyze the problem of regression when both input covariates and output responses are functions from a nonparametric function class. Function to function regression (FFR) covers a large range of interesting applications including time-series prediction problems, and also more general tasks like studying a mapping between two separate types of distributions. However, previous nonparametric estimators for FFR type problems scale badly computationally with the number of input output pairs in a data-set. Given the complexity of a mapping between general functions it may be necessary to consider large data-sets in order to achieve a low estimation risk. To address this issue, we develop a novel scalable nonparametric estimator, the Triple-Basis Estimator (3BE), which is capable of operating over datasets with many instances. To the best of our knowledge, the 3BE is the first nonparametric FFR estimator that can scale to massive datasets. We analyze the 3BE's risk and derive an upperbound rate. Furthermore, we show an improvement of several orders of magnitude in terms of prediction speed and a reduction in error over previous estimators in various real-world data-sets. | A previous nonparametric FFR estimator was proposed in @cite_9 . @cite_9 attempts to perform FFR on a functional RKHS. That is, if we consider @math as a functional Hilbert space, where @math is such that @math , then @math is estimated by @math However, when each function is observed though @math noisy function evaluations this estimator will require the inversion of a @math matrix, which will be computationally infeasible for data-sets of even a modest size. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2127128771"
],
"abstract": [
"This paper deals with functional regression, in which the input attributes as well as the response are functions. To deal with this problem, we develop a functional reproducing kernel Hilbert space approach; here, a kernel is an operator acting on a function and yielding a function. We demonstrate basic properties of these functional RKHS, as well as a representer theorem for this setting; we investigate the construction of kernels; we provide some experimental insight."
]
} |
1410.7414 | 2952089254 | We analyze the problem of regression when both input covariates and output responses are functions from a nonparametric function class. Function to function regression (FFR) covers a large range of interesting applications including time-series prediction problems, and also more general tasks like studying a mapping between two separate types of distributions. However, previous nonparametric estimators for FFR type problems scale badly computationally with the number of input output pairs in a data-set. Given the complexity of a mapping between general functions it may be necessary to consider large data-sets in order to achieve a low estimation risk. To address this issue, we develop a novel scalable nonparametric estimator, the Triple-Basis Estimator (3BE), which is capable of operating over datasets with many instances. To the best of our knowledge, the 3BE is the first nonparametric FFR estimator that can scale to massive datasets. We analyze the 3BE's risk and derive an upperbound rate. Furthermore, we show an improvement of several orders of magnitude in terms of prediction speed and a reduction in error over previous estimators in various real-world data-sets. | In addition, @cite_14 provides an estimator for doing FFR for the special case where both input and output functions are probability distribution functions. The estimator, henceforth referred to as the linear smoother estimator (LSE), works as follows when given a training data-sets of @math of empirical functional observations and @math of function estimates and a function estimate @math of a new query input function: Here @math is taken to be a symmetric kernel with bounded support, and @math is some metric over functions. However, while such an estimator is useful for smaller FFR problems, it may not be used in larger data-sets. 
Clearly, the LSE must perform a kernel evaluation with all input distributions in one's data-set to produce a prediction, leading to a total computational cost of @math when considering the cost of computing metrics @math when @math . This implies, for example, that obtaining estimates for each training instance scales as @math , which will be prohibitive for big data-sets. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1750591676"
],
"abstract": [
"We analyze 'Distribution to Distribution regression' where one is regressing a mapping where both the covariate (inputs) and response (outputs) are distributions. No parameters on the input or output distributions are assumed, nor are any strong assumptions made on the measure from which input distributions are drawn from. We develop an estimator and derive an upper bound for the L2 risk; also, we show that when the effective dimension is small enough (as measured by the doubling dimension), then the risk converges to zero with a polynomial rate."
]
} |
1410.7367 | 2952776460 | Long-term sensor network deployments demand careful power management. While managing power requires understanding the amount of energy harvestable from the local environment, current solar prediction methods rely only on recent local history, which makes them susceptible to high variability. In this paper, we present a model and algorithms for distributed solar current prediction, based on multiple linear regression to predict future solar current based on local, in-situ climatic and solar measurements. These algorithms leverage spatial information from neighbors and adapt to the changing local conditions not captured by global climatic information. We implement these algorithms on our Fleck platform and run a 7-week-long experiment validating our work. In analyzing our results from this experiment, we determined that computing our model requires an increased energy expenditure of 4.5mJ over simpler models (on the order of 10^ -7 of the harvested energy) to gain a prediction improvement of 39.7 . | Energy Prediction Past research projects into energy prediction focus on predicting the future harvestable energy to input into power management models. @cite_12 perform offline linear programming techniques to predict future energy in addition to control methods; the paper describes a simulation of this with no field instantiation. @cite_8 compute offline predictions of sunlight for daylight harvesting. Their approach combines regression analysis with similarity identification and ensemble predictions; the work focuses on finer-grained predictions than ours in addition to its online focus. @cite_21 include National Weather Service forecasts in an offline model to predict solar panel output; as described in their paper, the work is specific to their solar panel system with a focus on data analysis and offline simulation. | {
"cite_N": [
"@cite_21",
"@cite_12",
"@cite_8"
],
"mid": [
"1987307249",
"2103113731",
"2128459757"
],
"abstract": [
"To sustain perpetual operation, systems that harvest environmental energy must carefully regulate their usage to satisfy their demand. Regulating energy usage is challenging if a system's demands are not elastic and its hardware components are not energy-proportional, since it cannot precisely scale its usage to match its supply. Instead, the system must choose when to satisfy its energy demands based on its current energy reserves and predictions of its future energy supply. In this paper, we explore the use of weather forecasts to improve a system's ability to satisfy demand by improving its predictions. We analyze weather forecast, observational, and energy harvesting data to formulate a model that translates a weather forecast to a wind or solar energy harvesting prediction, and quantify its accuracy. We evaluate our model for both energy sources in the context of two different energy harvesting sensor systems with inelastic demands: a sensor testbed that leases sensors to external users and a lexicographically fair sensor network that maintains steady node sensing rates. We show that using weather forecasts in both wind- and solar-powered sensor systems increases each system's ability to satisfy its demands compared with existing prediction strategies.",
"This paper is concerned with solar driven sensors deployed in an outdoor environment. We present feedback controllers which adapt parameters of the application such that a maximal utility is obtained while respecting the time-varying amount of available energy. We show that already simple applications lead to complex optimization problems, involving unacceptable running times and energy consumptions for resource constrained nodes. In addition, naive designs are highly susceptible to energy prediction errors. We address both issues by proposing a hierarchical control approach which both reduces complexity and increases robustness towards prediction uncertainty. As a key component of this hierarchical approach, we propose a new worst-case energy prediction algorithm which guarantees sustainable operation. All methods are evaluated using long-term measurements of solar energy in an outdoor setting. Furthermore, we measured the implementation overhead on a real sensor node.",
"ABSTRACT Daylight harvesting is the use of natural sunlight to reduce the need for artificial lighting in buildings. The key challenge of daylight harvesting is to provide stable indoor lighting levels even though natural sunlight is not a stable light source. In this paper, we present a new technique called SunCast that improves lighting stability by predicting changes in future sunlight levels. The system has two parts: 1) it learns predictable sunlight patterns due to trees, nearby buildings, or other environmental factors, and 2) it controls the window transparency based on a quadratic optimization over predicted sunlight levels. To evaluate the system, we record daylight levels at 39 different windows for up to 12 weeks at a time, and apply our control algorithm on the data traces. Our results indicate that SunCast can reduce glare by 59 over a baseline approach with only a marginal increase in artificial lighting energy."
]
} |
1410.7367 | 2952776460 | Long-term sensor network deployments demand careful power management. While managing power requires understanding the amount of energy harvestable from the local environment, current solar prediction methods rely only on recent local history, which makes them susceptible to high variability. In this paper, we present a model and algorithms for distributed solar current prediction, based on multiple linear regression to predict future solar current based on local, in-situ climatic and solar measurements. These algorithms leverage spatial information from neighbors and adapt to the changing local conditions not captured by global climatic information. We implement these algorithms on our Fleck platform and run a 7-week-long experiment validating our work. In analyzing our results from this experiment, we determined that computing our model requires an increased energy expenditure of 4.5mJ over simpler models (on the order of 10^ -7 of the harvested energy) to gain a prediction improvement of 39.7 . | @cite_3 and @cite_15 examine a prediction method using an Exponentially Weighted Moving Average Model; we discuss this approach in our simulation results. Extensions to EWMA expand the set of data included in the moving average computation as described in @cite_4 , @cite_0 , and @cite_19 . @cite_18 explore EWMA, expansions to EWMA, and additional models, including neural networks. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_15"
],
"mid": [
"2049209435",
"2123022663",
"2122021409",
"2073984308",
"2163900315",
"2104580599"
],
"abstract": [
"Small size photovoltaic modules can harvest enough energy to power many personal devices and wireless sensor nodes. The prediction of solar energy intake is possible thanks to the periodical availability of the sunlight and its cyclic behavior. Thus, smart and innovative power management strategies can take advantage from intake prediction algorithms to optimize the energy usage by keeping the system in low power state as long as possible. On the other hand, very accurate predictions need time and energy because of complex calculations, thus an algorithm that can provide the optimal trade-off between computational effort and accuracy is a breakthrough for systems with tight power constraints. In this paper we introduce an innovative, efficient and reliable solar prediction algorithm, the weather conditioned moving average (WCMA). The algorithm has been further enhanced to increase performance using a phase displacement regulator (PDR) which reduces the average error to less than 9.2 at a minimum energy cost. The proposed new algorithm compares favorably with several competing approaches.",
"Recent advances in energy harvesting materials and ultra-low-power communications will soon enable the realization of networks composed of energy harvesting devices. These devices will operate using very low ambient energy, such as indoor light energy. We focus on characterizing the energy availability in indoor environments and on developing energy allocation algorithms for energy harvesting devices. First, we present results of our long-term indoor radiant energy measurements, which provide important inputs required for algorithm and system design (e.g., determining the required battery sizes). Then, we focus on algorithm development, which requires nontraditional approaches, since energy harvesting shifts the nature of energy-aware protocols from minimizing energy expenditure to optimizing it. Moreover, in many cases, different energy storage types (rechargeable battery and a capacitor) require different algorithms. We develop algorithms for determining time fair energy allocation in systems with predictable energy inputs, as well as in systems where energy inputs are stochastic.",
"Harvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks. In this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment. The algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energy-neutrality constraint, and (c) adapting to the dynamics of the energy source at run-time. We present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data. We also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source. Our methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58 more environmental energy compared to the case when harvesting-aware power management is not used.",
"Recently, Wireless Sensor Networks (WSNs) begin to use the solar energy. But the energy from the environment is usually unstable, An efficient solar prediction algorithm should be studied. In this paper, a novel solar energy prediction algorithm, namely Weather-Conditioned Selective Moving Average (WCSMA), is proposed by using the trend similarity of energy harvesting and the classification of sunny and cloudy days. The simulation results show that the relative mean error of WCSMA algorithm only around 10 , and it is much lower than the Exponential Weighted Moving Average (EWMA) algorithm which is widely used now.",
"Solar panels are frequently used in wireless sensor nodes because they can theoretically provide quite a bit of harvested energy. However, they are not a reliable, consistent source of energy because of the Sun's cycles and the everchanging weather conditions. Thus, in this paper we present a fast, efficient and reliable solar prediction algorithm, namely, Weather-Conditioned Moving Average (WCMA) that is capable of exploiting the solar energy more efficiently than state-of-the-art energy prediction algorithms (e.g. Exponential Weighted Moving Average EWMA). In particular, WCMA is able to effectively take into account both the current and past-days weather conditions, obtaining a relative mean error of only 10 . When coupled with energy management algorithm, it can achieve gains of more than 90 in energy utilization with respect to EWMA under the real working conditions of the Shimmer node, an active sensing platform for structural health monitoring.",
"Energy harvesting offers a promising alternative to solve the sustainability limitations arising from battery size constraints in sensor networks. Several considerations in using an environmental energy source are fundamentally different from using batteries. Rather than a limit on the total energy, harvesting transducers impose a limit on the instantaneous power available. Further, environmental energy availability is often highly variable and a deterministic metric such as residual battery capacity is not available to characterize the energy source. The different nodes in a sensor network may also have different energy harvesting opportunities. Since the same end-user performance may be achieved using different workload allocations at multiple nodes, it is important to adapt the workload allocation to the spatio-temporal energy availability profile in order to enable energy-neutral operation of the network. This paper describes power management techniques for such energy harvesting sensor networks. Platform design considerations as well as power scaling techniques at the node-level and network-level are described."
]
} |