aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1409.4695 | 2071348697 | The massive presence of silent members in online communities, the so-called lurkers, has long attracted the attention of researchers in social science, cognitive psychology, and computer–human interaction. However, the study of lurking phenomena represents an unexplored opportunity of research in data mining, information retrieval and related fields. In this paper, we take a first step towards the formal specification and analysis of lurking in social networks. We address the new problem of lurker ranking and propose the first centrality methods specifically conceived for ranking lurkers in social networks. Our approach utilizes only the network topology without probing into text contents or user relationships related to media. Using Twitter, Flickr, FriendFeed and GooglePlus as cases in point, our methods’ performance was evaluated against data-driven rankings as well as existing centrality methods, including the classic PageRank and alpha-centrality. Empirical evidence has shown the significance of our lurker ranking approach, and its uniqueness in effectively identifying and ranking lurkers in an online social network. | Our definition of lurking is substantially consistent with the various existing perspectives on lurking, previously mentioned in the Introduction. It can in general recognize and measure behaviors that rely on phenomena of lack of information production (i.e., inactivity or occasional activity) as well as on phenomena of information hoarding or overconsumption, like free-riding and leeching. It is worth emphasizing that taking into account the authoritativeness of the information received as well as the non-authoritativeness of the information produced by lurkers is essential to the correct scoring of lurkers. Therefore, our definition of lurking can also explain more complex perspectives, such as legitimate peripheral participation. 
In this case, a lurker is regarded as a novice, for whom it is legitimate to learn from experts as a form of cognitive apprenticeship. Indeed, by applying our LurkerRank methods, in @cite_52 we have addressed an exemplary form of legitimate peripheral participation, known as vicarious learning, in the context of research collaboration networks. | {
"cite_N": [
"@cite_52"
],
"mid": [
"104101700"
],
"abstract": [
"Despite being a topic of growing interest in social learning theory, vicarious learning has not been well-studied so far in digital library related tasks. In this paper, we address a novel ranking problem in research collaboration networks, which focuses on the role of vicarious learner. We introduce a topology-driven vicarious learning definition and propose the first centrality method for ranking vicarious learners. Results obtained on DBLP networks support the significance and uniqueness of the proposed approach."
]
} |
1409.5165 | 2950899690 | A survey of existing methods for stopping active learning (AL) reveals the needs for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of different annotation performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods with little (if any) attention paid to providing users with control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports providing users with control over stopping behavior. | The confidence-based stopping criterion (hereafter, V2008) in @cite_11 stops when model confidence consistently drops. As pointed out in @cite_11 , this stopping criterion rests on the assumption that the learner's feature representation is incapable of fully explaining all the examples. However, this assumption is often violated, and the performance of the method then suffers (see ). | {
"cite_N": [
"@cite_11"
],
"mid": [
"2065383075"
],
"abstract": [
"Active learning (AL) is a framework that attempts to reduce the cost of annotating training material for statistical learning methods. While a lot of papers have been presented on applying AL to natural language processing tasks reporting impressive savings, little work has been done on defining a stopping criterion. In this work, we present a stopping criterion for active learning based on the way instances are selected during uncertainty-based sampling and verify its applicability in a variety of settings. The statistical learning models used in our study are support vector machines (SVMs), maximum entropy models and Bayesian logistic regression and the tasks performed are text classification, named entity recognition and shallow parsing. In addition, we present a method for multiclass mutually exclusive SVM active learning."
]
} |
1409.4043 | 1915603210 | This paper presents the development of a new algorithm for Gaussian based color image enhancement system. The algorithm has been designed into architecture suitable for FPGA ASIC implementation. The color image enhancement is achieved by first convolving an original image with a Gaussian kernel since Gaussian distribution is a point spread function which smoothes the image. Further, logarithm-domain processing and gain offset corrections are employed in order to enhance and translate pixels into the display range of 0 to 255. The proposed algorithm not only provides better dynamic range compression and color rendition effect but also achieves color constancy in an image. The design exploits high degrees of pipelining and parallel processing to achieve real time performance. The design has been realized by RTL compliant Verilog coding and fits into a single FPGA with a gate count utilization of 321,804. The proposed method is implemented using Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device and is capable of processing high resolution color motion pictures of sizes of up to 1600×1200 pixels at the real time video rate of 116 frames per second. This shows that the proposed design would work for not only still images but also for high resolution video sequences. | Digital Signal Processors (DSPs) @cite_3 @cite_0 have been employed for image enhancement, providing some improvement over general-purpose computers. However, only marginal improvement has been achieved, since the parallelism and pipelining incorporated in the design are inadequate. This scheme uses optimized DSP libraries for complex operations and does not take full advantage of the inherent parallelism of the image enhancement algorithm. The neural-network-based learning algorithm of @cite_13 provides an excellent solution for color image enhancement with color restoration.
The hardware implementation of these algorithms parallelizes the computation and delivers real-time throughput for color image enhancement. However, window-related operations such as convolution, summation and matrix dot products in an image enhancement architecture demand an enormous amount of hardware resources. | {
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_3"
],
"mid": [
"2045870168",
"2065229269",
"1759950926"
],
"abstract": [
"The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.",
"",
"The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. It performs a non-linear spatial spectral transform that provides simultaneous dynamic range compression and color constancy. It has been used for a wide variety of applications ranging from aviation safety to general purpose photography. Many potential applications require the use of Retinex processing at video frame rates. This is difficult to achieve with general purpose processors because the algorithm contains a large number of complex computations and data transfers. In addition, many of these applications also constrain the potential architectures to embedded processors to save power, weight and cost. Thus we have focused on digital signal processors (DSPs) and field programmable gate arrays (FPGAs) as potential solutions for real-time Retinex processing. In previous efforts we attained a 21 (full) frame per second (fps) processing rate for the single-scale monochromatic Retinex with a TMS320C6711 DSP operating at 150 MHz. This was achieved after several significant code improvements and optimizations. Since then we have migrated our design to the slightly more powerful TMS320C6713 DSP and the fixed point TMS320DM642 DSP. In this paper we briefly discuss the Retinex algorithm, the performance of the algorithm executing on the TMS320C6713 and the TMS320DM642, and compare the results with the TMS320C6711."
]
} |
1409.4043 | 1915603210 | This paper presents the development of a new algorithm for Gaussian based color image enhancement system. The algorithm has been designed into architecture suitable for FPGA ASIC implementation. The color image enhancement is achieved by first convolving an original image with a Gaussian kernel since Gaussian distribution is a point spread function which smoothes the image. Further, logarithm-domain processing and gain offset corrections are employed in order to enhance and translate pixels into the display range of 0 to 255. The proposed algorithm not only provides better dynamic range compression and color rendition effect but also achieves color constancy in an image. The design exploits high degrees of pipelining and parallel processing to achieve real time performance. The design has been realized by RTL compliant Verilog coding and fits into a single FPGA with a gate count utilization of 321,804. The proposed method is implemented using Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device and is capable of processing high resolution color motion pictures of sizes of up to 1600×1200 pixels at the real time video rate of 116 frames per second. This shows that the proposed design would work for not only still images but also for high resolution video sequences. | Hiroshi @cite_2 proposed an FPGA implementation of adaptive real-time video image enhancement based on a variational model of the Retinex theory. The authors claim that the architectures developed in this scheme are efficient and can handle color pictures of size @math pixels at the real-time video rate of 60 frames per second. However, the authors have not justified how such high throughput is achieved despite the roughly 30 time-consuming iterations required. Abdullah M. @cite_8 proposed a new approach for histogram equalization using FPGAs. Although efficient architectures were developed for histogram equalization, the images restored using this scheme are generally not satisfactory. | {
"cite_N": [
"@cite_8",
"@cite_2"
],
"mid": [
"2533972219",
"1533497594"
],
"abstract": [
"This paper presents a novel design for real-time histogram equalization based on field programmable gate arrays (FPGAs). The design is implemented using non-conventional schemes to compute the histogram statistics and equalization in parallel. Counters are used in conjunction with a dedicated decoder specially designed for this purpose. The hardware is fast, simple, and flexible with reasonable development cost. The proposed system is implemented using Stratix II family chip type EP2S15F484C3. The maximum clock frequency can reach up to 250 MHz. In this case, the total time required to perform histogram equalization for an image of size 256 spl times 256 is 0.262 ms.",
"In this paper, we present an FPGA implementation of real-time Retinex video image enhancement. Our implementation is based on the previously proposed architecture, which can handle the variational approach of the Retinex theory. In order to efficiently reduce the enormous computational cost required for image enhancement, processing layers and repeat counts of iterations are determined in accordance with software evaluation result. As for processing architecture, our pipelining architecture can handle high resolution pictures in real-time. Our FPGA implementation supports WUXGA (1,920×1,200) 60 fps as well as 1080p60."
]
} |
1409.4043 | 1915603210 | This paper presents the development of a new algorithm for Gaussian based color image enhancement system. The algorithm has been designed into architecture suitable for FPGA ASIC implementation. The color image enhancement is achieved by first convolving an original image with a Gaussian kernel since Gaussian distribution is a point spread function which smoothes the image. Further, logarithm-domain processing and gain offset corrections are employed in order to enhance and translate pixels into the display range of 0 to 255. The proposed algorithm not only provides better dynamic range compression and color rendition effect but also achieves color constancy in an image. The design exploits high degrees of pipelining and parallel processing to achieve real time performance. The design has been realized by RTL compliant Verilog coding and fits into a single FPGA with a gate count utilization of 321,804. The proposed method is implemented using Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device and is capable of processing high resolution color motion pictures of sizes of up to 1600×1200 pixels at the real time video rate of 116 frames per second. This shows that the proposed design would work for not only still images but also for high resolution video sequences. | An efficient architecture for the enhancement of video streams captured under non-uniform lighting conditions was proposed by Ming Z. @cite_6 . The new architecture processes images and streaming video in the HSV domain with a homomorphic filter and converts the result back to RGB. This incurs additional computational cost, and the error rate of the RGB-to-HSV conversion process is high. A digital architecture for real-time video enhancement based on the illuminance-reflectance model was proposed by Hau T. @cite_5 . This scheme improves the visual quality of digital images and video captured under insufficient and non-uniform lighting conditions.
@cite_11 proposed a spatial-based, adaptive and reusable hardware architecture for image enhancement. However, the histogram modification used in this scheme treats all regions of the image equally and often results in poor local performance, which in turn affects image details. The modified luminance-based multiscale retinex algorithm proposed in @cite_1 achieves optimal enhancement results with minimal hardware implementation complexity. However, the algorithm works well only as long as the background is dark and the object is bright. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_6",
"@cite_11"
],
"mid": [
"2041794337",
"2147730744",
"1969128393",
"1660105869"
],
"abstract": [
"A design of a high performance digital architecture for a nonlinear image enhancement technique is presented in this paper. The image enhancement is based on illuminance-reflectance model which improves the visual quality of digital images and video captured under insufficient or non-uniform lighting conditions [1]. Systolic, pipelined and parallel design techniques are utilized effectively in the proposed FPGA-based architectural design to achieve real-time performance. Estimation and folding techniques are used in the hardware algorithmic design to achieve faster, simpler and more efficient architecture. The video enhancement system is implemented using Xilinx's multimedia development board that contains a VirtexII-X2000 FPGA and it is capable of processing approximately 66 Mega-pixels (Mpixels) per second.",
"A luminance based multi scale retinex (LB spl I.bar MSR) algorithm for the enhancement of darker images is proposed in this paper. The new technique consists only the addition of the convolution results of 3 different scales. In this way, the color noise in the shadow dark areas can be suppressed and the convolutions with different scales can be calculated simultaneously to save CPU time. Color saturation adjustment for producing more natural colors is implemented. Each spectral band can be adjusted based on the enhancement of the intensity of the band and by using a color saturation parameter. The color saturation degree can be automatically adjusted according to different types of images by compensating the original color saturation in each band. Luminance control is applied to prevent the unwanted luminance drop at the uniform luminance areas by automatically detecting the luminance drop and keeping the luminance up to certain level that is evaluated from the original image. Down-sized convolution is used for fast processing and then the result is re-sized back to the original size. Performance of the new enhancement algorithm is tested in various images captured at different lighting conditions. It is observed that the new technique outperforms the conventional MSR technique in terms of the quality of the enhanced images and computational speed.",
"A novel architecture for performing hue-saturation-value (HSV) domain enhancement of digital color images with non-uniform lighting conditions is proposed in this paper for video streaming applications. The approach promotes log-domain computation to eliminate all multiplications, divisions and exponentiations utilizing the effective logarithmic estimation modules. An optimized quadrant symmetric architecture is incorporated into the design of homomorphic filter for the enhancement of intensity value. Efficient modules are also presented for conversion between RGB and HSV color spaces. The design is able to bring out details hidden in shadow regions of the image. It is capable of producing 187.86 million outputs per second (MOPs) on Xilinx's Virtex II XC2V2000-4ff896 field programmable gate array (FPGA) at a clock frequency of 187.86 MHz. It can process over 179.1 (1024 X 1024) frames per second and consumes approximately 70.7 and 76.8 less hardware resource with 127 and 280 performance boost when compared to the designs with machine learning algorithm [10], and with separated dynamic and contrast enhancements [11], respectively.",
"The aim of this work is to design a real-time adaptive and reusable image enhancement architecture for video signals, based on a statistical processing of the video sequence. The VHDL hardware description language has been used in order to make possible a top-down design methodology. Generic design methodology has been followed by means of two features of the VHDL: global packages and generic pass. Image processing systems like this one require specific simulation tools in order to reduce the development time. A VHDL test bench has been designed specifically for image processing applications to facilitate the simulation process. It was necessary to define a new image file format with special characteristics for this purpose. A physical realization has been carried out on a FPGA to prove the validation of the design."
]
} |
1409.4043 | 1915603210 | This paper presents the development of a new algorithm for Gaussian based color image enhancement system. The algorithm has been designed into architecture suitable for FPGA ASIC implementation. The color image enhancement is achieved by first convolving an original image with a Gaussian kernel since Gaussian distribution is a point spread function which smoothes the image. Further, logarithm-domain processing and gain offset corrections are employed in order to enhance and translate pixels into the display range of 0 to 255. The proposed algorithm not only provides better dynamic range compression and color rendition effect but also achieves color constancy in an image. The design exploits high degrees of pipelining and parallel processing to achieve real time performance. The design has been realized by RTL compliant Verilog coding and fits into a single FPGA with a gate count utilization of 321,804. The proposed method is implemented using Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device and is capable of processing high resolution color motion pictures of sizes of up to 1600×1200 pixels at the real time video rate of 116 frames per second. This shows that the proposed design would work for not only still images but also for high resolution video sequences. | The limitations mentioned earlier are overcome efficiently in the proposed method. To start with, the input image is convolved with a @math Gaussian kernel in order to smooth the image. Further, the dynamic range of the image is compressed by replacing each pixel with its logarithm. In the proposed method, the image enhancement operations are arranged efficiently, adding true color constancy at every step. The method has fewer parameters to specify and provides true color fidelity. In addition, the proposed algorithm is computationally inexpensive.
In the proposed scheme, an additional step is necessary to solve the gray world violation problem as is the case with the implementation reported in Ref. @cite_1 . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2147730744"
],
"abstract": [
"A luminance based multi scale retinex (LB spl I.bar MSR) algorithm for the enhancement of darker images is proposed in this paper. The new technique consists only the addition of the convolution results of 3 different scales. In this way, the color noise in the shadow dark areas can be suppressed and the convolutions with different scales can be calculated simultaneously to save CPU time. Color saturation adjustment for producing more natural colors is implemented. Each spectral band can be adjusted based on the enhancement of the intensity of the band and by using a color saturation parameter. The color saturation degree can be automatically adjusted according to different types of images by compensating the original color saturation in each band. Luminance control is applied to prevent the unwanted luminance drop at the uniform luminance areas by automatically detecting the luminance drop and keeping the luminance up to certain level that is evaluated from the original image. Down-sized convolution is used for fast processing and then the result is re-sized back to the original size. Performance of the new enhancement algorithm is tested in various images captured at different lighting conditions. It is observed that the new technique outperforms the conventional MSR technique in terms of the quality of the enhanced images and computational speed."
]
} |
1409.4276 | 2950337035 | The Minimum Quartet Tree Cost problem is to construct an optimal weight tree from the @math weighted quartet topologies on @math objects, where optimality means that the summed weight of the embedded quartet topologies is optimal (so it can be the case that the optimal tree embeds all quartets as nonoptimal topologies). We present a Monte Carlo heuristic, based on randomized hill climbing, for approximating the optimal weight tree, given the quartet topology weights. The method repeatedly transforms a dendrogram, with all objects involved as leaves, achieving a monotonic approximation to the exact single globally optimal tree. The problem and the solution heuristic has been extensively used for general hierarchical clustering of nontree-like (non-phylogeny) data in various domains and across domains with heterogeneous data. We also present a greatly improved heuristic, reducing the running time by a factor of order a thousand to ten thousand. All this is implemented and available, as part of the CompLearn package. We compare performance and running time of the original and improved versions with those of UPGMA, BioNJ, and NJ, as implemented in the SplitsTree package on genomic data for which the latter are optimized. Keywords: Data and knowledge visualization, Pattern matching--Clustering--Algorithms Similarity measures, Hierarchical clustering, Global optimization, Quartet tree, Randomized hill-climbing, | (i) Incrementally grow the tree in random order by stepwise addition of objects in the locally optimal way, repeat this for different object orders, and add agreement values on the branches, like DNAML @cite_11 , or Quartet Puzzling @cite_17 . These methods are fast, but suffer from the usual bottom-up problem: a wrong decision early on cannot be corrected later. Another possible problem is as follows. Suppose we have just 32 items. 
With Quartet Puzzling we incrementally construct a quartet tree from a randomly ordered list of elements, where each next element is optimally connected to the current tree comprising the previous elements. We repeat this process for, say, 1000 permutations. Subsequently, we look for percentage agreement of subtrees common to all such trees. But the number of permutations is about @math , so why would the incrementally locally optimal trees derived from 1000 random permutations be a representative sample from which we can conclude anything about the globally optimal tree? | {
"cite_N": [
"@cite_17",
"@cite_11"
],
"mid": [
"2164997158",
"2102424972"
],
"abstract": [
"A versatile method, quartet puzzling, is introduced to reconstruct the topology (branching pattern) of a phylogenetic tree based on DNA or amino acid sequence data. This method applies maximum-likelihood tree reconstruction to all possible quartets that can be formed from n sequences. The quartet trees serve as starting points to reconstruct a set of optimal n-taxon trees. The majority rule consensus of these trees defines the quartet puzzling tree and shows groupings that are well supported. Computer simulations show that the performance of quartet puzzling to reconstruct the true tree is always equal to or better than that of neighbor joining. For some cases with high transition transversion bias quartet puzzling outperforms neighbor joining by a factor of 10. The application of quartet puzzling to mitochondrial RNA and tRNAVd’ sequences from amniotes demonstrates the power of the approach. A PHYLIP-compatible ANSI C program, PUZZLE, for analyzing nucleotide or amino acid sequence data is available.",
"The application of maximum likelihood techniques to the estimation of evolutionary trees from nucleic acid sequence data is discussed. A computationally feasible method for finding such maximum likelihood estimates is developed, and a computer program is available. This method has advantages over the traditional parsimony algorithms, which can give misleading results if rates of evolution differ in different lineages. It also allows the testing of hypotheses about the constancy of evolutionary rates by likelihood ratio tests, and gives rough indication of the error of the estimate of the tree."
]
} |
1409.4155 | 1893474868 | This work focuses on active learning of distance metrics from relative comparison information. A relative comparison specifies, for a data point triplet @math , that instance @math is more similar to @math than to @math . Such constraints, when available, have been shown to be useful toward defining appropriate distance metrics. In real-world applications, acquiring constraints often require considerable human effort. This motivates us to study how to select and query the most useful relative comparisons to achieve effective metric learning with minimum user effort. Given an underlying class concept that is employed by the user to provide such constraints, we present an information-theoretic criterion that selects the triplet whose answer leads to the highest expected gain in information about the classes of a set of examples. Directly applying the proposed criterion requires examining @math triplets with @math instances, which is prohibitive even for datasets of moderate size. We show that a randomized selection strategy can be used to reduce the selection pool from @math to @math , allowing us to scale up to larger-size problems. Experiments show that the proposed method consistently outperforms two baseline policies. | The authors of @cite_12 proposed one of the first formal approaches for distance metric learning with side information. In this study, they considered pairwise information indicating whether two instances are similar or dissimilar. A distance metric is learned by minimizing the distances between instances in similar pairs while keeping the distances between dissimilar pairs large. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2117154949"
],
"abstract": [
"Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance."
]
} |
1409.4155 | 1893474868 | This work focuses on active learning of distance metrics from relative comparison information. A relative comparison specifies, for a data point triplet @math , that instance @math is more similar to @math than to @math . Such constraints, when available, have been shown to be useful toward defining appropriate distance metrics. In real-world applications, acquiring constraints often require considerable human effort. This motivates us to study how to select and query the most useful relative comparisons to achieve effective metric learning with minimum user effort. Given an underlying class concept that is employed by the user to provide such constraints, we present an information-theoretic criterion that selects the triplet whose answer leads to the highest expected gain in information about the classes of a set of examples. Directly applying the proposed criterion requires examining @math triplets with @math instances, which is prohibitive even for datasets of moderate size. We show that a randomized selection strategy can be used to reduce the selection pool from @math to @math , allowing us to scale up to larger-size problems. Experiments show that the proposed method consistently outperforms two baseline policies. | Distance metric learning with relative comparisons has also been studied in different contexts @cite_5 @cite_7 . The authors of @cite_7 formulated a constrained optimization problem where the constraints are defined by relative comparisons and the objective is to learn a distance metric that remains as close as possible to an unweighted Euclidean metric. @cite_5 proposed to learn a projection matrix from relative comparisons. This approach also employed relative comparisons to create constraints on the solution space, but optimized a different objective that encourages sparsity of the learned projection matrix.
Both studies assumed that the relative comparisons are given and that the constraints form a random or otherwise non-optimized set of pre-selected triplets. That is, the algorithm is not allowed to request comparisons outside of the given set. | {
"cite_N": [
"@cite_5",
"@cite_7"
],
"mid": [
"2093357278",
"2118393783"
],
"abstract": [
"Calculation of object similarity, for example through a distance function, is a common part of data mining and machine learning algorithms. This calculation is crucial for efficiency since distances are usually evaluated a large number of times, the classical example being query-by-example (find objects that are similar to a given query object). Moreover, the performance of these algorithms depends critically on choosing a good distance function. However, it is often the case that (1) the correct distance is unknown or chosen by hand, and (2) its calculation is computationally expensive (e.g., such as for large dimensional objects). In this paper, we propose a method for constructing relative-distance preserving low-dimensional mapping (sparse mappings). This method allows learning unknown distance functions (or approximating known functions) with the additional property of reducing distance computation time. We present an algorithm that given examples of proximity comparisons among triples of objects (object i is more like object j than object k), learns a distance function, in as few dimensions as possible, that preserves these distance relationships. The formulation is based on solving a linear programming optimization problem that finds an optimal mapping for the given dataset and distance relationships. Unlike other popular embedding algorithms, this method can easily generalize to new points, does not have local minima, and explicitly models computational efficiency by finding a mapping that is sparse, i.e. one that depends on a small subset of features or dimensions. Experimental evaluation shows that the proposed formulation compares favorably with a state-of-the art method in several publicly available datasets.",
"This paper presents a method for learning a distance metric from relative comparison such as \"A is closer to B than A is to C\". Taking a Support Vector Machine (SVM) approach, we develop an algorithm that provides a flexible way of describing qualitative training data as a set of constraints. We show that such constraints lead to a convex quadratic programming problem that can be solved by adapting standard methods for SVM training. We empirically evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents."
]
} |
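The records above describe learning a distance metric from relative comparisons of the form "i is more similar to j than to k". A minimal sketch of that idea, assuming a diagonal Mahalanobis metric trained with a hinge-loss subgradient step (an illustrative toy, not the SVM or sparse-projection formulation of the cited papers; all names and parameters are invented):

```python
import numpy as np

def learn_diagonal_metric(X, triplets, margin=1.0, lr=0.05, epochs=200):
    """Learn a diagonal Mahalanobis metric from relative comparisons.

    Each triplet (i, j, k) encodes: X[i] is more similar to X[j] than
    to X[k].  We enforce d(i, j) + margin <= d(i, k), where d is the
    weighted squared Euclidean distance, keeping weights non-negative.
    """
    w = np.ones(X.shape[1])
    for _ in range(epochs):
        for i, j, k in triplets:
            dij = (X[i] - X[j]) ** 2          # per-dimension squared diffs
            dik = (X[i] - X[k]) ** 2
            if w @ dij + margin > w @ dik:    # constraint violated
                w -= lr * (dij - dik)         # hinge-loss subgradient step
                w = np.maximum(w, 0.0)        # keep the metric valid
    return w
```

Given a triplet contradicted by a noisy dimension, the learner down-weights that dimension, which is the qualitative behaviour the cited constraint-based formulations share.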
1409.4155 | 1893474868 | This work focuses on active learning of distance metrics from relative comparison information. A relative comparison specifies, for a data point triplet @math , that instance @math is more similar to @math than to @math . Such constraints, when available, have been shown to be useful toward defining appropriate distance metrics. In real-world applications, acquiring constraints often require considerable human effort. This motivates us to study how to select and query the most useful relative comparisons to achieve effective metric learning with minimum user effort. Given an underlying class concept that is employed by the user to provide such constraints, we present an information-theoretic criterion that selects the triplet whose answer leads to the highest expected gain in information about the classes of a set of examples. Directly applying the proposed criterion requires examining @math triplets with @math instances, which is prohibitive even for datasets of moderate size. We show that a randomized selection strategy can be used to reduce the selection pool from @math to @math , allowing us to scale up to larger-size problems. Experiments show that the proposed method consistently outperforms two baseline policies. | Active learning has also been studied for semi-supervised clustering. In relation to our work, most previous approaches concentrate on active selection of pairwise constraints @cite_16 @cite_14 @cite_19 @cite_1 . While the goal in these approaches is semi-supervised learning (not distance metric learning), we partially share their motivation. In the context of distance metric learning, an active learning strategy was proposed in @cite_3 within the larger context of a Bayesian metric learning formulation. This is the most closely related work as it addresses active learning.
However, like all of the above formulations, it uses constraints of the form must-link and cannot-link, specifying that two instances must or must not fall into the same cluster, respectively. As discussed previously, answering pairwise queries as either must-link or cannot-link constraints requires the user to make absolute judgements, making it less practical: it is more demanding for the user and also more prone to human error. In addition, none of the above formulations considers the effect of don't know answers (to a triplet relationship query) despite their importance in real, practical applications. These factors motivated us to study active learning in the current setting, a problem that has not been studied previously. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_16"
],
"mid": [
"1481689911",
"1508464538",
"1523884323",
"2135914502",
"2153839362"
],
"abstract": [
"A number of clustering algorithms have been proposed for use in tasks where a limited degree of supervision is available. This prior knowledge is frequently provided in the form of pairwise must-link and cannot-link constraints. While the incorporation of pairwise supervision has the potential to improve clustering accuracy, the composition and cardinality of the constraint sets can significantly impact upon the level of improvement. We demonstrate that it is often possible to correctly \"guess\" a large number of constraints without supervision from the co-associations between pairs of objects in an ensemble of clusterings. Along the same lines, we establish that constraints based on pairs with uncertain co-associations are particularly informative, if known. An evaluation on text data shows that this provides an effective criterion for identifying constraints, leading to a reduction in the level of supervision required to direct a clustering algorithm to an accurate solution.",
"This work focuses on the active selection of pairwise constraints for spectral clustering. We develop and analyze a technique for Active Constrained Clustering by Examining Spectral eigenvectorS (ACCESS) derived from a similarity matrix. The ACCESS method uses an analysis based on the theoretical properties of spectral decomposition to identify data items that are likely to be located on the boundaries of clusters, and for which providing constraints can resolve ambiguity in the cluster descriptions. Empirical results on three synthetic and five real data sets show that ACCESS significantly outperforms constrained spectral clustering using randomly selected constraints.",
"Distance metric learning is an important component for many tasks, such as statistical classification and content-based image retrieval. Existing approaches for learning distance metrics from pairwise constraints typically suffer from two major problems. First, most algorithms only offer point estimation of the distance metric and can therefore be unreliable when the number of training examples is small. Second, since these algorithms generally select their training examples at random, they can be inefficient if labeling effort is limited. This paper presents a Bayesian framework for distance metric learning that estimates a posterior distribution for the distance metric from labeled pair-wise constraints. We describe an efficient algorithm based on the variational method for the proposed Bayesian approach. Furthermore, we apply the proposed Bayesian framework to active distance metric learning by selecting those unlabeled example pairs with the greatest uncertainty in relative distance. Experiments in classification demonstrate that the proposed framework achieves higher classification accuracy and identifies more informative training examples than the non-Bayesian approach and state-of-the-art distance metric learning algorithms.",
"Semi-supervised clustering allows a user to specify available prior knowledge about the data to improve the clustering performance. A common way to express this information is in the form of pair-wise constraints. A number of studies have shown that, in general, these constraints improve the resulting data partition. However, the choice of constraints is critical since improperly chosen constraints might actually degrade the clustering performance. We focus on constraint (also known as query) selection for improving the performance of semi-supervised clustering algorithms. We present an active query selection mechanism, where the queries are selected using a min-max criterion. Experimental results on a variety of datasets, using MPCK-means as the underlying semi-clustering algorithm, demonstrate the superior performance of the proposed query selection procedure.",
"Semi-supervised clustering uses a small amount of supervised data to aid unsupervised learning. One typical approach specifies a limited number of must-link and cannotlink constraints between pairs of examples. This paper presents a pairwise constrained clustering framework and a new method for actively selecting informative pairwise constraints to get improved clustering performance. The clustering and active learning methods are both easily scalable to large datasets, and can handle very high dimensional data. Experimental and theoretical results confirm that this active querying of pairwise constraints significantly improves the accuracy of clustering when given a relatively small amount of supervision."
]
} |
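The active-selection theme of this record, querying the comparison the learner is least sure about, can be sketched with a simple uncertainty proxy: pick the candidate triplet whose distance margin under the current metric is smallest. This is a hypothetical stand-in, not the information-theoretic criterion of the paper above; the function name and diagonal-metric assumption are invented for illustration:

```python
import numpy as np

def most_uncertain_triplet(X, w, candidates):
    """Return the candidate triplet (i, j, k) whose current margin
    |d(i, k) - d(i, j)| under the diagonal metric w is smallest,
    i.e. the relative comparison the metric is least sure about.
    """
    def d(a, b):
        return w @ (X[a] - X[b]) ** 2   # weighted squared Euclidean distance
    return min(candidates, key=lambda t: abs(d(t[0], t[2]) - d(t[0], t[1])))
```

Querying the smallest-margin triplet first mirrors the motivation shared with active semi-supervised clustering: spend user effort where the answer is least predictable.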
1409.4565 | 1981657315 | The BitTorrent mechanism effectively spreads file fragments by copying the rarest fragments first. We propose to apply a mathematical model for the diffusion of fragments on a P2P in order to take into account both the effects of peer distances and the changing availability of peers while time goes on. Moreover, we manage to provide a forecast on the availability of a torrent thanks to a neural network that models the behaviour of peers on the P2P system. The combination of the mathematical model and the neural network provides a solution for choosing file fragments that need to be copied first, in order to ensure their continuous availability, counteracting possible disconnections by some peers. | Several studies have analysed the behaviour of BitTorrent systems from the point of view of fairness, i.e. how to have users contribute content that can be uploaded to other users, balancing the amount of downloads with that of uploads. Fewer works have studied the problem of unavailability of contents in P2P BitTorrent networks. In @cite_12 , the authors proposed ordering peers according to their uploading bandwidth, so that when providing contents the selection of peers is performed accordingly. One of the mechanisms proposed to increase file availability is multi-torrent: to ensure fairness, instead of forcing users to stay longer, users contribute to uploaders with fragments belonging to different files @cite_18 . Similarly, in @cite_3 the authors show that by using multi-torrent availability can be easily increased, and confirm that fast replication of rare fragments is essential. Furthermore, bundling, i.e. the dissemination of a number of related files together, has been proposed to increase availability @cite_6 .
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_6",
"@cite_12"
],
"mid": [
"2111346848",
"1981954478",
"2156737568",
"2166245380"
],
"abstract": [
"Existing studies on BitTorrent systems are single-torrent based, while more than 85 of all peers participate in multiple torrents according to our trace analysis. In addition, these studies are not sufficiently insightful and accurate even for single-torrent models, due to some unrealistic assumptions. Our analysis of representative Bit-Torrent traffic provides several new findings regarding the limitations of BitTorrent systems: (1) Due to the exponentially decreasing peer arrival rate in reality, service availability in such systems becomes poor quickly, after which it is difficult for the file to be located and downloaded. (2) Client performance in the BitTorrent-like systems is unstable, and fluctuates widely with the peer population. (3) Existing systems could provide unfair services to peers, where peers with high downloading speed tend to download more and upload less. In this paper, we study these limitations on torrent evolution in realistic environments. Motivated by the analysis and modeling results, we further build a graph based multi-torrent model to study inter-torrent collaboration. Our model quantitatively provides strong motivation for inter-torrent collaboration instead of directly stimulating seeds to stay longer. We also discuss a system design to show the feasibility of multi-torrent collaboration.",
"BitTorrent suffers from one fundamental problem: the long-term availability of content. This occurs on a massive-scale with 38 of torrents becoming unavailable within the first month. In this paper we explore this problem by performing two large-scale measurement studies including 46K torrents and 29M users. The studies go significantly beyond any previous work by combining per-node, per-torrent and system-wide observations to ascertain the causes, characteristics and repercussions of file unavailability. The study confirms the conclusion from previous works that seeders have a significant impact on both performance and availability. However, we also present some crucial new findings: (i) the presence of seeders is not the sole factor involved in file availability, (ii) 23.5 of nodes that operate in seedless torrents can finish their downloads, and (iii) BitTorrent availability is discontinuous, operating in cycles of temporary unavailability.",
"BitTorrent, the immensely popular file swarming system, suffers a fundamental problem: unavailability. Although swarming scales well to tolerate flash crowds for popular content, it is less useful for unpopular or rare files as peers arriving after the initial rush find the content unavailable. Our primary contribution is a model to quantify content availability in swarming systems. We use the model to analyze the availability and the performance implications of bundling, a strategy commonly adopted by many BitTorrent publishers today. We find that even a limited amount of bundling exponentially reduces content unavailability. Furthermore, for swarms with highly unavailable publishers, the availability gain of bundling can result in a net improvement in download time, i.e., peers obtain more content in less time. We empirically confirm the model's conclusions through experiments on PlanetLab using the mainline BitTorrent client.",
"In this paper, we develop simple models to study the performance of BitTorrent, a second generation peer-to-peer (P2P) application. We first present a simple fluid model and study the scalability, performance and efficiency of such a file-sharing mechanism. We then consider the built-in incentive mechanism of BitTorrent and study its effect on network performance. We also provide numerical results based on both simulations and real traces obtained from the Internet."
]
} |
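The rarest-first replication policy that this record's abstract builds on can be sketched in a few lines. A minimal, hypothetical piece-selection routine (not the protocol's full choking or request logic; names are invented):

```python
from collections import Counter

def rarest_first(peer_bitfields, have):
    """Pick the next piece to request under rarest-first.

    peer_bitfields: list of sets, the pieces each connected peer holds.
    have: set of pieces we already hold.
    Returns the missing piece with the fewest copies in the swarm
    (ties broken by piece index), or None if nothing is available.
    """
    counts = Counter()
    for pieces in peer_bitfields:
        counts.update(pieces)                 # count copies of each piece
    candidates = [(counts[p], p) for p in counts if p not in have]
    return min(candidates)[1] if candidates else None
```

Replicating the least-available piece first is exactly the behaviour whose effect on torrent availability the diffusion model above tries to capture.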
1409.4297 | 61396054 | In 2013 Intel introduced the Xeon Phi, a new parallel co-processor board. The Xeon Phi is a cache-coherent many-core shared memory architecture claiming CPU-like versatility, programmability, high performance, and power efficiency. The first published micro-benchmark studies indicate that many of Intel's claims appear to be true. The current paper is the first study on the Phi of a complex artificial intelligence application. It contains an open source MCTS application for playing tournament quality Go (an oriental board game). We report the first speedup figures for up to 240 parallel threads on a real machine, allowing a direct comparison to previous simulation studies. After a substantial amount of work, we observed that performance scales well up to 32 threads, largely confirming previous simulation results of this Go program, although the performance surprisingly deteriorates between 32 and 240 threads. Furthermore, we report (1) unexpected performance anomalies between the Xeon Phi and Xeon CPU for small problem sizes and small numbers of threads, and (2) that performance is sensitive to scheduling choices. Achieving good performance on the Xeon Phi for complex programs is not straightforward; it requires a deep understanding of (1) search patterns, (2) of scheduling, and (3) of the architecture and its many cores and caches. In practice, the Xeon Phi is less straightforward to program for than originally envisioned by Intel. | Below we review related work on MCTS parallelizations. The four major parallelization methods for MCTS are leaf parallelization, root parallelization, tree parallelization @cite_16 , and transposition table driven work scheduling (TDS) based approaches @cite_19 . Of these, tree parallelization is the method most often used on shared memory machines. It is the method used in Fuego . In tree parallelization one MCTS tree is shared among several threads that are performing simultaneous searches @cite_16 . 
The main challenge in this method is the use of data locks to prevent data corruption. Figure shows the tree parallelization algorithm with local locks. A lock-free implementation of this algorithm addressed the aforementioned problem with better scaling than the locked approach @cite_8 . There is also a case study that reports good performance of a (non-MCTS) Monte Carlo simulation on the Xeon Phi co-processor @cite_18 . | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_16",
"@cite_8"
],
"mid": [
"2163413133",
"",
"1573483709",
"1528097685"
],
"abstract": [
"Monte-Carlo Tree Search (MCTS) is remarkably successful in two-player games, but parallelizing MCTS has been notoriously difficult to scale well, especially in distributed environments. For a distributed parallel search, transposition-table driven scheduling (TDS) is known to be efficient in several domains. We present a massively parallel MCTS algorithm, that applies the TDS parallelism to the Upper Confidence bound Applied to Trees (UCT) algorithm, which is the most representative MCTS algorithm. To drastically decrease communication overhead, we introduce a reformulation of UCT called Depth-First UCT. The parallel performance of the algorithm is evaluated on clusters using up to 1,200 cores in artificial game-trees. We show that this approach scales well, achieving 740-fold speedups in the best case.",
"",
"Monte-Carlo Tree Search (MCTS) is a new best-first search method that started a revolution in the field of Computer Go. Parallelizing MCTS is an important way to increase the strength of any Go program. In this article, we discuss three parallelization methods for MCTS: leaf parallelization, root parallelization, and tree parallelization. To be effective tree parallelization requires two techniques: adequately handling of (1) local mutexesand (2) virtual loss. Experiments in 13×13 Go reveal that in the program Mango root parallelization may lead to the best results for a specific time setting and specific program parameters. However, as soon as the selection mechanism is able to handle more adequately the balance of exploitation and exploration, tree parallelization should have attention too and could become a second choice for parallelizing MCTS. Preliminary experiments on the smaller 9×9 board provide promising prospects for tree parallelization.",
"With the recent success of Monte-Carlo tree search algorithms in Go and other games, and the increasing number of cores in standard CPUs, the efficient parallelization of the search has become an important issue. We present a new lock-free parallel algorithm for Monte-Carlo tree search which takes advantage of the memory model of the IA-32 and Intel-64 CPU architectures and intentionally ignores rare faulty updates of node values. We show that this algorithm significantly improves the scalability of the Fuego Go program."
]
} |
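The locked tree-parallelization bookkeeping described above, threads sharing one tree and applying a virtual loss during selection so concurrent threads are steered to different branches, can be sketched as a toy (this is an illustrative simplification, not Fuego's implementation; a single parent lock guards the child statistics and all names are invented):

```python
import math
import threading

class Node:
    """A shared MCTS node for tree parallelization with local locks."""
    def __init__(self):
        self.lock = threading.Lock()
        self.visits = 0
        self.wins = 0.0
        self.children = {}   # move -> Node

    def select_child(self, c=1.4, virtual_loss=1):
        # Pick the child maximising UCT, then immediately charge a
        # virtual loss so concurrent threads explore other branches.
        with self.lock:
            total = sum(ch.visits for ch in self.children.values()) or 1
            move, child = max(
                self.children.items(),
                key=lambda mc: (mc[1].wins / (mc[1].visits or 1)
                                + c * math.sqrt(math.log(total) / (mc[1].visits or 1))))
            child.visits += virtual_loss      # provisional visit (virtual loss)
            return move, child

    def backpropagate(self, child, reward, virtual_loss=1):
        with self.lock:
            # Replace the provisional virtual loss with the real outcome.
            child.visits += 1 - virtual_loss
            child.wins += reward
```

A lock-free variant, as in the cited work, would drop the mutex and tolerate rare lost updates to the counters.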
1409.4297 | 61396054 | In 2013 Intel introduced the Xeon Phi, a new parallel co-processor board. The Xeon Phi is a cache-coherent many-core shared memory architecture claiming CPU-like versatility, programmability, high performance, and power efficiency. The first published micro-benchmark studies indicate that many of Intel's claims appear to be true. The current paper is the first study on the Phi of a complex artificial intelligence application. It contains an open source MCTS application for playing tournament quality Go (an oriental board game). We report the first speedup figures for up to 240 parallel threads on a real machine, allowing a direct comparison to previous simulation studies. After a substantial amount of work, we observed that performance scales well up to 32 threads, largely confirming previous simulation results of this Go program, although the performance surprisingly deteriorates between 32 and 240 threads. Furthermore, we report (1) unexpected performance anomalies between the Xeon Phi and Xeon CPU for small problem sizes and small numbers of threads, and (2) that performance is sensitive to scheduling choices. Achieving good performance on the Xeon Phi for complex programs is not straightforward; it requires a deep understanding of (1) search patterns, (2) of scheduling, and (3) of the architecture and its many cores and caches. In practice, the Xeon Phi is less straightforward to program for than originally envisioned by Intel. | @cite_6 propose a parallel MCTS method for distributed memory systems called . @cite_19 describe a parallelization approach based on TDS @cite_3 @cite_10 for MCTS called . There are some attempts to parallelize MCTS on accelerator processors including GPU @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_10"
],
"mid": [
"2148036086",
"1560689031",
"2038245069",
"2163413133",
"2157591264"
],
"abstract": [
"Monte Carlo Tree Search (MCTS) is a method for making optimal decisions in artificial intelligence (AI) problems, typically move planning in combinatorial games. It combines the generality of random simulation with the precision of tree search. The motivation behind this work is caused by the emerging GPU-based systems and their high computational potential combined with relatively low power usage compared to CPUs. As a problem to be solved I chose to develop an AI GPU(Graphics Processing Unit)-based agent in the game of Reversi (Othello) which provides a sufficiently complex problem for tree searching with non-uniform structure and an average branching factor of over 8. I present an efficient parallel GPU MCTS implementation based on the introduced 'block-parallelism' scheme which combines GPU SIMD thread groups and performs independent searches without any need of intra-GPU or inter-GPU communication. I compare it with a simple leaf parallel scheme which implies certain performance limitations. The obtained results show that using my GPU MCTS implementation on the TSUBAME 2.0 system one GPU can be compared to 100-200 CPU threads depending on factors such as the search time and other MCTS parameters in terms of obtained results. I propose and analyze simultaneous CPU GPU execution which improves the overall result.",
"This paper introduces a new scheduling algorithm for parallel single-agent search, transposition table driven work scheduling, that places the transposition table at the heart of the parallel work scheduling. The scheme results in less synchronization overhead, less processor idle time, and less redundant search effort. Measurements on a 128-processor parallel machine show that the scheme achieves nearly-optimal performance and scales well. The algorithm performs a factor of 2.0 to 13.7 times better than traditional work-stealing-based schemes.",
"Monte Carlo tree search (MCTS) has brought about great success regarding the evaluation of stochastic and deterministic games in recent years. We present and empirically analyze a data-driven parallelization approach for MCTS targeting large HPC clusters with Infiniband interconnect. Our implementation is based on OpenMPI and makes extensive use of its RDMA based asynchronous tiny message communication capabilities for effectively overlapping communication and computation. We integrate our parallel MCTS approach termed UCT-Treesplit in our state-of-the-art Go engine Gomorra and measure its strengths and limitations in a real-world setting. Our extensive experiments show that we can scale up to 128 compute nodes and 2048 cores in self-play experiments and, furthermore, give promising directions for additional improvement. The generality of our parallelization approach advocates its use to significantly improve the search quality of a huge number of current MCTS applications.",
"Monte-Carlo Tree Search (MCTS) is remarkably successful in two-player games, but parallelizing MCTS has been notoriously difficult to scale well, especially in distributed environments. For a distributed parallel search, transposition-table driven scheduling (TDS) is known to be efficient in several domains. We present a massively parallel MCTS algorithm, that applies the TDS parallelism to the Upper Confidence bound Applied to Trees (UCT) algorithm, which is the most representative MCTS algorithm. To drastically decrease communication overhead, we introduce a reformulation of UCT called Depth-First UCT. The parallel performance of the algorithm is evaluated on clusters using up to 1,200 cores in artificial game-trees. We show that this approach scales well, achieving 740-fold speedups in the best case.",
"This paper discusses a new work-scheduling algorithm for parallel search of single-agent state spaces, called transposition-table-driven work scheduling, that places the transposition table at the heart of the parallel work scheduling. The scheme results in less synchronization overhead, less processor idle time, and less redundant search effort. Measurements on a 128-processor parallel machine show that the scheme achieves close-to-linear speedups; for large problems the speedups are even superlinear due to better memory usage. On the same machine, the algorithm is 1.6 to 12.9 times faster than traditional work-stealing-based schemes."
]
} |
1409.4297 | 61396054 | In 2013 Intel introduced the Xeon Phi, a new parallel co-processor board. The Xeon Phi is a cache-coherent many-core shared memory architecture claiming CPU-like versatility, programmability, high performance, and power efficiency. The first published micro-benchmark studies indicate that many of Intel's claims appear to be true. The current paper is the first study on the Phi of a complex artificial intelligence application. It contains an open source MCTS application for playing tournament quality Go (an oriental board game). We report the first speedup figures for up to 240 parallel threads on a real machine, allowing a direct comparison to previous simulation studies. After a substantial amount of work, we observed that performance scales well up to 32 threads, largely confirming previous simulation results of this Go program, although the performance surprisingly deteriorates between 32 and 240 threads. Furthermore, we report (1) unexpected performance anomalies between the Xeon Phi and Xeon CPU for small problem sizes and small numbers of threads, and (2) that performance is sensitive to scheduling choices. Achieving good performance on the Xeon Phi for complex programs is not straightforward; it requires a deep understanding of (1) search patterns, (2) of scheduling, and (3) of the architecture and its many cores and caches. In practice, the Xeon Phi is less straightforward to program for than originally envisioned by Intel. | Segal reports the scaling of tree parallelization with virtual loss in Fuego for different numbers of threads and time controls on a simulated idealized shared-memory system @cite_23 . He finds that strength of play increases asymptotically as resources increase (more time or more threads). A near-perfect speedup is reported for 64 threads at 60 minutes per game.
Segal suggests that speedup starts decreasing beyond 64 threads, although, with large time settings, further scaling to 512 threads still shows performance increases. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1509593372"
],
"abstract": [
"The parallelization of MCTS across multiple-machines has proven surprisingly difficult. The limitations of existing algorithms were evident in the 2009 Computer Olympiad where ZEN using a single fourcore machine defeated both Fuego with ten eight-core machines, and Mogo with twenty thirty-two core machines. This paper investigates the limits of parallel MCTS in order to understand why distributed parallelism has proven so difficult and to pave the way towards future distributed algorithms with better scaling. We first analyze the single-threaded scaling of Fuego and find that there is an upper bound on the play-quality improvements which can come from additional search. We then analyze the scaling of an idealized N-core shared memory machine to determine the maximum amount of parallelism supported by MCTS. We show that parallel speedup depends critically on how much time is given to each player. We use this relationship to predict parallel scaling for time scales beyond what can be empirically evaluated due to the immense computation required. Our results show that MCTS can scale nearly perfectly to at least 64 threads when combined with virtual loss, but without virtual loss scaling is limited to just eight threads. We also find that for competition time controls scaling to thousands of threads is impossible not necessarily due to MCTS not scaling, but because high levels of parallelism can start to bump up against the upper performance bound of FUEGO itself."
]
} |
1409.4297 | 61396054 | In 2013 Intel introduced the Xeon Phi, a new parallel co-processor board. The Xeon Phi is a cache-coherent many-core shared memory architecture claiming CPU-like versatility, programmability, high performance, and power efficiency. The first published micro-benchmark studies indicate that many of Intel's claims appear to be true. The current paper is the first study on the Phi of a complex artificial intelligence application. It contains an open source MCTS application for playing tournament quality Go (an oriental board game). We report the first speedup figures for up to 240 parallel threads on a real machine, allowing a direct comparison to previous simulation studies. After a substantial amount of work, we observed that performance scales well up to 32 threads, largely confirming previous simulation results of this Go program, although the performance surprisingly deteriorates between 32 and 240 threads. Furthermore, we report (1) unexpected performance anomalies between the Xeon Phi and Xeon CPU for small problem sizes and small numbers of threads, and (2) that performance is sensitive to scheduling choices. Achieving good performance on the Xeon Phi for complex programs is not straightforward; it requires a deep understanding of (1) search patterns, (2) of scheduling, and (3) of the architecture and its many cores and caches. In practice, the Xeon Phi is less straightforward to program for than originally envisioned by Intel. | @cite_24 evaluate tree parallelization with virtual loss and local locks on a 16-core shared-memory system. The algorithm shows an eight-fold speedup with 16 threads. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2101101673"
],
"abstract": [
"FUEGO is both an open-source software framework and a state-of-the-art program that plays the game of Go. The framework supports developing game engines for full-information two-player board games, and is used successfully in a substantial number of projects. The FUEGO Go program became the first program to win a game against a top professional player in 9 × 9 Go. It has won a number of strong tournaments against other programs, and is competitive for 19 × 19 as well. This paper gives an overview of the development and current state of the FUEGO project. It describes the reusable components of the software framework and specific algorithms used in the Go engine."
]
} |
1409.4573 | 2950110473 | We provide theoretical and empirical evidence for a type of asymmetry between causes and effects that is present when these are related via linear models contaminated with additive non-Gaussian noise. Assuming that the causes and the effects have the same distribution, we show that the distribution of the residuals of a linear fit in the anti-causal direction is closer to a Gaussian than the distribution of the residuals in the causal direction. This Gaussianization effect is characterized by reduction of the magnitude of the high-order cumulants and by an increment of the differential entropy of the residuals. The problem of non-linear causal inference is addressed by performing an embedding in an expanded feature space, in which the relation between causes and effects can be assumed to be linear. The effectiveness of a method to discriminate between causes and effects based on this type of asymmetry is illustrated in a variety of experiments using different measures of Gaussianity. The proposed method is shown to be competitive with state-of-the-art techniques for causal inference. | The Gaussianity of residuals was first employed for causal inference by . These authors analyze auto-regressive (AR) processes and show that an asymmetry similar to the one described in this paper can be used to determine the temporal direction of a time series in the presence of non-Gaussian noise. Namely, when fitting an AR process to a reversed time series, the residuals obtained follow a distribution that is closer to a Gaussian distribution. Nevertheless, unlike the work described here, the method proposed by @cite_2 cannot be used to tackle multidimensional or non-linear causal inference problems. In their work, show some advantages of using statistical tests based on measures of Gaussianity to determine the temporal direction of a time series, as a practical alternative to statistical tests based on the independence of the cause and the residual. The motivation for these advantages is that the former tests are one-sample tests while the latter ones are two-sample tests. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2142435179"
],
"abstract": [
"We conjecture that the distribution of the time-reversed residuals of a causal linear process is closer to a Gaussian than the distribution of the noise used to generate the process in the forward direction. This property is demonstrated for causal AR(1) processes assuming that all the cumulants of the distribution of the noise are defined. Based on this observation, it is possible to design a decision rule for detecting the direction of time series that can be described as linear processes: The correct direction (forward in time) is the one in which the residuals from a linear fit to the time series are less Gaussian. A series of experiments with simulated and real-world data illustrate the superior results of the proposed rule when compared with other state-of-the-art methods based on independence tests."
]
} |
1409.4573 | 2950110473 | We provide theoretical and empirical evidence for a type of asymmetry between causes and effects that is present when these are related via linear models contaminated with additive non-Gaussian noise. Assuming that the causes and the effects have the same distribution, we show that the distribution of the residuals of a linear fit in the anti-causal direction is closer to a Gaussian than the distribution of the residuals in the causal direction. This Gaussianization effect is characterized by reduction of the magnitude of the high-order cumulants and by an increment of the differential entropy of the residuals. The problem of non-linear causal inference is addressed by performing an embedding in an expanded feature space, in which the relation between causes and effects can be assumed to be linear. The effectiveness of a method to discriminate between causes and effects based on this type of asymmetry is illustrated in a variety of experiments using different measures of Gaussianity. The proposed method is shown to be competitive with state-of-the-art techniques for causal inference. | The problem of causal inference under continuous-valued data has also been analyzed by @cite_1 . The authors propose a method called LINGAM that can identify the causal order of several variables when assuming that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) the disturbance variables have non-Gaussian distributions with non-zero variances. These assumptions are required because LINGAM relies on the use of Independent Component Analysis (ICA). More specifically, let @math denote a vector that contains the variables we would like to determine the causal order of. LINGAM assumes that @math , where @math is a matrix that can be permuted to strict lower triangularity if one knows the actual causal ordering in @math , and @math is a vector of non-Gaussian independent disturbance variables. Solving for @math , one gets @math , where @math . The @math matrix can be inferred using ICA. Furthermore, given an estimate of @math , @math can be obtained to find the corresponding connection strengths among the observed variables, which can then be used to determine the true causal ordering. LINGAM has been extended to consider linear relations among groups of variables in . | {
"cite_N": [
"@cite_1"
],
"mid": [
"1627085934"
],
"abstract": [
"Finding the structure of a graphical model has been received much attention in many fields. Recently, it is reported that the non-Gaussianity of data enables us to identify the structure of a directed acyclic graph without any prior knowledge on the structure. In this paper, we propose a novel non-Gaussianity based algorithm for more general type of models; chain graphs. The algorithm finds an ordering of the disjoint subsets of variables by iteratively evaluating the independence between the variable subset and the residuals when the remaining variables are regressed on those. However, its computational cost grows exponentially according to the number of variables. Therefore, we further discuss an efficient approximate approach for applying the algorithm to large sized graphs. We illustrate the algorithm with artificial and real-world datasets."
]
} |
1409.4573 | 2950110473 | We provide theoretical and empirical evidence for a type of asymmetry between causes and effects that is present when these are related via linear models contaminated with additive non-Gaussian noise. Assuming that the causes and the effects have the same distribution, we show that the distribution of the residuals of a linear fit in the anti-causal direction is closer to a Gaussian than the distribution of the residuals in the causal direction. This Gaussianization effect is characterized by reduction of the magnitude of the high-order cumulants and by an increment of the differential entropy of the residuals. The problem of non-linear causal inference is addressed by performing an embedding in an expanded feature space, in which the relation between causes and effects can be assumed to be linear. The effectiveness of a method to discriminate between causes and effects based on this type of asymmetry is illustrated in a variety of experiments using different measures of Gaussianity. The proposed method is shown to be competitive with state-of-the-art techniques for causal inference. | A method for causal inference similar to the previous one is described by @cite_0 . These authors also consider that @math and @math fulfil some sort of independence condition, and that this independence condition does not hold for the anti-causal direction. Based on this, they define an uncorrelatedness criterion between @math and @math , and show an asymmetry between the cause and the effect in terms of a certain complexity metric on @math and @math , which is less than the same complexity metric on @math and @math . The complexity metric is calculated in terms of a reproducing kernel Hilbert space embedding (EMD) of probability distributions. Based on the complexity metric, the authors propose an efficient kernel-based algorithm for causal discovery. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2105766378"
],
"abstract": [
"Causal discovery via the asymmetry between the cause and the effect has proved to be a promising way to infer the causal direction from observations. The basic idea is to assume that the mechanism generating the cause distribution px and that generating the conditional distribution py|x correspond to two independent natural processes and thus px and py|x fulfill some sort of independence condition. However, in many situations, the independence condition does not hold for the anticausal direction; if we consider px, y as generated via pypx|y, then there are usually some contrived mutual adjustments between py and px|y. This kind of asymmetry can be exploited to identify the causal direction. Based on this postulate, in this letter, we define an uncorrelatedness criterion between px and py|x and, based on this uncorrelatedness, show asymmetry between the cause and the effect in terms that a certain complexity metric on px and py|x is less than the complexity metric on py and px|y. We propose a Hilbert space embedding-based method EMD an abbreviation for EMbeDding to calculate the complexity metric and show that this method preserves the relative magnitude of the complexity metric. Based on the complexity metric, we propose an efficient kernel-based algorithm for causal discovery. The contribution of this letter is threefold. It allows a general transformation from the cause to the effect involving the noise effect and is applicable to both one-dimensional and high-dimensional data. Furthermore it can be used to infer the causal ordering for multiple variables. Extensive experiments on simulated and real-world data are conducted to show the effectiveness of the proposed method."
]
} |
1409.4095 | 2952454744 | We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of an Automatic Virtual Environment (CAVE) as pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, the background for estimation of the camera pose, necessary for calibrating the sensor system. Experiments suggest a significant gain of coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available. | Normals of surfaces with hybrid reflection properties can be recovered from purely radiometric considerations @cite_19 @cite_36 @cite_0 . Only once correspondences are available between pixels and the scene points they portray can tools from geometrical optics be leveraged: identifying a single point light source renders the computation of the light map trivial; then, under known camera motion, an initial point can be expanded into a surface curve by tracking the location where the highlight first appeared @cite_11 . The standard structure-from-motion pipeline can also be enriched so as to explicitly take into account specular reflections of a discrete set of scene points @cite_4 . Zheng and Murata @cite_26 progress from isolated point sources towards a one-dimensional concentric illuminant. The accuracy of their approach, however, remains limited as long as some points on the light source remain indistinguishable. Circular reflection lines nevertheless suffice for special applications such as measuring the cornea in human eyes @cite_24 or surface interrogation @cite_28 @cite_34 . Savarese and Perona @cite_37 study reflections of a two-dimensional checkerboard pattern. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_36",
"@cite_28",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_34",
"@cite_11"
],
"mid": [
"",
"2168633204",
"1698682053",
"217731264",
"1995309915",
"2162440529",
"2015578497",
"",
"2170554555",
"1999562192"
],
"abstract": [
"",
"We recover 3D models of objects with specular surfaces. An object is rotated and its continuous images are taken. Circular-shaped light sources that generate conic rays are used to illuminate the rotating object in such a way that highlighted stripes can be observed on most of the specular surfaces. Surface shapes can be computed from the motions of highlights in the continuous images; either specular motion stereo or single specular trace mode can be used. When the lights are properly set, each point on the object can be highlighted during the rotation. The shape for each rotation plane is measured independently using its corresponding epipolar plane image. A 3D shape model is subsequently reconstructed by combining shapes at different rotation planes. Computing a shape is simple and requires only the motion of highlight on each rotation plane. The novelty of this paper is the complete modeling of a general type of specular objects that has not been accomplished before.",
"Looking around in our every day environment, many of the encountered objects are specular to some degree. Actively using this fact when reconstructing objects from image sequences is the scope of the shape from specularities problem. One reason why this problem is important is that standard structure from motion techniques fail when the object surfaces are specular. Here this problem is addressed by estimating surface shape using information from the specular reflections. A specular reflection gives constraints on the surface normal. The approach differs significantly from many earlier shapes from specularities methods since the normal data used is sparse. The main contribution is to give a solid foundation for shape from specularities problems. Estimation of surface shape using reflections is formulated as a variational problem and the surface is represented implicitly using a level set formulation. A functional incorporating all surface constraints is proposed and the corresponding level set motion PDE is explicitly derived. This motion is then proven to minimize the functional. As a part of this functional a variational approach to normal alignment is proposed and analyzed. Also novel methods for implicit surface interpolation to sparse point sets are presented together with a variational analysis. Experiments on both real and synthetic data support the proposed method.",
"",
"We consider the problem of capturing shape characteristics on specular (refractive and reflective) surfaces that are nearly flat. These surfaces are difficult to model using traditional methods based on reconstructing the surface positions and normals. These lower-order shape attributes provide little information to identify important surface characteristics related to distortions. In this paper, we present a framework for recovering the higher-order geometry attributes of specular surfaces. Our method models local reflections and refractions in terms of a special class of multiperspective cameras called the general linear cameras (GLCs). We then develop a new theory that correlates the higher-order differential geometry attributes with the local GLCs. Specifically, we show that Gaussian and mean curvature can be directly derived from the camera intrinsics of the local GLCs. We validate this theory on both synthetic and real-world specular surfaces. Our method places a known pattern in front of a reflective surface or beneath a refractive surface and captures a distorted image on the surface. We then compute the optimal GLC using a sparse set of correspondences and recover the curvatures from the GLC. Experiments demonstrate that our methods are robust and highly accurate.",
"In many remote sensing and machine vision applications, the shape of a specular surface such as water, glass, or polished met al must be determined instantaneously and under natural lighting conditions. Most image analysis techniques, however, assume surface reflectance properties or lighting conditions that are incompatible with these situations. To retrieve the shape of smooth specular surfaces, a technique known as specular surface stereo was developed. The method analyzes multiple images of a surface and finds a surface shape that results in a set of synthetic images that match the observed ones. An image synthesis model is used to predict image irradiance values as a function of the shape and reflectance properties of the surface, camera geometry, and radiance distribution of the illumination. The specular surface stereo technique was tested by processing four numerical simulations-a water surface illuminated by a low- and high-contrast extended light source, and a mirrored surface illuminated by a low- and high-contrast extended light source. Under these controlled circumstances, the recovered surface shape showed good agreement with the known input. >",
"The orientation of patches on the surface of an object can be determined from multiple images taken with different illumination, but from the same viewing position. The method, referred to as photometric stereo, can be implemented using table lookup based on numerical inversion of reflectance maps. Here we concentrate on objects with specularly reflecting surfaces, since these are of importance in industrial applications. Previous methods, intended for diffusely reflecting surfaces, employed point source illumination, which is quite unsuitable in this case. Instead, we use a distributed light source obtained by uneven illumination of a diffusely reflecting planar surface. Experimental results are shown to verify analytic expressions obtained for a method employing three light source distributions.",
"",
"We present a new shape-from-distortion framework for recovering specular (reflective refractive) surfaces. While most existing approaches rely on accurate correspondences between 2D pixels and 3D points, we focus on analyzing the curved images of 3D lines which we call curved line images or CLIs. Our approach models CLIs of local reflections or refractions using the recently proposed general linear cameras (GLCs). We first characterize all possible CLIs in a GLC. We show that a 3D line will appear as a conic in any GLC. For a fixed GLC, the conic type is invariant to the position and orientation of the line and is determined by the GLC parameters. Furthermore, CLIs under single reflection refraction can only be lines or hyperbolas. Based on our new theory, we develop efficient algorithms to use multiple CLIs to recover the GLC camera parameters. We then apply the curvature-GLC theory to derive the Gaussian and mean curvatures from the GLC intrinsics. This leads to a complete distortion-based reconstruction framework. Unlike conventional correspondence-based approaches that are sensitive to image distortions, our approach benefits from the CLI distortions. Finally, we demonstrate applying our framework for recovering curvature fields on both synthetic and real specular surfaces.",
"Abstract This paper examines the information available from the motion of specularities (highlights) due to known movements by the viewer. In particular two new results are presented. First, it is shown that for local viewer movements the concave convex surface ambiguity can be resolved without knowledge of the light source position. Second, the authors investigate what further geometrical information is obtained under extended viewer movements, from tracked motion of a specularity. The reflecting surface is shown to be constrained to coincide with a certain curve. However, there is some ambiguity — the curve is a member of a one-parameter family. Fixing one point uniquely determines the curve."
]
} |
1409.4095 | 2952454744 | We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of an Automatic Virtual Environment (CAVE) as pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, the background for estimation of the camera pose, necessary for calibrating the sensor system. Experiments suggest a significant gain of coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available. | Controllable illuminants were introduced to boost reliability and density of correspondences: The very first active deflectometry setup -- due to @cite_38 -- contained an array of light-emitting diodes (LEDs) which could be switched on sequentially to detect them in the camera image. The acquisition time can be reduced by showing binary codes in parallel @cite_25 . Both papers address the ambiguity by assuming quasi-parallel illumination. A large body of literature on the subject exists in the field of optical metrology starting with @cite_39 that suggests improvements if the LED array is replaced by a commodity computer monitor. @cite_10 describe the theoretical limits of the method involving phase-shifted sine patterns as codes. Light map measurements can be further enhanced by color displays @cite_13 . An active variant of Savarese's method is presented in @cite_7 . | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_10",
"@cite_39",
"@cite_13",
"@cite_25"
],
"mid": [
"2042623992",
"1997831592",
"",
"",
"2010544530",
"2110598673"
],
"abstract": [
"An approach to illumination and imaging of specular surfaces that yields three-dimensional shape information is described. The structured highlight approach uses a scanned array of point sources and images of the resulting reflected highlights to compute local surface height and orientation. A prototype structured highlight inspection system, called SHINY, has been implemented. SHINY demonstrates the determination of surface shape for several test objects including solder joints. The current SHINY system makes the distant-source assumption and requires only one camera. A stereo structured highlight system using two cameras is proposed to determine surface-element orientation for objects in a much larger field of view. Analysis and description of the algorithms are included. The proposed structured highlight techniques are promising for many industrial tasks. >",
"In this work, we recover the 3D shape of mirrors, sunglasses, and stainless steel implements. A computer monitor displays several images of parallel stripes, each image at a different angle. Reflections of these stripes in a mirroring surface are captured by the camera. For every image point, the direction of the displayed stripes and their reflections in the image are related by a 1D homography matrix, computed with a robust version of the statistically accurate heteroscedastic approach. By focusing on a sparse set of image points for which monitor-image correspondence is computed, the depth and the local shape may be estimated from these homographies. The depth estimation relies on statistically correct minimization and provides accurate, reliable results. Even for the image points where the depth estimation process is inherently unstable, we are able to characterize this instability and develop an algorithm to detect and correct it. After correcting the instability, dense surface recovery of mirroring objects is performed using constrained interpolation, which does not simply interpolate the surface depth values but also uses the locally computed 1D homographies to solve for the depth, the correspondence, and the local surface shape. The method was implemented and the shape of several objects was densely recovered at submillimeter accuracy.",
"",
"",
"Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.",
"The structured highlight inspection method uses an array of point sources to illuminate a specular object surface. The point sources are scanned, and highlights on the object surface resulting from each source are used to derive local surface orientation information. The extended Gaussian image (EGI) is obtained by placing at each point on a Gaussian sphere a mass proportional to the area of elements on the object surface that have a specific orientation. The EGI summarizes shape properties of the object surface and can be efficiently calculated from structured highlight data without surface reconstruction. Features of the estimated EGI including areas, moments, principal axes, homogeneity measures, and polygonality can be used as the basis for classification and inspection. The structured highlight inspection system (SHINY) has been implemented using a hemisphere of 127 point sources. The SHINY system uses a binary coding scheme to make the scanning of point sources efficient. Experiments have used the SHINY system and EGI features for the inspection and classification of surface-mounted-solder joints. >"
]
} |
1409.4629 | 2952579041 | Arguments about the safety, security, and correctness of a complex system are often made in the form of an assurance case. An assurance case is a structured argument, often represented with a graphical interface, that presents and supports claims about a system's behavior. The argument may combine different kinds of evidence to justify its top level claim. While assurance cases deliver some level of guarantee of a system's correctness, they lack the rigor that proofs from formal methods typically provide. Furthermore, changes in the structure of a model during development may result in inconsistencies between a design and its assurance case. Our solution is a framework for automatically generating assurance cases based on 1) a system model specified in an architectural design language, 2) a set of logical rules expressed in a domain specific language that we have developed, and 3) the results of other formal analyses that have been run on the model. We argue that the rigor of these automatically generated assurance cases exceeds those of traditional assurance case arguments because of their more formal logical foundation and direct connection to the architectural model. | As discussed in , assurance cases have a large and well-developed literature. Patterns for assurance case argumentation have been considered in @cite_4 @cite_12 @cite_19 @cite_16 , and common fallacies in assurance cases are considered in @cite_11 . An approach to apply and evolve assurance cases as part of system design is found in @cite_5 , which is similar to the process we have used in applying the Resolute tools. A comparison of assurance cases to prescriptive standards such as DO-178B/C is provided by @cite_8 . Recent work on confidence cases as a means of assessing assurance case arguments is found in @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_8",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"1538014561",
"",
"177970991",
"2122770826",
"1564394353",
"2542997477",
"1513515524"
],
"abstract": [
"",
"This paper presents an approach to the reuse of common structures in safety case arguments through their documentation as ’Safety Case Patterns’. Problems with the existing, informal and ad-hoc approaches to safety case material reuse are highlighted. We argue that through explicit capture and documentation of reusable safety case elements as patterns, the process of safety case construction and reuse can be made more systematic. For the description of patterns a safety case pattern language and a graphical pattern notation (based on the Goal Structuring Notation) are presented. Using this framework we briefly describe a number of example argument patterns. A fully documented example pattern is included as an appendix to this paper.",
"",
"By capturing common structures of successful arguments, safety case patterns provide an approach for reusing strategies for reasoning about safety. In the current state of the practice, patterns exist as descriptive specifications with informal semantics, which not only offer little opportunity for more sophisticated usage such as automated instantiation, composition and manipulation, but also impede standardization efforts and tool interoperability. To address these concerns, this paper gives (i) a formal definition for safety case patterns, clarifying both restrictions on the usage of multiplicity and well-founded recursion in structural abstraction, (ii) formal semantics to patterns, and (iii) a generic data model and algorithm for pattern instantiation. We illustrate our contributions by application to a new pattern, the requirements breakdown pattern, which builds upon our previous work.",
"Assurance based development (ABD) is the synergistic construction of a critical computing system and an assurance case that sets out the dependability claims for the system and argues that the available evidence justifies those claims. Co-developing the system and its assurance case helps software developers to make technology choices that address the specific dependability goal of each component. This approach gives developers: (1) confidence that the technologies selected will support the system's dependability goal and (2) flexibility to deploy expensive technology, such as formal verification, only on components whose assurance needs demand it. ABD simplifies the detection - and thereby avoidance - of potential assurance difficulties as they arise, rather than after development is complete. In this paper, we present ABD together with a case study of its use.",
"Software safety cases encourage developers to carry out only those safety activities that actually reduce risk. In practice this is not always achieved. To help remedy this, the SSEI at the University of York has developed a set of software safety argument patterns. This paper reports on using the patterns in two real-world case studies, evaluating the patterns' use against criteria that includes flexibility, ability to reveal assurance decits and ability to focus the case on software contributions to hazards. The case studies demonstrated that the safety patterns can be applied to a range of system types regardless of the stage or type of development process, that they help limit safety case activities to those that are significant for achieving safety, and that they help developers nd assurance deficits in their safety case arguments. The case study reports discuss the difficulties of applying the patterns, particularly in the case of users who are unfamiliar with the approach, and the authors recognise in response the need for better instructional material. But the results show that as part of the development of best practice in safety, the patterns promise signicant benets to industrial safety case creators.",
"Safety analysis is an essential part of the development process of complex systems. However, decisions that are based on flawed safety assessment models, or models used beyond their envelope of validity can negatively impact safety design choices, the effectiveness of certification, and operational practice. Therefore, the justification of assumptions, data sources and analytical methods is necessary for appropriate use of these analysis results. Currently, most of the existing guidance on the evaluation or assessment of safety analysis is concerned with the human aspects of safety reviews. However, there are few recommendations as to how to justify a collection of safety assessment models as part of forming a coherent argument, especially for safety assessments performed using novel safety modelling techniques (such as Failure Logic Modelling). This paper examines the concerns for model validation activities in general and presents an exemplar safety case pattern for the adequacy of safely assessment models. The justification concerns of safely assessment models have been developed in order to provide inspiration and a starting point for future safety case developments utilising novel safety assessment models. (6 pages)",
"Safety cases are gaining acceptance as assurance vehicles for safety-related systems. A safety case documents the evidence and argument that a system is safe to operate; however, logical fallacies in the underlying argument might undermine a system’s safety claims. Removing these fallacies is essential to reduce the risk of safety-related system failure. We present a taxonomy of common fallacies in safety arguments that is intended to assist safety professionals in avoiding and detecting fallacious reasoning in the arguments they develop and review. The taxonomy derives from a survey of general argument fallacies and a separate survey of fallacies in real-world safety arguments. Our taxonomy is specific to safety argumentation, and it is targeted at professionals who work with safety arguments but may lack formal training in logic or argumentation. We discuss the rationale for the selection and categorization of fallacies in the taxonomy. In addition to its applications to the development and review of safety cases, our taxonomy could also support the analysis of system failures and promote the development of more robust safety case patterns."
]
} |
1409.4629 | 2952579041 | Arguments about the safety, security, and correctness of a complex system are often made in the form of an assurance case. An assurance case is a structured argument, often represented with a graphical interface, that presents and supports claims about a system's behavior. The argument may combine different kinds of evidence to justify its top level claim. While assurance cases deliver some level of guarantee of a system's correctness, they lack the rigor that proofs from formal methods typically provide. Furthermore, changes in the structure of a model during development may result in inconsistencies between a design and its assurance case. Our solution is a framework for automatically generating assurance cases based on 1) a system model specified in an architectural design language, 2) a set of logical rules expressed in a domain specific language that we have developed, and 3) the results of other formal analyses that have been run on the model. We argue that the rigor of these automatically generated assurance cases exceeds those of traditional assurance case arguments because of their more formal logical foundation and direct connection to the architectural model. | The Evidential Tool Bus (ETB) @cite_10 is very similar in syntax and semantics to Resolute. It is supported by a Datalog-style logic and is designed to combine evidence from a variety of sources. However, the focus of the ETB is on distribution and on provenance ---that is, to log the sequence of tool invocations that were performed to solve the query. It uses timestamps to determine which analyses are out of date with respect to the current development artifacts and to only re-run those analyses that are not synchronized with the current development artifacts. In addition, it is designed to perform distributed execution of analyses. Analysis tool plug-ins are used to execute the analysis tools within ETB. 
ETB is designed to be tool and model agnostic, and is therefore not integrated with a model of the system architecture. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2238711819"
],
"abstract": [
"Formal and semi-formal tools are now being used in large projects both for development and certification. A typical project integrates many diverse tools such as static analyzers, model checkers, test generators, and constraint solvers. These tools are usually integrated in an ad hoc manner. There is, however, a need for a tool integration framework that can be used to systematically create workflows, to generate claims along with supporting evidence, and to maintain the claims and evidence as the inputs change. We present the Evidential Tool Bus ETB as a tool integration framework for constructing claims supported by evidence. ETB employs a variant of Datalog as a met alanguage for representing claims, rules, and evidence, and as a scripting language for capturing distributed workflows. ETB can be used to develop assurance cases for certifying complex systems that are developed and assured using a range of tools. We describe the design and prototype implementation of the ETB architecture, and present examples of formal verification workflows defined using ETB."
]
} |
1409.4629 | 2952579041 | Arguments about the safety, security, and correctness of a complex system are often made in the form of an assurance case. An assurance case is a structured argument, often represented with a graphical interface, that presents and supports claims about a system's behavior. The argument may combine different kinds of evidence to justify its top level claim. While assurance cases deliver some level of guarantee of a system's correctness, they lack the rigor that proofs from formal methods typically provide. Furthermore, changes in the structure of a model during development may result in inconsistencies between a design and its assurance case. Our solution is a framework for automatically generating assurance cases based on 1) a system model specified in an architectural design language, 2) a set of logical rules expressed in a domain specific language that we have developed, and 3) the results of other formal analyses that have been run on the model. We argue that the rigor of these automatically generated assurance cases exceeds those of traditional assurance case arguments because of their more formal logical foundation and direct connection to the architectural model. | The work in @cite_31 ties together an assurance case with a model-based notation (Simulink) for the purpose of demonstrating that the Simulink-generated code meets its requirements. This work has many similarities to ours, in that the assurance case is closely tied to the hierarchical structure of the model. It is more rigorous (in that the assurance case is derived from a formal proof) but also much more narrow, corresponding to a component in the system assurance cases that we create. The two approaches could perhaps be integrated to provide more rigorous safety cases for a wider class of software developed in a model-based environment. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2150543751"
],
"abstract": [
"Model-based development and automated code generation are increasingly used for actual production code, in particular in mathematical and engineering domains. However, since code generators are typically not qualified, there is no guarantee that their output satisfies the system requirements, or is even safe. Here we present an approach to systematically derive safety cases that argue along the hierarchical structure in model-based development. The safety cases are constructed mechanically using a formal analysis, based on automated theorem proving, of the automatically generated code. The analysis recovers the model structure and component hierarchy from the code, providing independent assurance of both code and model. It identifies how the given system safety requirements are broken down into component requirements, and where they are ultimately established, thus establishing a hierarchy of requirements that is aligned with the hierarchical model structure. The derived safety cases reflect the results of the analysis, and provide a high-level argument that traces the requirements on the model via the inferred model structure to the code. We illustrate our approach on flight code generated from hierarchical Simulink models by Real-Time Workshop."
]
} |
1409.4561 | 14769209 | Multi-Agent Reinforcement Learning (MARL) is a widely used technique for optimization in decentralised control problems. However, most applications of MARL are in static environments, and are not suitable when agent behaviour and environment conditions are dynamic and uncertain. Addressing uncertainty in such environments remains a challenging problem for MARL-based systems. The dynamic nature of the environment causes previous knowledge of how agents interact to become outdated. Advanced knowledge of potential changes through prediction significantly supports agents converging to near-optimal control solutions. In this paper we propose P-MARL, a decentralised MARL algorithm enhanced by a prediction mechanism that provides accurate information regarding up-coming changes in the environment. This prediction is achieved by employing an Artificial Neural Network combined with a Self-Organising Map that detects and matches changes in the environment. The proposed algorithm is validated in a realistic smart-grid scenario, and provides a 92% Pareto efficient solution to an electric vehicle charging problem. | The problem faced is essentially a more complex version of the Distributed Constraint Optimization Problem (DCOP) @cite_12 @cite_10 . While optimal solutions exist for DCOP, these are NP-complete and are not suitable for large scale problems. More than that, a DCOP involving uncertainty arises, precisely due to the uncertainty involved in the environment's next state, which does not pertain to a fully defined problem. This particular type of uncertain DCOP has also been defined as a Distributed Coordination of Exploration and Exploitation (DCEE) problem by @cite_15 , as DCOP under Stochastic Uncertainty (StochDCOP) by @cite_9 or simply dubbed DCOP with uncertainty @cite_14 .
Since the environment is uncertain and non-stationary, agent rewards will continuously change, therefore a trade-off between exploration and exploitation is necessary in order to lead to a sufficiently good solution. As the environment's state is not known ahead of time, and is continuously undergoing change, the design of agent behaviour becomes infeasible. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2307562442",
"2141256287",
"2118826835",
"2951198288",
""
],
"abstract": [
"In this paper, we introduce DCOPs with uncertainty (U-DCOPs), a novel generalisation of the canonical DCOP framework where the outcomes of local functions are represented by random variables, and the global objective is to maximise the expectation of an arbitrary utility function (that represents the agents' risk-profile) applied over the sum of these local functions. We then develop U-GDL, a novel decentralised algorithm derived from Generalised Distributive Law (GDL) that optimally solves U-DCOPs. A key property of U-GDL that we show is necessary for optimality is that it keeps track of multiple non-dominated alternatives, and only discards those that are dominated (i.e. local partial solutions that can never turn into an expected global maximum regardless of the realisation of the random variables). As a direct consequence, we show that applying a standard DCOP algorithm to U-DCOP can result in arbitrarily poor solutions. We empirically evaluate U-GDL to determine its computational overhead and bandwidth requirements compared to a standard DCOP algorithm.",
"In many real-life optimization problems involving multiple agents, the rewards are not necessarily known exactly in advance, but rather depend on sources of exogenous uncertainty. For instance, delivery companies might have to coordinate to choose who should serve which foreseen customer, under uncertainty in the locations of the customers. The framework of Distributed Constraint Optimization under Stochastic Uncertainty was proposed to model such problems; in this paper, we generalize this formalism by introducing the concept of evaluation functions that model various optimization criteria. We take the example of three such evaluation functions, expectation, consensus, and robustness, and we adapt and generalize two previous algorithms accordingly. Our experimental results on a class of Vehicle Routing Problems show that incomplete algorithms are not only cheaper than complete ones (in terms of simulated time, Non-Concurrent Constraint Checks, and information exchange), but they are also often able to find the optimal solution. We also show that exchanging more information about the dependencies of their respective cost functions on the sources of uncertainty can help the agents discover higher-quality solutions.",
"Increasing teamwork between agents typically increases the performance of a multi-agent system, at the cost of increased communication and higher computational complexity. This work examines joint actions in the context of a multi-agent optimization problem where agents must cooperate to balance exploration and exploitation. Surprisingly, results show that increased teamwork can hurt agent performance, even when communication and computation costs are ignored, which we term the team uncertainty penalty. This paper introduces the above phenomena, analyzes it, and presents algorithms to reduce the effect of the penalty in our problem setting.",
"Many multi-agent coordination problems can be represented as DCOPs. Motivated by task allocation in disaster response, we extend standard DCOP models to consider uncertain task rewards where the outcome of completing a task depends on its current state, which is randomly drawn from unknown distributions. The goal of solving this problem is to find a solution for all agents that minimizes the overall worst-case loss. This is a challenging problem for centralized algorithms because the search space grows exponentially with the number of agents and is nontrivial for standard DCOP algorithms we have. To address this, we propose a novel decentralized algorithm that incorporates Max-Sum with iterative constraint generation to solve the problem by passing messages among agents. By so doing, our approach scales well and can solve instances of the task allocation problem with hundreds of agents and tasks.",
""
]
} |
1409.4236 | 2951869972 | We consider systems of @math parallel edge dislocations in a single slip system, represented by points in a two-dimensional domain; the elastic medium is modelled as a continuum. We formulate the energy of this system in terms of the empirical measure of the dislocations, and prove several convergence results in the limit @math . The main aim of the paper is to study the convergence of the evolution of the empirical measure as @math . We consider rate-independent, quasi-static evolutions, in which the motion of the dislocations is restricted to the same slip plane. This leads to a formulation of the quasi-static evolution problem in terms of a modified Wasserstein distance, which is only finite when the transport plan is slip-plane-confined. Since the focus is on interaction between dislocations, we renormalize the elastic energy to remove the potentially large self- or core energy. We prove Gamma-convergence of this renormalized energy, and we construct joint recovery sequences for which both the energies and the modified distances converge. With this augmented Gamma-convergence we prove the convergence of the quasi-static evolutions as @math . | The asymptotic behaviour of the quadratic dislocation energy for edge dislocations was already studied by Garroni, Leoni and Ponsiglione in @cite_12 . One of the main difference with our work is that we consider the reduced energy instead of , and use as main variable the dislocation density rather than the strain. Moreover, while in @cite_12 the authors focus on the self-energy term of in the case of edge dislocations with multiple Burgers vectors, we instead focus on the next term in the expansion, namely the interaction energy, and simplify by restricting to one Burgers vector. A similar analysis as in @cite_12 has been done in @cite_11 without the well-separation assumption that we make (see ). | {
"cite_N": [
"@cite_12",
"@cite_11"
],
"mid": [
"2019335252",
"1976044055"
],
"abstract": [
"We deduce a macroscopic strain gradient theory for plasticity from a model of discrete dislocations. We restrict our analysis to the case of a cylindrical symmetry for the crystal under study, so that the mathematical formulation will involve a two-dimensional variational problem. The dislocations are introduced as point topological defects of the strain fields, for which we compute the elastic energy stored outside the so-called core region.We show that the G-limit of this energy (suitably rescaled), as the core radius tends to zero and the number of dislocations tends to infinity, takes the form @PARASPLIT E = fO (W(se) + f (Curl se)) dx, @PARASPLIT where e represents the elastic part of the macroscopic strain, and Curl se represents the geometrically necessary dislocation density. The plastic energy density f is defined explicitly through an asymptotic cell formula, depending only on the elastic tensor and the class of the admissible Burgers vectors, accounting for the crystalline structure. It turns out to be positively 1-homogeneous, so that concentration on lines is permitted, accounting for the presence of pattern formations observed in crystals such as dislocation walls.",
"This paper deals with the elastic energy induced by systems of straight edge dislocations in the framework of linearized plane elasticity. The dislocations are introduced as point topological defects of the displacement-gradient fields. Following the core radius approach, we introduce a parameter ( > 0 ) representing the lattice spacing of the crystal, we remove a disc of radius ( ) around each dislocation and compute the elastic energy stored outside the union of such discs, namely outside the core region. Then, we analyze the asymptotic behaviour of the elastic energy as ( 0 ) , in terms of Γ-convergence. We focus on the self energy regime of order ( 1 ) ; we show that configurations with logarithmic diverging energy converge, up to a subsequence, to a finite number of multiple dislocations and we compute the corresponding Γ-limit."
]
} |
1409.4236 | 2951869972 | We consider systems of @math parallel edge dislocations in a single slip system, represented by points in a two-dimensional domain; the elastic medium is modelled as a continuum. We formulate the energy of this system in terms of the empirical measure of the dislocations, and prove several convergence results in the limit @math . The main aim of the paper is to study the convergence of the evolution of the empirical measure as @math . We consider rate-independent, quasi-static evolutions, in which the motion of the dislocations is restricted to the same slip plane. This leads to a formulation of the quasi-static evolution problem in terms of a modified Wasserstein distance, which is only finite when the transport plan is slip-plane-confined. Since the focus is on interaction between dislocations, we renormalize the elastic energy to remove the potentially large self- or core energy. We prove Gamma-convergence of this renormalized energy, and we construct joint recovery sequences for which both the energies and the modified distances converge. With this augmented Gamma-convergence we prove the convergence of the quasi-static evolutions as @math . | The focus on the interaction term of the energy for edge dislocations was already present in the work of Cermelli and Leoni @cite_0 . In @cite_0 the authors define the renormalised energy starting from a quadratic dislocation energy, and focus on the interaction term of the energy. They however consider the case of a finite number of dislocations with different Burgers vectors, and do not phrase their result in terms of @math -convergence, but instead they keep @math fixed, and therefore express their renormalised energy in terms of the positions of dislocations, rather than densities. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2046646196"
],
"abstract": [
"In this work we discuss, from a variational viewpoint, the equilibrium problem for a finite number of Volterra dislocations in a plane domain. For a given set of singularities at fixed locations, we characterize elastic equilibrium as the limit of the minimizers of a family of energy functionals, obtained by a finite-core regularization of the elastic-energy functional. We give a sharpasymptotic estimate of the minimum energy as the core radius tends to zero, which allows one to eliminate this internal length scale from the problem. The energy content of a set of dislocations is fully characterized by the regular part of the asymptotic expansion, the so-called renormalized energy, which contains all information regarding self- and mutual interactions between the defects. Thus our result may be considered as the analogue for dislocations of the classical result of Bethuel, Brezis and Helein for Ginzburg--Landau vortices. We view the renormalized energy as the basic tool for the study of the discrete-to-con..."
]
} |
1409.4236 | 2951869972 | We consider systems of @math parallel edge dislocations in a single slip system, represented by points in a two-dimensional domain; the elastic medium is modelled as a continuum. We formulate the energy of this system in terms of the empirical measure of the dislocations, and prove several convergence results in the limit @math . The main aim of the paper is to study the convergence of the evolution of the empirical measure as @math . We consider rate-independent, quasi-static evolutions, in which the motion of the dislocations is restricted to the same slip plane. This leads to a formulation of the quasi-static evolution problem in terms of a modified Wasserstein distance, which is only finite when the transport plan is slip-plane-confined. Since the focus is on interaction between dislocations, we renormalize the elastic energy to remove the potentially large self- or core energy. We prove Gamma-convergence of this renormalized energy, and we construct joint recovery sequences for which both the energies and the modified distances converge. With this augmented Gamma-convergence we prove the convergence of the quasi-static evolutions as @math . | For screw dislocations the interaction energy was derived from discrete models in @cite_8 . Moreover in @cite_8 the authors also considered the time-dependent case. More precisely they proved the convergence of the discrete gradient flow for the discrete dislocation energy with flat dissipation to the gradient flow of the renormalised energy. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1992225844"
],
"abstract": [
"This paper aims at building a variational approach to the dynamics of discrete topological singularities in two dimensions, based on Γ-convergence. We consider discrete systems, described by scalar functions defined on a square lattice and governed by periodic interaction potentials. Our main motivation comes from XY spin systems, described by the phase parameter, and screw dislocations, described by the displacement function. For these systems, we introduce a discrete notion of vorticity. As the lattice spacing tends to zero we derive the first order Γ-limit of the free energy which is referred to as renormalized energy and describes the interaction of vortices. As a byproduct of this analysis, we show that such systems exhibit increasingly many metastable configurations of singularities. Therefore, we propose a variational approach to the depinning and dynamics of discrete vortices, based on minimizing movements. We show that, letting first the lattice spacing and then the time step of the minimizing movements tend to zero, the vortices move according with the gradient flow of the renormalized energy, as in the continuous Ginzburg–Landau framework."
]
} |
1409.4236 | 2951869972 | We consider systems of @math parallel edge dislocations in a single slip system, represented by points in a two-dimensional domain; the elastic medium is modelled as a continuum. We formulate the energy of this system in terms of the empirical measure of the dislocations, and prove several convergence results in the limit @math . The main aim of the paper is to study the convergence of the evolution of the empirical measure as @math . We consider rate-independent, quasi-static evolutions, in which the motion of the dislocations is restricted to the same slip plane. This leads to a formulation of the quasi-static evolution problem in terms of a modified Wasserstein distance, which is only finite when the transport plan is slip-plane-confined. Since the focus is on interaction between dislocations, we renormalize the elastic energy to remove the potentially large self- or core energy. We prove Gamma-convergence of this renormalized energy, and we construct joint recovery sequences for which both the energies and the modified distances converge. With this augmented Gamma-convergence we prove the convergence of the quasi-static evolutions as @math . | As for the slip-plane-confined motion, the only related work in the mathematical domain that we know of is @cite_22 , where the authors consider screw dislocations that may move along a finite set of directions. Such a system presents different mathematical difficulties, since each dislocation can, in theory, reach each point in the plane, in contrast to the single-slip-plane confinement of this paper. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2105814348"
],
"abstract": [
"The goal of this paper is the analytical validation of a model of Cermelli and Gurtin [Arch. Ration. Mech. Anal., 148 (1999), pp. 3--52] for an evolution law for systems of screw dislocations under the assumption of antiplane shear. The motion of the dislocations is restricted to a discrete set of glide directions, which are properties of the material. The evolution law is given by a “maximal dissipation criterion,” leading to a system of differential inclusions. Short time existence, uniqueness, cross-slip, and fine cross-slip of solutions are proved."
]
} |
1409.4236 | 2951869972 | We consider systems of @math parallel edge dislocations in a single slip system, represented by points in a two-dimensional domain; the elastic medium is modelled as a continuum. We formulate the energy of this system in terms of the empirical measure of the dislocations, and prove several convergence results in the limit @math . The main aim of the paper is to study the convergence of the evolution of the empirical measure as @math . We consider rate-independent, quasi-static evolutions, in which the motion of the dislocations is restricted to the same slip plane. This leads to a formulation of the quasi-static evolution problem in terms of a modified Wasserstein distance, which is only finite when the transport plan is slip-plane-confined. Since the focus is on interaction between dislocations, we renormalize the elastic energy to remove the potentially large self- or core energy. We prove Gamma-convergence of this renormalized energy, and we construct joint recovery sequences for which both the energies and the modified distances converge. With this augmented Gamma-convergence we prove the convergence of the quasi-static evolutions as @math . | Finally, there is an intriguing question that arises from the comparison with current continuum-scale modelling of plasticity (as in e.g. @cite_4 @cite_5 ). The limiting energy of Theorem is non-local, with an interaction kernel that has no intrinsic length scale. However, 'defect energies' in the continuum-level modelling are usually assumed to be local (see e.g. [Eq. (8.8)] GurtinAnandLele07 or [Eq. (6.16)] GurtinAnand05 ). It is unclear to us how these two descriptions can be reconciled. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2026868208",
"2047816899"
],
"abstract": [
"This study develops a one-dimensional theory of strain-gradient plasticity based on: (i) a system of microstresses consistent with a microforce balance; (ii) a mechanical version of the second law that includes, via microstresses, work performed during viscoplastic flow; (iii) a constitutive theory that allows • the free-energy to depend on the gradient of the plastic strain, and • the microstresses to depend on the gradient of the plastic strain-rate. The constitutive equations, whose rate-dependence is of power-law form, are endowed with energetic and dissipative gradient length-scales L and l, respectively, and allow for a gradient-dependent generalization of standard internal-variable hardening. The microforce balance when augmented by the constitutive relations for the microstresses results in a nonlocal flow rule in the form of a partial differential equation for the plastic strain. Typical macroscopic boundary conditions are supplemented by nonstandard microscopic boundary conditions associated with flow, and properties of the resulting boundary-value problem are studied both analytically and numerically. The resulting solutions are shown to exhibit three distinct physical phenomena: (i) standard (isotropic) internal-variable hardening; (ii) energetic hardening, with concomitant back stress, associated with plastic-strain gradients and resulting in boundary layer effects; (iii) dissipative strengthening associated with plastic strain-rate gradients and resulting in a size-dependent increase in yield strength.",
"This study develops a small-deformation theory of strain-gradient plasticity for isotropic materials in the absence of plastic rotation. The theory is based on a system of microstresses consistent with a microforce balance; a mechanical version of the second law that includes, via microstresses, work performed during viscoplastic flow; a constitutive theory that allows: • the microstresses to depend on ∇E˙p, the gradient of the plastic strain-rate, and • the free energy ψ to depend on the Burgers tensor G=curlEp. The microforce balance when augmented by constitutive relations for the microstresses results in a nonlocal flow rule in the form of a tensorial second-order partial differential equation for the plastic strain. The microstresses are strictly dissipative when ψ is independent of the Burgers tensor, but when ψ depends on G the gradient microstress is partially energetic, and this, in turn, leads to a back stress and (hence) to Bauschinger-effects in the flow rule. It is further shown that dependencies of the microstresses on ∇E˙p lead to strengthening and weakening effects in the flow rule. Typical macroscopic boundary conditions are supplemented by nonstandard microscopic boundary conditions associated with flow, and, as an aid to numerical solutions, a weak (virtual power) formulation of the nonlocal flow rule is derived."
]
} |
1409.3725 | 51750048 | Bringing together the ICT and the business layer of a service-oriented system (SoS) remains a great challenge. Few papers tackle the management of SoS from the business and organizational point of view. One solution is to use the well-known ITIL v.3 framework. The latter enables to transform the organization into a service-oriented organization which focuses on the value provided to the service customers. In this paper, we align the steps of the service provisioning model with the ITIL v.3 processes. The alignment proposed should help organizations and IT teams to integrate their ICT layer, represented by the SoS, and their business layer, represented by ITIL v.3. One main advantage of this combined use of ITIL and a SoS is the full service orientation of the company. | In @cite_4 , the authors propose a meta model of an enterprise service based on the service concept of ITIL v.2 and of the service-oriented paradigm. They do not tackle the possible relations between the ITIL processes and the activities of the SoS implementation and provisioning. Works such as @cite_17 use ITIL v.3 concepts to build a service-oriented and organizational framework. But they do not align the processes of ITIL with processes or activities of an SoS implementation or provisioning methodology. | {
"cite_N": [
"@cite_4",
"@cite_17"
],
"mid": [
"2046870826",
"2027705422"
],
"abstract": [
"Enterprise architecture supports organizational engineering in many ways. Service orientation is regarded as dominant operations model for service providers -- within and beyond IT. As a consequence, it is important to integrate service management and service orientation into enterprise architecture. This paper proposes an enterprise architecture extension that achieves such an integration. IT service management is defined according to ITIL. Based on the integration of service management into enterprise architecture, the integration of Service Oriented Architecture is discussed as a further extension. The research is based on the Business Engineering approach and the guidelines of Method Engineering.",
"Enterprise architecture (EA) is a new approach that organizations should practice to align their business strategic objectives with information and communication technology (ICT). Enterprise Architecture encompasses a collection of different views and aspects of the enterprise which constitute a comprehensive overview when put together. Such an overview cannot be well-organized without incorporating a logical structure called an Enterprise Architecture Framework (EAF). An EAF presents a transparent and comprehensive map of an organization that shows how all organization elements (business and IT) work together to achieve defined business objectives. Several distinctive EAFs have been proposed, but many organizations are struggling with using these frameworks. This article tries to eliminate the challenges of common and famous EAFs by using the Service Oriented (SO) paradigm. This service-oriented EAF (SOEAF), named KASRA EAF, involves an SO Roadmap that is compatible with ITIL and a Classification Schema comprising four rows and six columns."
]
} |
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | Chambolle and Pock @cite_48 considered a class of convex optimization problems with the following saddle-point structure: where @math , @math and @math are proper closed convex functions, with @math itself being the conjugate of a convex function @math . They developed the following first-order primal-dual algorithm: When both @math and @math are strongly convex and the parameters @math , @math and @math are chosen appropriately, this algorithm obtains accelerated linear convergence rate [Theorem 3] ChambollePock11 . | {
"cite_N": [
"@cite_48"
],
"mid": [
"2092663520"
],
"abstract": [
"In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems, where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation."
]
} |
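The first-order primal-dual iteration attributed to Chambolle and Pock in the record above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it applies the iteration (dual proximal ascent, primal proximal descent, primal extrapolation with parameter θ = 1) to a hypothetical ridge-regression saddle point, where both proximal maps have closed forms; the problem data, step sizes, and iteration count are illustrative assumptions.

```python
import numpy as np

# Sketch of the Chambolle-Pock first-order primal-dual iteration on a toy
# ridge-regression instance:  min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2,
# written as the saddle point  min_x max_y <A x, y> - f*(y) + g(x)
# with f(z) = 0.5*||z - b||^2 and g(x) = 0.5*lam*||x||^2.
# (Hypothetical data and parameters, for illustration only.)

rng = np.random.default_rng(0)
n, d, lam = 50, 10, 0.1
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

L = np.linalg.norm(A, 2)      # spectral norm ||A||_2
tau = sigma = 0.9 / L         # step sizes satisfying tau*sigma*||A||_2^2 < 1
theta = 1.0                   # extrapolation parameter

x = np.zeros(d)
y = np.zeros(n)
x_bar = x.copy()
for _ in range(20000):
    # dual step: y <- prox_{sigma f*}(y + sigma * A x_bar);
    # for f(z) = 0.5*||z - b||^2, prox_{sigma f*}(v) = (v - sigma*b)/(1 + sigma)
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
    # primal step: x <- prox_{tau g}(x - tau * A^T y);
    # for g(x) = 0.5*lam*||x||^2, prox_{tau g}(v) = v / (1 + tau*lam)
    x_new = (x - tau * (A.T @ y)) / (1.0 + tau * lam)
    # extrapolation on the primal variable
    x_bar = x_new + theta * (x_new - x)
    x = x_new

# compare against the closed-form ridge solution
x_star = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print("max error:", np.abs(x - x_star).max())
```

Since both the primal and dual objectives here are strongly convex, the iterates converge linearly to the saddle point, consistent with the linear-rate regime discussed in the abstract above.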
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | The batch complexity of the Chambolle-Pock algorithm is @math , where the @math notation hides the @math factor. We can bound the spectral norm @math by the Frobenius norm @math and obtain [ \|A\|_2 \le \|A\|_F \le \sqrt{n} \max_i \|a_i\|_2 = \sqrt{n} R. ] (Note that the second inequality above would be an equality if the columns of @math are normalized.) So in the worst case, the batch complexity of the Chambolle-Pock algorithm becomes [ \widetilde{O}(1 + R/\sqrt{\lambda\gamma}) = \widetilde{O}(1 + \sqrt{\kappa}), where \kappa = R^2/(\lambda\gamma), ] which matches the worst-case complexity of the AFG methods @cite_18 @cite_19 (see and also the discussions in [Section 5] LinLuXiao14apcg ). This is also of the same order as the complexity of SPDC with @math (see ). When the condition number @math , they can be @math worse than the batch complexity of SPDC with @math , which is @math . | {
"cite_N": [
"@cite_19",
"@cite_18"
],
"mid": [
"2030161963",
"2124541940"
],
"abstract": [
"In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate O(1/k)), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We present also the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.",
"It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name \"polynomial-time interior-point methods\", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12]."
]
} |
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | Our algorithms and theory can be readily generalized to solve the problem of [ \min_{x \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^n \phi_i(A_i^T x) + g(x), ] where each @math is an @math matrix, and @math is a smooth convex function. This more general formulation is used, e.g., in @cite_12 . Most recently, Lan @cite_29 considered a special case with @math and @math , and recognized that the dual coordinate proximal mapping used in and is equivalent to computing the primal gradients @math at a particular sequence of points @math . Based on this observation, he derived a similar randomized incremental gradient algorithm which shares the same order of iteration complexity as we presented in this paper. | {
"cite_N": [
"@cite_29",
"@cite_12"
],
"mid": [
"2964037929",
"2950080435"
],
"abstract": [
"In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the average of m (m ≥ 1) smooth components together with some other relatively simple terms. We first introduce a deterministic primal–dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a primal–dual termination criterion. Our major contribution is to develop a randomized primal–dual gradient (RPDG) method, which needs to compute the gradient of only one randomly selected smooth component at each iteration, but can possibly achieve better complexity than PDG in terms of the total number of gradient evaluations. More specifically, we show that the total number of gradient evaluations performed by RPDG can be O(√m) times smaller, both in expectation and with high probability, than those performed by deterministic optimal first-order methods under favorable situations. We also show that the complexity of the RPDG method is not improvable by developing a new lower complexity bound for a general class of randomized methods for solving large-scale finite-sum convex optimization problems.",
"We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings."
]
} |
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | We can also solve the primal problem via its dual: Because of the problem structure, coordinate ascent methods (e.g., @cite_20 @cite_4 @cite_40 @cite_43 ) can be more efficient than full gradient methods. In the stochastic dual coordinate ascent (SDCA) method @cite_43 , a dual coordinate @math is picked at random during each iteration and updated to increase the dual objective value. Shalev-Shwartz and Zhang @cite_43 showed that the iteration complexity of SDCA is @math , which corresponds to the batch complexity @math . | {
"cite_N": [
"@cite_43",
"@cite_40",
"@cite_4",
"@cite_20"
],
"mid": [
"1939652453",
"2165966284",
"2128529555",
"1512098439"
],
"abstract": [
"Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.",
"In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1- and L2-loss functions. The proposed method is simple and reaches an ε-accurate solution in O(log(1/ε)) iterations. Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation.",
"Linear support vector machines (SVM) are useful for classifying large-scale sparse data. Problems with sparse features are common in applications such as document classification and natural language processing. In this paper, we propose a novel coordinate descent algorithm for training linear SVM with the L2-loss function. At each step, the proposed method minimizes a one-variable sub-problem while fixing other variables. The sub-problem is solved by Newton steps with the line search technique. The procedure globally converges at the linear rate. As each sub-problem involves only values of a corresponding feature, the proposed approach is suitable when accessing a feature is more convenient than accessing an instance. Experiments show that our method is more efficient and stable than state of the art methods such as Pegasos and TRON.",
"This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm."
]
} |
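The SDCA scheme described in the record above (pick a dual coordinate at random, maximize the dual objective over it, maintain the primal iterate through the primal-dual relation) can be sketched concretely. The sketch below uses a hypothetical ridge-regression instance with squared loss, for which the per-coordinate dual maximization has a closed form; it is an illustration of the method class, not the cited authors' code, and the data and parameters are assumptions.

```python
import numpy as np

# Sketch of stochastic dual coordinate ascent (SDCA) for ridge regression:
#   min_w (1/n) * sum_i 0.5*(a_i^T w - b_i)^2 + 0.5*lam*||w||^2,
# maintaining the primal-dual relation w = (1/(lam*n)) * sum_i alpha_i * a_i.
# For the squared loss, the exact coordinate-wise dual maximization is
# available in closed form.  (Toy data; illustrative assumptions only.)

rng = np.random.default_rng(1)
n, d, lam = 100, 5, 0.1
A = rng.standard_normal((n, d))   # rows are the examples a_i
b = rng.standard_normal(n)

alpha = np.zeros(n)               # dual variables
w = np.zeros(d)                   # primal iterate, kept in sync with alpha
for _ in range(200 * n):          # roughly 200 passes over the data
    i = rng.integers(n)           # pick a dual coordinate uniformly at random
    a_i = A[i]
    # closed-form ascent step on alpha_i for the squared loss
    delta = (b[i] - a_i @ w - alpha[i]) / (1.0 + (a_i @ a_i) / (lam * n))
    alpha[i] += delta
    w += (delta / (lam * n)) * a_i

# closed-form minimizer of the primal objective, for comparison
w_star = np.linalg.solve(A.T @ A / n + lam * np.eye(d), A.T @ b / n)
print("max error:", np.abs(w - w_star).max())
```

With a smooth loss such as this one, SDCA converges linearly, consistent with the @math iteration complexity quoted in the record.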
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | For more general convex optimization problems, there is a vast literature on coordinate descent methods; see, e.g., the recent overview by Wright @cite_38 . In particular, Nesterov's work on randomized coordinate descent @cite_7 sparked a lot of recent activities on this topic. Richtárik and Takáč @cite_9 extended the algorithm and analysis to composite convex optimization. When applied to the dual problem , it becomes one variant of SDCA studied in @cite_43 . Mini-batch and distributed versions of SDCA have been proposed and analyzed in @cite_32 and @cite_28 respectively. Non-uniform sampling schemes have been studied for both stochastic gradient and SDCA methods (e.g., @cite_10 @cite_37 @cite_39 @cite_45 ). | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_32",
"@cite_39",
"@cite_43",
"@cite_45",
"@cite_10"
],
"mid": [
"2000769684",
"2047152541",
"2095984592",
"2123154536",
"2117686388",
"1780115997",
"1512309675",
"1939652453",
"1852760382",
"2950132609"
],
"abstract": [
"Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate descent approach, together with variants and extensions and their convergence properties, mostly with reference to convex objectives. We pay particular attention to a certain problem structure that arises frequently in machine learning applications, showing that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type. We also present some parallel variants and discuss their convergence properties under several models of parallel execution.",
"We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We assume the whole objective function is strongly convex. Such problems often arise in machine learning, known as regularized empirical risk minimization. We propose and analyze a new proximal stochastic gradient method, which uses a multistage scheme to progressively reduce the variance of the stochastic gradient. While each iteration of this algorithm has similar cost as the classical stochastic gradient method (or incremental gradient method), we show that the expected objective value converges to the optimum at a geometric rate. The overall complexity of this method is much lower than both the proximal full gradient method and the standard proximal stochastic gradient method.",
"In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial update of decision variables. For these methods, we prove the global estimates for the rate of convergence. Surprisingly enough, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, and its accelerated variant. Our numerical test confirms a high efficiency of this technique on problems of very big size.",
"We present and study a distributed optimization algorithm by employing a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often have better performances than stochastic gradient descent methods in optimizing regularized loss minimization problems. It still lacks of efforts in studying them in a distributed framework. We make a progress along the line by presenting a distributed stochastic dual coordinate ascent algorithm in a star network, with an analysis of the tradeoff between computation and communication. We verify our analysis by experiments on real data sets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods and distributed alternating direction methods of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performances.",
"In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an ε-accurate solution with probability at least 1 − ρ in at most O((n/ε) log(1/ρ)) iterations, where n is the number of blocks. This extends recent results of Nesterov (SIAM J Optim 22(2): 341–362, 2012), which cover the smooth case, to composite minimization, while at the same time improving the complexity by the factor of 4 and removing ε from the logarithmic term. More importantly, in contrast with the aforementioned work in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving first true iteration complexity bounds. For strongly convex functions the method converges linearly. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm is able to solve huge-scale ℓ1-regularized least squares problems with a billion variables.",
"We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods and use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss.",
"Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Gradient Descent (prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although uniform sampling can guarantee that the sampled stochastic quantity is an unbiased estimate of the corresponding true quantity, the resulting estimator may have a rather high variance, which negatively affects the convergence of the underlying optimization procedure. In this paper we study stochastic optimization with importance sampling, which improves the convergence rate by reducing the stochastic variance. Specifically, we study prox-SGD (actually, stochastic mirror descent) with importance sampling and prox-SDCA with importance sampling. For prox-SGD, instead of adopting uniform sampling throughout the training process, the proposed algorithm employs importance sampling to minimize the variance of the stochastic gradient. For prox-SDCA, the proposed importance sampling scheme aims to achieve higher expected dual value at each dual coordinate ascent step. We provide extensive theoretical analysis to show that the convergence rates with the proposed importance sampling methods can be significantly improved under suitable conditions both for prox-SGD and for prox-SDCA. Experiments are provided to verify the theoretical analysis.",
"Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.",
"We study the problem of minimizing the average of a large number of smooth convex functions penalized with a strongly convex regularizer. We propose and analyze a novel primal-dual method (Quartz) which at every iteration samples and updates a random subset of the dual variables, chosen according to an arbitrary distribution. In contrast to typical analysis, we directly bound the decrease of the primal-dual error (in expectation), without the need to first analyze the dual error. Depending on the choice of the sampling, we obtain efficient serial, parallel and distributed variants of the method. In the serial case, our bounds match the best known bounds for SDCA (both with uniform and importance sampling). With standard mini-batching, our bounds predict initial data-independent speedup as well as additional data-driven speedup which depends on spectral and sparsity properties of the data. We calculate theoretical speedup factors and find that they are excellent predictors of actual speedup in practice. Moreover, we illustrate that it is possible to design an efficient mini-batch importance sampling. The distributed variant of Quartz is the first distributed SDCA-like method with an analysis for non-separable data.",
"We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning @math (where @math is a bound on the smoothness and @math on the strong convexity) to a linear dependence on @math . Furthermore, we show how reweighting the sampling distribution (i.e. importance sampling) is necessary in order to further improve convergence, and obtain a linear dependence in the average smoothness, dominating previous results. We also discuss importance sampling for SGD more broadly and show how it can improve convergence also in other scenarios. Our results are based on a connection we make between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods. In particular, we recast the randomized Kaczmarz algorithm as an instance of SGD, and apply our results to prove its exponential convergence, but to the solution of a weighted least squares problem rather than the original least squares problem. We then present a modified Kaczmarz algorithm with partially biased sampling which does converge to the original least squares solution with the same exponential convergence rate."
]
} |
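The randomized coordinate descent scheme with non-uniform sampling surveyed in the record above can be illustrated with a small sketch. The toy quadratic instance below is a hypothetical assumption, not tied to any of the cited papers' code; it samples coordinate i with probability proportional to the coordinate-wise Lipschitz constant L_i, one of the sampling schemes analyzed in Nesterov's paper.

```python
import numpy as np

# Sketch of randomized coordinate descent on a smooth, strongly convex
# quadratic  f(x) = 0.5 * x^T Q x - c^T x  with Q symmetric positive definite.
# Coordinate-wise Lipschitz constants are L_i = Q_ii, and coordinates are
# sampled with probability p_i proportional to L_i (non-uniform sampling).
# (The problem instance is an illustrative assumption.)

rng = np.random.default_rng(2)
d = 20
M = rng.standard_normal((d, d))
Q = M @ M.T + np.eye(d)           # symmetric positive definite by construction
c = rng.standard_normal(d)

L = np.diag(Q).copy()             # L_i = Q_ii
p = L / L.sum()                   # sampling distribution p_i proportional to L_i

x = np.zeros(d)
for _ in range(20000):
    i = rng.choice(d, p=p)        # pick a coordinate, biased toward large L_i
    grad_i = Q[i] @ x - c[i]      # i-th partial derivative of f
    x[i] -= grad_i / L[i]         # exact minimization along coordinate i

x_star = np.linalg.solve(Q, c)    # the unique minimizer, for comparison
print("max error:", np.abs(x - x_star).max())
```

On strongly convex problems this iteration converges linearly in expectation, with a rate governed by the ratio of the strong convexity constant to the sum of the L_i, which is what makes Lipschitz-proportional sampling attractive when the L_i are very unequal.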
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | Shalev-Shwartz and Zhang @cite_0 proposed an accelerated mini-batch SDCA method which incorporates additional primal updates than SDCA, and bears some similarity to our Mini-Batch SPDC method. They showed that its complexity interpolates between that of SDCA and AFG by varying the mini-batch size @math . In particular, for @math , it matches that of the AFG methods (as SPDC does). But for @math , the complexity of their method is the same as SDCA, which is worse than SPDC for ill-conditioned problems. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2951833684"
],
"abstract": [
"Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of nesterov2007gradient ."
]
} |
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | In addition, Shalev-Shwartz and Zhang @cite_12 developed an accelerated proximal SDCA method which achieves the same batch complexity @math as SPDC. Their method is an inner-outer iteration procedure, where the outer loop is a full-dimensional accelerated gradient method in the primal space @math . At each iteration of the outer loop, the SDCA method @cite_43 is called to solve the dual problem with customized regularization parameter and precision. In contrast, SPDC is a straightforward single-loop coordinate optimization methods. | {
"cite_N": [
"@cite_43",
"@cite_12"
],
"mid": [
"1939652453",
"2950080435"
],
"abstract": [
"Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.",
"We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings."
]
} |
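The SDCA update analyzed in @cite_43 has a closed form when the loss is smooth. As a hedged illustration (this is a minimal sketch for ridge regression, not code from either paper; the function name, data, and epoch count are invented for the example), each step exactly maximizes the dual over one randomly chosen coordinate while keeping the primal iterate in sync:

```python
import numpy as np

def sdca_ridge(X, y, lam, epochs=200, seed=0):
    """Minimal SDCA sketch for ridge regression:
    min_w (1/(2n)) * ||X w - y||^2 + (lam/2) * ||w||^2.
    Maintains the primal-dual link w = X^T alpha / (lam * n)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)   # one dual variable per training example
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # exact maximization of the dual over coordinate i (squared loss)
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w

# tiny synthetic check against the closed-form ridge minimizer
rng = np.random.default_rng(1)
X, y, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
w_sdca = sdca_ridge(X, y, lam)
w_exact = np.linalg.solve(X.T @ X / 20 + lam * np.eye(5), X.T @ y / 20)
```

After a few hundred epochs on this toy problem the iterate matches the closed-form minimizer to high accuracy, consistent with the linear convergence guarantees discussed in the abstracts above.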
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | More recently, @cite_44 developed an accelerated proximal coordinate gradient (APCG) method for solving a more general class of composite convex optimization problems. When applied to the dual problem , APCG enjoys the same batch complexity @math as SPDC. However, it needs an extra primal proximal-gradient step to have theoretical guarantees on the convergence of the primal-dual gap (Section 5.1 of @cite_44 ). The computational cost of this additional step is equivalent to one pass of the dataset, thus it does not affect the overall complexity. | {
"cite_N": [
"@cite_44"
],
"mid": [
"2950457428"
],
"abstract": [
"We consider the problem of minimizing the sum of two convex functions: one is smooth and given by a gradient oracle, and the other is separable over blocks of coordinates and has a simple known structure over each block. We develop an accelerated randomized proximal coordinate gradient (APCG) method for minimizing such convex composite functions. For strongly convex functions, our method achieves faster linear convergence rates than existing randomized proximal coordinate gradient methods. Without strong convexity, our method enjoys accelerated sublinear convergence rates. We show how to apply the APCG method to solve the regularized empirical risk minimization (ERM) problem, and devise efficient implementations that avoid full-dimensional vector operations. For ill-conditioned ERM problems, our method obtains improved convergence rates than the state-of-the-art stochastic dual coordinate ascent (SDCA) method."
]
} |
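APCG adds Nesterov-style acceleration on top of a basic randomized proximal coordinate step. The un-accelerated step it builds on can be sketched for the lasso, a standard instance of the smooth-plus-separable composite problem described in the abstract (all names, data sizes, and the epoch count below are illustrative assumptions, not taken from the APCG paper):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(X, y, lam, epochs=100, seed=0):
    """Randomized (un-accelerated) proximal coordinate descent for
    min_w 0.5 * ||X w - y||^2 + lam * ||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                     # residual, updated incrementally
    col_sq = (X ** 2).sum(axis=0)     # per-coordinate curvature ||x_j||^2
    for _ in range(epochs):
        for j in rng.permutation(d):
            # exact coordinate minimization = proximal step on lam * |w_j|
            rho = X[:, j] @ r + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (w[j] - w_new)   # keep r = y - X w consistent
            w[j] = w_new
    return w

rng = np.random.default_rng(2)
X, y, lam = rng.standard_normal((30, 8)), rng.standard_normal(30), 1.0
w_cd = cd_lasso(X, y, lam)
```

Maintaining the residual incrementally is what keeps the per-iteration cost proportional to a single column rather than the whole matrix, the property the coordinate methods above rely on.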
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | Another way to approach problem is to reformulate it as a constrained optimization problem and solve it by ADMM type of operator-splitting methods (e.g., @cite_30 ). In fact, as shown in @cite_48 , the batch primal-dual algorithm - is equivalent to a pre-conditioned ADMM (or inexact Uzawa method; see, e.g., @cite_17 ). Several authors @cite_3 @cite_49 @cite_36 @cite_24 have considered a more general formulation than , where each @math is a function of the whole vector @math . They proposed online or stochastic versions of ADMM which operate on only one @math in each iteration, and obtained sublinear convergence rates. However, their cost per iteration is @math instead of @math . | {
"cite_N": [
"@cite_30",
"@cite_36",
"@cite_48",
"@cite_3",
"@cite_24",
"@cite_49",
"@cite_17"
],
"mid": [
"2019569173",
"25321933",
"2092663520",
"1563975843",
"2951481254",
"2510516734",
"2016910236"
],
"abstract": [
"Splitting algorithms for the sum of two monotone operators. We study two splitting algorithms for (stationary and evolution) problems involving the sum of two monotone operators. These algorithms ar...",
"We develop new stochastic optimization methods that are applicable to a wide range of structured regularizations. Basically our methods are combinations of basic stochastic optimization techniques and Alternating Direction Multiplier Method (ADMM). ADMM is a general framework for optimizing a composite function, and has a wide range of applications. We propose two types of online variants of ADMM, which correspond to online proximal gradient descent and regularized dual averaging respectively. The proposed algorithms are computationally efficient and easy to implement. Our methods yield O(1 √T) convergence of the expected risk. Moreover, the online proximal gradient descent type method yields O(log(T) T) convergence for a strongly convex loss. Numerical experiments show effectiveness of our methods in learning tasks with structured sparsity such as overlapped group lasso.",
"In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems, where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.",
"Online optimization has emerged as a powerful tool in large scale optimization. In this paper, we introduce efficient online algorithms based on the alternating directions method (ADM). We introduce a new proof technique for ADM in the batch setting, which yields the O(1/T) convergence rate of ADM and forms the basis of regret analysis in the online setting. We consider two scenarios in the online setting, based on whether the solution needs to lie in the feasible set or not. In both settings, we establish regret bounds for both the objective function as well as constraint violation for general and strongly convex functions. Preliminary results are presented to illustrate the performance of the proposed algorithms.",
"In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a low per-iteration complexity as existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from @math to @math , where @math is the number of iterations. This matches the convergence rate of the batch ADMM algorithm, but without the need to visit all the samples in each iteration. Experiments on the graph-guided fused lasso demonstrate that the new algorithm is significantly faster than state-of-the-art stochastic and batch ADMM algorithms.",
"The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/√t) for convex functions and O(log t / t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named Graph-Guided SVM is proposed to demonstrate the usefulness of our algorithm.",
"In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal and image processing applications. We also show the connections to existing methods, in particular Bregman iteration (, Multiscale Model. Simul. 4(2):460–489, 2005) based methods, such as linearized Bregman (, Commun. Math. Sci. 8(1):93–111, 2010; , SIAM J. Imag. Sci. 2(1):226–252, 2009, CAM Report 09-28, UCLA, March 2009; Yin, CAAM Report, Rice University, 2009) and split Bregman (Goldstein and Osher, SIAM J. Imag. Sci., 2, 2009). The convergence of the general algorithm framework is proved under mild assumptions. The applications to ℓ1 basis pursuit, TV-L2 minimization and matrix completion are demonstrated. Finally, the numerical examples show the algorithms proposed are easy to implement, efficient, stable and flexible enough to cover a wide variety of applications.",
]
} |
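As a point of reference for the ADMM-based alternatives discussed above, the batch ADMM for the lasso alternates a quadratic solve, a soft-threshold, and a dual update. This is the standard consensus splitting, offered only as a hedged sketch (the variable names, penalty `rho`, iteration count, and toy data are assumptions made for the example, not drawn from the cited papers):

```python
import numpy as np

def admm_lasso(X, y, lam, rho=1.0, iters=500):
    """Batch ADMM sketch for min_w 0.5 * ||X w - y||^2 + lam * ||w||_1,
    using the splitting w = z with scaled dual variable u."""
    n, d = X.shape
    z, u = np.zeros(d), np.zeros(d)
    M = np.linalg.inv(X.T @ X + rho * np.eye(d))  # factor once, reuse below
    Xty = X.T @ y
    for _ in range(iters):
        w = M @ (Xty + rho * (z - u))             # smooth quadratic subproblem
        v = w + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of lam*||.||_1
        u += w - z                                # dual ascent on w - z = 0
    return z

rng = np.random.default_rng(3)
X, y, lam = rng.standard_normal((30, 8)), rng.standard_normal(30), 1.0
w_admm = admm_lasso(X, y, lam)
```

The contrast with the stochastic variants surveyed above is visible in the first update: the batch method touches the full matrix (through the cached factor) at every iteration, which is exactly the per-iteration cost the stochastic ADMM papers try to avoid.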
1409.3257 | 2950369225 | We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods. | Suzuki @cite_11 considered a problem similar to , but with more complex regularization function @math , meaning that @math does not have a simple proximal mapping. Thus primal updates such as step or in SPDC and similar steps in SDCA cannot be computed efficiently. He proposed an algorithm that combines SDCA @cite_43 and ADMM (e.g., @cite_21 ), and showed that it has linear rate of convergence under similar conditions as Assumption . It would be interesting to see if the SPDC method can be extended to their setting to obtain accelerated linear convergence rate. | {
"cite_N": [
"@cite_43",
"@cite_21",
"@cite_11"
],
"mid": [
"1939652453",
"2164278908",
"38875623"
],
"abstract": [
"Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.",
"Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.",
"We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on alternating direction method of multipliers (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or few sample observations. Moreover, our method can naturally afford mini-batch update and it gives speed up of convergence. We show that, under mild assumptions, our method converges exponentially. The numerical experiments show that our method actually performs efficiently."
]
} |
1409.2983 | 2950359997 | In this paper, we present a novel approach to predict crime in a geographic space from multiple data sources, in particular mobile phone and demographic data. The main contribution of the proposed approach lies in using aggregated and anonymized human behavioral data derived from mobile network activity to tackle the crime prediction problem. While previous research efforts have used either background historical knowledge or offenders' profiling, our findings support the hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime. In our experimental results with real crime data from London we obtain an accuracy of almost 70% when predicting whether a specific area in the city will be a crime hotspot or not. Moreover, we provide a discussion of the implications of our findings for data-driven crime analysis. | Researchers have devoted attention to the study of criminal behavior dynamics both from a people- and place-centric perspective. The people-centric perspective has mostly been used for individual or collective criminal profiling. Wang @cite_3 proposed Series Finder, a machine learning approach to the problem of detecting specific patterns in crimes that are committed by the same offender or group of offenders. In @cite_45 , a biased random walk model is proposed, built upon empirical knowledge of criminal offenders' behavior along with spatio-temporal crime information to take into account repeating patterns in historical crime data. Furthermore, Ratcliffe @cite_44 investigated the spatio-temporal constraints underlying offenders' criminal behavior. | {
"cite_N": [
"@cite_44",
"@cite_45",
"@cite_3"
],
"mid": [
"",
"2155458379",
"1776456436"
],
"abstract": [
"",
"Motivated by empirical observations of spatio-temporal clusters of crime across a wide variety of urban settings, we present a model to study the emergence, dynamics, and steady-state properties of crime hotspots. We focus on a two-dimensional lattice model for residential burglary, where each site is characterized by a dynamic attractiveness variable, and where each criminal is represented as a random walker. The dynamics of criminals and of the attractiveness field are coupled to each other via specific biasing and feedback mechanisms. Depending on parameter choices, we observe and describe several regimes of aggregation, including hotspots of high criminal activity. On the basis of the discrete system, we also derive a continuum model; the two are in good quantitative agreement for large system sizes. By means of a linear stability analysis we are able to determine the parameter values that will lead to the creation of stable hotspots. We discuss our model and results in the context of established crim...",
"Our goal is to automatically detect patterns of crime. Among a large set of crimes that happen every year in a major city, it is challenging, time-consuming, and labor-intensive for crime analysts to determine which ones may have been committed by the same individual(s). If automated, data-driven tools for crime pattern detection are made available to assist analysts, these tools could help police to better understand patterns of crime, leading to more precise attribution of past crimes, and the apprehension of suspects. To do this, we propose a pattern detection algorithm called Series Finder, that grows a pattern of discovered crimes from within a database, starting from a \"seed\" of a few crimes. Series Finder incorporates both the common characteristics of all patterns and the unique aspects of each specific pattern, and has had promising results on a decade's worth of crime pattern data collected by the Crime Analysis Unit of the Cambridge Police Department."
]
} |
1409.2983 | 2950359997 | In this paper, we present a novel approach to predict crime in a geographic space from multiple data sources, in particular mobile phone and demographic data. The main contribution of the proposed approach lies in using aggregated and anonymized human behavioral data derived from mobile network activity to tackle the crime prediction problem. While previous research efforts have used either background historical knowledge or offenders' profiling, our findings support the hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime. In our experimental results with real crime data from London we obtain an accuracy of almost 70% when predicting whether a specific area in the city will be a crime hotspot or not. Moreover, we provide a discussion of the implications of our findings for data-driven crime analysis. | More recently, the proliferation of social media has sparked interest in using this kind of data to predict a variety of variables, including electoral outcomes @cite_39 and market trends @cite_16 . In this line, Wang @cite_2 proposed the use of social media to predict criminal incidents. Their approach relies on a semantic analysis of tweets using natural language processing along with spatio-temporal information derived from neighborhood demographic data and the tweets' metadata. | {
"cite_N": [
"@cite_16",
"@cite_2",
"@cite_39"
],
"mid": [
"2171468534",
"144670803",
"1590495275"
],
"abstract": [
"Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e. can societies experience mood states that affect their collective decision making? By extension is the public mood correlated or even predictive of economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds by two mood tracking tools, namely OpinionFinder that measures positive vs. negative mood and Google-Profile of Mood States (GPOMS) that measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6% in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error by more than 6%. Index Terms—stock market prediction — twitter — mood analysis.",
"Prior work on criminal incident prediction has relied primarily on the historical crime record and various geospatial and demographic information sources. Although promising, these models do not take into account the rich and rapidly expanding social media context that surrounds incidents of interest. This paper presents a preliminary investigation of Twitter-based criminal incident prediction. Our approach is based on the automatic semantic analysis and understanding of natural language Twitter posts, combined with dimensionality reduction via latent Dirichlet allocation and prediction via linear modeling. We tested our model on the task of predicting future hit-and-run crimes. Evaluation results indicate that the model comfortably outperforms a baseline model that predicts hit-and-run incidents uniformly across all days.",
"Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content-analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets’ political sentiment demonstrates close correspondence to the parties' and politicians’ political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research."
]
} |
1409.3207 | 1810284274 | Consider a network consisting of two subnetworks (communities) connected by some external edges. Given the network topology, the community detection problem can be cast as a graph partitioning problem that aims to identify the external edges as the graph cut that separates these two subnetworks. In this paper, we consider a general model where two arbitrarily connected subnetworks are connected by random external edges. Using random matrix theory and concentration inequalities, we show that when one performs community detection via spectral clustering there exists an abrupt phase transition as a function of the random external edge connection probability. Specifically, the community detection performance transitions from almost perfect detectability to low detectability near some critical value of the random external edge connection probability. We derive upper and lower bounds on the critical value and show that the bounds are equal to each other when two subnetwork sizes are identical. Using simulated and experimental data we show how these bounds can be empirically estimated to validate the detection reliability of any discovered communities. | Community detection arises in technological, social, and biological networks. For social science, the goal is to find tightly connected subgraphs in a social network @cite_31 . In @cite_40 , Newman proposes a measure called modularity that evaluates the number of excessive edges of a graph compared with the corresponding degree-equivalent random graph. More specifically, define the modularity matrix as @math , where @math is the degree vector and @math is the number of edges in the graph. The last term @math is the expected adjacency matrix of the degree-equivalent random graph. Similar to spectral clustering, the community indication vector is obtained by performing K-means clustering on the largest eigenvector of @math . 
We will compare the community detection results of spectral clustering and the modularity method in Sec. . | {
"cite_N": [
"@cite_40",
"@cite_31"
],
"mid": [
"2151936673",
"2127048411"
],
"abstract": [
"Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.",
"The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks."
]
} |
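The modularity method summarized in this row can be sketched in a few lines of NumPy. This is a hedged illustration (the planted two-block test graph, probabilities, and function names are invented for the example); it uses a sign split of the leading eigenvector, which plays the role of the two-means step in one dimension:

```python
import numpy as np

def modularity_partition(A):
    """Two-way split from the leading eigenvector of the modularity matrix
    B = A - d d^T / (2m), following Newman's spectral method."""
    deg = A.sum(axis=1)
    m = deg.sum() / 2.0                      # number of edges
    B = A - np.outer(deg, deg) / (2.0 * m)
    vals, vecs = np.linalg.eigh(B)
    lead = vecs[:, np.argmax(vals)]          # eigenvector of the largest eigenvalue
    return (lead >= 0).astype(int)           # sign split ~ 2-means in one dimension

# planted two-community graph: dense inside blocks, sparse across
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 15)
P = np.where(labels[:, None] == labels[None, :], 0.6, 0.05)
A = np.triu(rng.random((30, 30)) < P, 1).astype(float)
A = A + A.T
pred = modularity_partition(A)
```

On a graph this well separated the sign pattern of the leading eigenvector recovers the planted communities up to relabeling.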
1409.3207 | 1810284274 | Consider a network consisting of two subnetworks (communities) connected by some external edges. Given the network topology, the community detection problem can be cast as a graph partitioning problem that aims to identify the external edges as the graph cut that separates these two subnetworks. In this paper, we consider a general model where two arbitrarily connected subnetworks are connected by random external edges. Using random matrix theory and concentration inequalities, we show that when one performs community detection via spectral clustering there exists an abrupt phase transition as a function of the random external edge connection probability. Specifically, the community detection performance transitions from almost perfect detectability to low detectability near some critical value of the random external edge connection probability. We derive upper and lower bounds on the critical value and show that the bounds are equal to each other when two subnetwork sizes are identical. Using simulated and experimental data we show how these bounds can be empirically estimated to validate the detection reliability of any discovered communities. | Our model is more general than the stochastic block model since it does not assume any edge connection models within the communities. The details are discussed in Sec. . A similar model is studied in @cite_26 for interconnected networks. However, in @cite_26 the subnetworks are of equal size and the external edges are known (i.e., non-random). The main contribution of @cite_26 was a study of the eigenstructure of the overall graph Laplacian matrix with different interconnected edge strengths as contrasted to community detection. The simulation results in @cite_4 show that phase transition on community detectability exists under this general model, yet the critical phase transition threshold is still poorly understood. 
Phase transition results on p-resistance distances of random geometric graphs are obtained in @cite_2 . The authors of @cite_2 show that there exist two critical thresholds for the p-resistance. The first (lower) threshold depends on the global graph topology while the second (higher) threshold only depends on local graph connectivity. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_2"
],
"mid": [
"2024224210",
"2078528040",
""
],
"abstract": [
"Real-world networks are rarely isolated. A model of an interdependent network of networks shows that an abrupt phase transition occurs when interconnections between independent networks are added. This study also suggests ways to minimize the danger of abrupt structural changes to real networks.",
"Communities are fundamental entities for the characterization of the structure of real networks. The standard approach to the identification of communities in networks is based on the optimization of a quality function known as modularity. Although modularity has been at the center of an intense research activity and many methods for its maximization have been proposed, not much is yet known about the necessary conditions that communities need to satisfy in order to be detectable with modularity maximization methods. Here, we develop a simple theory to establish these conditions, and we successfully apply it to various classes of network models. Our main result is that heterogeneity in the degree distribution helps modularity to correctly recover the community structure of a network and that, in the realistic case of scale-free networks with degree exponent γ < 2.5, modularity is always able to detect the presence of communities.",
""
]
} |
1409.2902 | 2952991908 | Hildreth's algorithm is a row action method for solving large systems of inequalities. This algorithm is efficient for problems with sparse matrices, as opposed to direct methods such as Gaussian elimination or QR-factorization. We apply Hildreth's algorithm, as well as a randomized version, along with prioritized selection of the inequalities, to efficiently detect the highest priority feasible subsystem of equations. We prove convergence results and feasibility criteria for both cyclic and randomized Hildreth's algorithm, as well as a mixed algorithm which uses Hildreth's algorithm for inequalities and the Kaczmarz algorithm for equalities. These prioritized, sparse systems of inequalities commonly appear in constraint-based user interface (UI) layout specifications. The performance and convergence of these proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver, and the recently developed Kaczmarz algorithm with prioritized IIS detection. | Besides methods for MaxFS there are also some methods to solve the IIS problem. These methods are: deletion filtering, IIS detection and grouping constraints. Deletion filtering @cite_19 removes constraints from the set of constraints and checks the feasibility of the reduced set. IIS detection @cite_29 starts with a single constraint and adds constraints successively. The grouping constraints method @cite_40 was introduced to speed up the aforementioned algorithms by adding or removing groups of constraints simultaneously. | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_40"
],
"mid": [
"2160089068",
"1981066696",
"2113730096"
],
"abstract": [
"With ongoing advances in hardware and software, the bottleneck in linear programming is no longer a model solution, it is the correct formulation of large models in the first place. During initial formulation (or modification), a very large model may prove infeasible, but it is often difficult to determine how to correct it. We present a formulation aid which analyzes infeasible LPs and identifies minimal sets of inconsistent constraints from among the perhaps very large set of constraints defining the problem. This information helps to focus the search for a diagnosis of the problem, speeding the repair of the model. We present a series of filtering routines and a final integrated algorithm which guarantees the identification of at least one minimal set of inconsistent constraints. This guarantee is a significant advantage over previous methods. The algorithms are simple, relatively efficient, and easily incorporated into standard LP solvers. Preliminary computational results are reported. INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.",
"This paper presents ideas from goal programming (GP) used as an accompaniment to linear programming (LP) for the analysis of LP infeasibility. A new algorithm (GPIIS) for the detection of irreducibly inconsistent systems (IIS) of constraints is presented using this approach. The structure necessary for implementing such a procedure into a commercial LP solver is outlined. Results for a selection of infeasible LP problems are given, and conclusions drawn.",
"Algorithms and computer-based tools for analyzing infeasible linear and nonlinear programs have been developed in recent years, but few such tools exist for infeasible mixed-integer or integer linear programs. One approach that has proven especially useful for infeasible linear programs is the isolation of an Irreducible Infeasible Set of constraints (IIS), a subset of the constraints defining the overall linear program that is itself infeasible, but for which any proper subset is feasible. Isolating an IIS from the larger model speeds the diagnosis and repair of the model by focussing the analytic effort. This paper describes and tests algorithms for finding small infeasible sets in infeasible mixed-integer and integer linear programs; where possible these small sets are IISs."
]
} |
1409.2329 | 1591801644 | We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation. | Dropout @cite_3 is a recently introduced regularization method that has been very successful with feed-forward neural networks. While much work has extended dropout in various ways @cite_15 @cite_8 , there has been relatively little research in applying it to RNNs. The only paper on this topic is by , who focuses on ``marginalized dropout'' @cite_15 , a noiseless deterministic approximation to standard dropout. claim that conventional dropout does not work well with RNNs because the recurrence amplifies noise, which in turn hurts learning. In this work, we show that this problem can be fixed by applying dropout to a certain subset of the RNNs' connections. As a result, RNNs can now also benefit from dropout. | {
"cite_N": [
"@cite_15",
"@cite_3",
"@cite_8"
],
"mid": [
"2115701093",
"2183112036",
"4919037"
],
"abstract": [
"Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has been focused on the optimization or modelling of RNNs, mostly motivated by addressing the problems of the vanishing and exploding gradients. The control of overfitting has seen considerably less attention. This paper contributes to that by analyzing fast dropout, a recent regularization method for generalized linear models and neural networks from a back-propagation inspired perspective. We show that fast dropout implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions and vanishes at minima of an unregularized training loss. The derivatives of that regularizer are exclusively based on the training error signal. One consequence of this is the absence of a global weight attractor, which is particularly appealing for RNNs, since the dynamics are not biased towards a certain regime. We positively test the hypothesis that this improves the performance of RNNs on four musical data sets.",
"",
"We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models."
]
} |
1409.2329 | 1591801644 | We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation. | Independently of our work, @cite_14 developed the very same RNN regularization method and applied it to handwriting recognition. We rediscovered this method and demonstrated strong empirical results over a wide range of problems. Other work that applied dropout to LSTMs is @cite_27 . | {
"cite_N": [
"@cite_27",
"@cite_14"
],
"mid": [
"1836307405",
"1987937363"
],
"abstract": [
"Neural language models (LMs) based on recurrent neural networks (RNN) are some of the most successful word and character-level LMs. Why do they work so well, in particular better than linear neural LMs? Possible explanations are that RNNs have an implicitly better regularization or that RNNs have a higher capacity for storing patterns due to their nonlinearities or both. Here we argue for the first explanation in the limit of little training data and the second explanation for large amounts of text data. We show state-of-the-art performance on the popular and small Penn dataset when RNN LMs are regularized with random dropout. Nonetheless, we show even better performance from a simplified, much less expressive linear RNN model without off-diagonal entries in the recurrent matrix. We call this model an impulse-response LM (IRLM). Using random dropout, column normalization and annealed learning rates, IRLMs develop neurons that keep a memory of up to 50 words in the past and achieve a perplexity of 102.5 on the Penn dataset. On two large datasets however, the same regularization methods are unsuccessful for both models and the RNN's expressivity allows it to overtake the IRLM by 10 and 20 percent perplexity, respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the Microsoft Research Sentence Completion (MRSC) task. We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units and show that the LCUs alone achieve a state-of-the-art performance on the MRSC task of 60.8%. Our analysis indicates that a fruitful direction of research for neural LMs lies in developing more accessible internal representations, and suggests an optimization regime of very high momentum terms for effectively training such models.",
"Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequence is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections."
]
} |
1409.2329 | 1591801644 | We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation. | In this paper, we consider the following tasks: language modeling, speech recognition, and machine translation. Language modeling is the first task where RNNs have achieved substantial success @cite_5 @cite_20 @cite_13 . RNNs have also been successfully used for speech recognition @cite_0 @cite_24 and have recently been applied to machine translation, where they are used for language modeling, re-ranking, or phrase modeling @cite_1 @cite_23 @cite_6 @cite_26 @cite_10 . | {
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_13",
"@cite_20"
],
"mid": [
"2131513205",
"2126725946",
"2251682575",
"2950635152",
"1517386993",
"",
"1753482797",
"179875071",
"",
""
],
"abstract": [
"In this paper, we describe BYBLOS, the BBN continuous speech recognition system. The system, designed for large vocabulary applications, integrates acoustic, phonetic, lexical, and linguistic knowledge sources to achieve high recognition performance. The basic approach, as described in previous papers [1, 2], makes extensive use of robust context-dependent models of phonetic coarticulation using Hidden Markov Models (HMM). We describe the components of the BYBLOS system, including: signal processing frontend, dictionary, phonetic model training system, word model generator, grammar and decoder. In recognition experiments, we demonstrate consistently high word recognition performance on continuous speech across: speakers, task domains, and grammars of varying complexity. In speaker-dependent mode, where 15 minutes of speech is required for training to a speaker, 98.5% word accuracy has been achieved in continuous speech for a 350-word task, using grammars with perplexity ranging from 30 to 60. With only 15 seconds of training speech we demonstrate performance of 97% using a grammar.",
"Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.",
"Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into any MT decoder. We also present several variations of the NNJM which provide significant additive improvements.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"This chapter describes a use of recurrent neural networks (i.e., feedback is incorporated in the computation) as an acoustic model for continuous speech recognition. The form of the recurrent neural network is described along with an appropriate parameter estimation procedure. For each frame of acoustic data, the recurrent network generates an estimate of the posterior probability of the possible phones given the observed acoustic signal. The posteriors are then converted into scaled likelihoods and used as the observation probabilities within a conventional decoding paradigm (e.g., Viterbi decoding). The advantages of using recurrent networks are that they require a small number of parameters and provide a fast decoding capability (relative to conventional, large-vocabulary, HMM systems).",
"",
"We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of state-of-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.",
"A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition",
"",
""
]
} |
1409.2762 | 2952073973 | Collaborative filtering is amongst the most preferred techniques when implementing recommender systems. Recently, great interest has turned towards parallel and distributed implementations of collaborative filtering algorithms. This work is a survey of the parallel and distributed collaborative filtering implementations, aiming not only to provide a comprehensive presentation of the field's development, but also to offer future research orientation by highlighting the issues that need to be further developed. | This section is devoted to briefly outlining the surveys concerning recommender systems. Recommender systems that combine different recommendation techniques are presented in one of the first surveys @cite_102 . A comparison among the different recommendation techniques is provided and their advantages and disadvantages are discussed. Also, the different hybridization methods are described. The existing hybrid approaches are briefly presented and a hybrid recommender system that combines knowledge-based recommendation and collaborative filtering is introduced. Experiments are conducted on the proposed recommender system using data derived from the web server's log. This survey proved that there were many combinations of techniques to be explored and outlined the needs of the field of hybrid recommender systems. | {
"cite_N": [
"@cite_102"
],
"mid": [
"281665770"
],
"abstract": [
"Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering."
]
} |
1409.2762 | 2952073973 | Collaborative filtering is amongst the most preferred techniques when implementing recommender systems. Recently, great interest has turned towards parallel and distributed implementations of collaborative filtering algorithms. This work is a survey of the parallel and distributed collaborative filtering implementations, aiming not only to provide a comprehensive presentation of the field's development, but also to offer future research orientation by highlighting the issues that need to be further developed. | One of the early surveys addressing recommender systems is @cite_21 . Recommender systems are classified into three categories. Content-based, collaborative and hybrid implementations. The constraints of each category are discussed and possible ways to improve the recommendation methods are proposed. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2171960770"
],
"abstract": [
"This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations."
]
} |
1409.2762 | 2952073973 | Collaborative filtering is amongst the most preferred techniques when implementing recommender systems. Recently, great interest has turned towards parallel and distributed implementations of collaborative filtering algorithms. This work is a survey of the parallel and distributed collaborative filtering implementations, aiming not only to provide a comprehensive presentation of the field's development, but also to offer future research orientation by highlighting the issues that need to be further developed. | Context-aware technology enhanced recommender systems are discussed in one of the most recent surveys @cite_19 . A classification framework of the context information is introduced, which assigns the contextual information among 8 categories. The existing context-aware recommender systems that are used for technology enhanced learning are analysed with respect to the proposed framework. Furthermore, the challenges encountered in the evolution of the field are discussed. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2025605741"
],
"abstract": [
"Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation and develops certain areas selected for past, present or future importance."
]
} |
1409.1461 | 2953278188 | Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data-driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of hyper-local n-grams that appear in the text. We explore the trade-off between accuracy, precision and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is best to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to geotag short social media texts, and offer implications for all applications that use data-driven approaches to locate content. | A related set of studies used information about geographic regions in geotagged social media to extract information and characterize geographic areas @cite_8 @cite_16 @cite_10 @cite_19 @cite_6 @cite_29 . | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_6",
"@cite_19",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"81018588",
"2099352442",
"2029550988",
"2103388840",
"1990128172"
],
"abstract": [
"",
"We propose a novel algorithm for uncovering the colloquial boundaries of locally characterizing regions present in collections of labeled geospatial data. We address the problem by first modeling the data using scale-space theory, allowing us to represent it simultaneously across different scales as a family of increasingly smoothed density distributions. We then derive region boundaries by applying localized label weighting and image processing techniques to the scale-space representation of each label. Important insights into the data can be acquired by visualizing the shape and size of the resulting boundaries for each label at multiple scales. We demonstrate our technique operating at scale by discovering the boundaries of the most geospatially salient tags associated with a large collection of georeferenced photos from Flickr and compare our characterizing regions that emerge from the data with those produced by a recent technique from the research literature.",
"Policy makers are calling for new socio-economic measures that reflect subjective well-being, to complement traditional measures of material welfare as the Gross Domestic Product (GDP). Self-reporting has been found to be reasonably accurate in measuring one's well-being and conveniently tallies with sentiment expressed on social media (e.g., those satisfied with life use more positive than negative words in their Facebook status updates). Social media content can thus be used to track well-being of individuals. A question left unexplored is whether such content can be used to track well-being of entire physical communities as well. To this end, we consider Twitter users based in a variety of London census communities, and study the relationship between sentiment expressed in tweets and community socio-economic well-being. We find that the two are highly correlated: the higher the normalized sentiment score of a community's tweets, the higher the community's socio-economic well-being. This suggests that monitoring tweets is an effective way of tracking community well-being too.",
"Many innovative location-based services have been established to offer users greater convenience in their everyday lives. These services usually cannot map user's physical locations into semantic names automatically. The semantic names of locations provide important context for mobile recommendations and advertisements. In this article, we proposed a novel location naming approach which can automatically provide semantic names for users given their locations and time. In particular, when a user opens a GPS device and submits a query with her physical location and time, she will be returned the most appropriate semantic name. In our approach, we drew an analogy between location naming and local search, and designed a local search framework to propose a spatiotemporal and user preference (STUP) model for location naming. STUP combined three components, user preference (UP), spatial preference (SP), and temporal preference (TP), by leveraging learning-to-rank techniques. We evaluated STUP on 466,190 check-ins of 5,805 users from Shanghai and 135,052 check-ins of 1,361 users from Beijing. The results showed that SP was most effective among three components and that UP can provide personalized semantic names, and thus it was a necessity for location naming. Although TP was not as discriminative as the others, it can still be beneficial when integrated with SP and UP. Finally, according to the experimental results, STUP outperformed the proposed baselines and returned accurate semantic names for 23.6% and 26.6% of the testing queries from Beijing and Shanghai, respectively.",
"We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale.",
"Despite their 140-character limitation, tweets embody a lot of valuable information, especially temporal and spatial. In this paper we study the geographic aspects of tweets, for a given object domain. We propose a user-level model for spatial encoding in tweets that goes beyond the explicit geo-coding or place name mentions; this model can be used to match objects to tweets. We illustrate our model and methodology using restaurants as the objects, and show a significant improvement in performance over using standard language models. En route, we obtain a method to geolocate users who tweet about geolocated objects; this may be of independent interest."
]
} |
1409.1461 | 2953278188 | Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data-driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of hyper-local n-grams that appear in the text. We explore the trade-off between accuracy, precision and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is best to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to geotag short social media texts, and offer implications for all applications that use data-driven approaches to locate content. | @cite_8 proposed a model that aggregates knowledge in the form of ``representative tags'' for arbitrary areas in the world by analyzing tags associated with the geo-referenced Flickr images. @cite_16 used Flickr to find relations between photos and popular places in which the photos were taken and showed how to find representative images for popular landmarks. Similarly, @cite_14 generate representative sets of images for landmarks using Flickr data. @cite_6 proposed applying sentiment analysis to geo-referenced tweets in London in order to find the areas of the city characterized by ``well being''. A recent review by Tasse @cite_0 listed other possible applications of social media for understanding urban areas. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_16"
],
"mid": [
"2119742615",
"",
"2099352442",
"830789897",
"2103388840"
],
"abstract": [
"Can we leverage the community-contributed collections of rich media on the web to automatically generate representative and diverse views of the world's landmarks? We use a combination of context- and content-based tools to generate representative sets of images for location-driven features and landmarks, a common search task. To do that, we use location and other metadata, as well as tags associated with images, and the images' visual features. We present an approach to extracting tags that represent landmarks. We show how to use unsupervised methods to extract representative views and images for each landmark. This approach can potentially scale to provide better search and representation for landmarks, worldwide. We evaluate the system in the context of image search using a real-life dataset of 110,000 images from the San Francisco area.",
"",
"Policy makers are calling for new socio-economic measures that reflect subjective well-being, to complement traditional measures of material welfare as the Gross Domestic Product (GDP). Self-reporting has been found to be reasonably accurate in measuring one's well-being and conveniently tallies with sentiment expressed on social media (e.g., those satisfied with life use more positive than negative words in their Facebook status updates). Social media content can thus be used to track well-being of individuals. A question left unexplored is whether such content can be used to track well-being of entire physical communities as well. To this end, we consider Twitter users based in a variety of London census communities, and study the relationship between sentiment expressed in tweets and community socio-economic well-being. We find that the two are highly correlated: the higher the normalized sentiment score of a community's tweets, the higher the community's socio-economic well-being. This suggests that monitoring tweets is an effective way of tracking community well-being too.",
"Understanding urban dynamics is crucial for a number of domains, but it can be expensive and time consuming to gather necessary data. The rapid rise of social media has given us a new and massive source of geotagged data that can be transformative in terms of how we understand our cities. In this position paper, we describe three opportunities in using geotagged social media data: to help city planners, to help small businesses, and to help individuals adapt to their city better. We also sketch some possible research projects to help map out the design space, as well as discuss some limitations and challenges in using this kind of data.",
"We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale."
]
} |
1409.1461 | 2953278188 | Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data- driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of hyper-local n-grams that appear in the text. We explore the trade-off between accuracy, precision and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is best to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to geotag short social media texts, and offer implications for all applications that use data-driven approaches to locate content. | Recent work by @cite_3 mapped users noisy check-ins on Foursquare to semantically extract meaningful suggestions from a database of known points of interest. In particular, by aggregating locations from geotagged check-ins, the authors were able to create geographic models for different venues using multi-dimensional Gaussian models. Earlier work from Flickr, Alpha Shapes code.flickr.net 2008 10 30 the-shape-of-alpha , modeled information available from geotagged images on Flickr to create geographic models for places like neighborhoods, towns, etc. Finally, @cite_23 not only explored point-of-interest mentions on Twitter but also connected them to the relative temporal values of the visits. | {
"cite_N": [
"@cite_23",
"@cite_3"
],
"mid": [
"1972338643",
"2026532078"
],
"abstract": [
"Twitter is a popular platform for sharing activities, plans, and opinions. Through tweets, users often reveal their location information and short term visiting plans. In this paper, we are interested in extracting fine-grained locations mentioned in tweets with temporal awareness. More specifically, we would like to extract each point-of-interest (POI) mention in a tweet and predict whether the user has visited, is currently at, or will soon visit this POI. Our proposed solution, named PETAR, consists of two main components: a POI inventory and a time-aware POI tagger. The POI inventory is built by exploiting the crowd wisdom of the Foursquare community. It contains not only the formal names of POIs but also the informal abbreviations. The POI tagger, based on a Conditional Random Field (CRF) model, is designed to simultaneously identify the POIs and resolve their associated temporal awareness. In our experiments, we investigated four types of features (i.e., lexical, grammatical, geographical, and BILOU schema features) for time-aware POI extraction. With the four types of features, PETAR achieves promising extraction accuracy and outperforms all baseline methods.",
"In this article we consider the problem of mapping a noisy estimate of a user's current location to a semantically meaningful point of interest, such as a home, restaurant, or store. Despite the poor accuracy of GPS on current mobile devices and the relatively high density of places in urban areas, it is possible to predict a user's location with considerable precision by explicitly modeling both places and users and by combining a variety of signals about a user's current context. Places are often simply modeled as a single latitude and longitude when in fact they are complex entities existing in both space and time and shaped by the millions of people that interact with them. Similarly, models of users reveal complex but predictable patterns of mobility that can be exploited for this task. We propose a novel spatial search algorithm that infers a user's location by combining aggregate signals mined from billions of foursquare check-ins with real-time contextual information. We evaluate a variety of techniques and demonstrate that machine learning algorithms for ranking and spatiotemporal models of places and users offer significant improvement over common methods for location search based on distance and popularity."
]
} |
1409.1730 | 2057577642 | Defining an optimal protection strategy against viruses, spam propagation, or any other kind of contamination process is an important feature for designing new networks and architectures. In this paper, we consider decentralized optimal protection strategies when a virus is propagating over a network through an SIS epidemic process. We assume that each node in the network can fully protect itself from infection at a constant cost, or the node can use recovery software, once it is infected. We model our system using a game-theoretic framework and find pure, mixed equilibria, and the Price of Anarchy in several network topologies. Further, we propose a decentralized algorithm and an iterative procedure to compute a pure equilibrium in the general case of a multiple communities network. Finally, we evaluate the algorithms and give numerical illustrations of all our results. | Virus spread processes in networks have been studied in the past @cite_44 @cite_38 @cite_39 @cite_19 , usually considering the number of infected nodes @cite_44 over the time and in stationary regimes, the epidemic threshold @cite_38 or the relation with eigenvalues @cite_31 . One of the widely explored Susceptible Infected Susceptible (SIS) approximations is the N-intertwined mean-field approximation NIMFA @cite_44 @cite_45 . | {
"cite_N": [
"@cite_38",
"@cite_39",
"@cite_44",
"@cite_19",
"@cite_45",
"@cite_31"
],
"mid": [
"2112494680",
"1914027636",
"",
"2124486211",
"2040330999",
"2002359723"
],
"abstract": [
"How will a virus propagate in a real network? How long does it take to disinfect a network given particular values of infection rate and virus death rate? What is the single best node to immunize? Answering these questions is essential for devising network-wide strategies to counter viruses. In addition, viral propagation is very similar in principle to the spread of rumors, information, and “fads,” implying that the solutions for viral propagation would also offer insights into these other problem settings. We answer these questions by developing a nonlinear dynamical system (NLDS) that accurately models viral propagation in any arbitrary network, including real and synthesized network graphs. We propose a general epidemic threshold condition for the NLDS system: we prove that the epidemic threshold for a network is exactly the inverse of the largest eigenvalue of its adjacency matrix. Finally, we show that below the epidemic threshold, infections die out at an exponential rate. Our epidemic threshold model subsumes many known thresholds for special-case graphs (e.g., Erdos--Renyi, BA powerlaw, homogeneous). We demonstrate the predictive power of our model with extensive experiments on real and synthesized graphs, and show that our threshold condition holds for arbitrary graphs. Finally, we show how to utilize our threshold condition for practical uses: It can dictate which nodes to immunize; it can assess the effects of a throttling policy; it can help us design network topologies so that they are more resistant to viruses.",
"Many network phenomena are well modeled as spreads of epidemics through a network. Prominent examples include the spread of worms and email viruses, and, more generally, faults. Many types of information dissemination can also be modeled as spreads of epidemics. In this paper we address the question of what makes an epidemic either weak or potent. More precisely, we identify topological properties of the graph that determine the persistence of epidemics. In particular, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, then the mean epidemic lifetime is of order log n, where n is the number of nodes. Conversely, if this ratio is smaller than a generalization of the isoperimetric constant of the graph, then the mean epidemic lifetime is of order e^{n^a}, for a positive constant a. We apply these results to several network topologies including the hypercube, which is a representative connectivity graph for a distributed hash table, the complete graph, which is an important connectivity graph for BGP, and the power law graph, of which the AS-level Internet graph is a prime example. We also study the star topology and the Erdos-Renyi graph as their epidemic spreading behaviors determine the spreading behavior of power law graphs.",
"",
"The strong analogy between biological viruses and their computational counterparts has motivated the authors to adapt the techniques of mathematical epidemiology to the study of computer virus propagation. In order to allow for the most general patterns of program sharing, a standard epidemiological model is extended by placing it on a directed graph and a combination of analysis and simulation is used to study its behavior. The conditions under which epidemics are likely to occur are determined, and, in cases where they do, the dynamics of the expected number of infected individuals are examined as a function of time. It is concluded that an imperfect defense against computer viruses can still be highly effective in preventing their widespread proliferation, provided that the infection rate does not exceed a well-defined critical epidemic threshold.",
"Besides the epidemic threshold, the recently proposed viral conductance ψ by [11] may be regarded as an additional characterizer of the viral robustness of a network, that measures the overall ease in which viruses can spread in a particular network. Motivated to explain observed features of the viral conductance ψ in simulations [29], we have analysed this metric in depth using the N-intertwined SIS epidemic model, that upper bounds the real infection probability in any network and, hence, provides safe-side bounds on which network protection can be based. Our study here derives a few exact results for ψ, a number of different lower and upper bounds for ψ with variable accuracy. We also extend the theory of the N-intertwined SIS epidemic model, by deducing formal series expansions of the steady-state fraction of infected nodes for any graph and any effective infection rate, that result in a series for the viral conductance ψ. Though approximate, we illustrate here that the N-intertwined SIS epidemic model is so far the only SIS model on networks that is analytically tractable, and valuable to provide first order estimates of the epidemic impact in networks. Finally, inspired by the analogy between virus spread and synchronization of coupled oscillators in a network, we propose the synchronizability as the analogue of the viral conductance.",
"By making only one approximation of a mean-field type, we determine the nature of the SIS type of epidemic phase transition in any network: the steady-state fraction of infected nodes y∞ is linear in (τ τ_c^{-1} − 1) for effective infection rates τ ↓ τ_c, the derivative of y∞ at the epidemic threshold τ_c = 1/λ_1 is exactly computed and depends on the largest eigenvalue λ_1 of the adjacency matrix and on the first- and third-order moments of the corresponding eigenvector. Since coupled oscillators in a network synchronize at a coupling strength proportional to 1/λ_1, a similar characterization of the phase transition is suggested. The behavior of y∞ around τ_c was the missing part in the general steady-state theory of a SIS-type epidemic on a network. Copyright © EPLA, 2012"
]
} |
1409.1730 | 2057577642 | Defining an optimal protection strategy against viruses, spam propagation, or any other kind of contamination process is an important feature for designing new networks and architectures. In this paper, we consider decentralized optimal protection strategies when a virus is propagating over a network through an SIS epidemic process. We assume that each node in the network can fully protect itself from infection at a constant cost, or the node can use recovery software, once it is infected. We model our system using a game-theoretic framework and find pure, mixed equilibria, and the Price of Anarchy in several network topologies. Further, we propose a decentralized algorithm and an iterative procedure to compute a pure equilibrium in the general case of a multiple communities network. Finally, we evaluate the algorithms and give numerical illustrations of all our results. | Game theoretical studies for network problems have been conducted, in routing @cite_34 @cite_50 , network flow @cite_6 , workload on the cloud @cite_2 or optimal network design @cite_16 @cite_4 , employing standard game-theoretic concepts @cite_33 @cite_10 such as pure Nash or mixed equilibrium. The Price of Anarchy (PoA) @cite_5 @cite_10 is often used as an equilibrium performance evaluation metric. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_34",
"@cite_6",
"@cite_50",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"1987511126",
"",
"2113692632",
"2156816884",
"2155361616",
"2094330223",
"",
"2009106897",
"2112269231"
],
"abstract": [
"The effect of virus spreading in a telecommunication network, where a certain curing strategy is deployed, can be captured by epidemic models. In the N-intertwined model proposed and studied in [1], [2], the probability of each node to be infected depends on the curing and infection rate of its neighbors. In this paper, we consider the case where all infection rates are equal and different values of curing rates can be deployed within a given budget, in order to minimize the overall infection of the network. We investigate this difficult optimization together with a related problem where the curing budget must be minimized within a given level of network infection. Some properties of these problems are derived and several solution algorithms are proposed. These algorithms are compared on two real world network instances, while Erdos-Renyi graphs and some special graphs such as the cycle, the star, the wheel and the complete bipartite graph are also addressed.",
"",
"The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions.",
"The existence of Nash equilibria in noncooperative flow control in a general product-form network shared by K users is investigated. The performance objective of each user is to maximize its average throughput subject to an upper bound on its average time-delay. Previous attempts to study existence of equilibria for this flow control model were not successful, partly because the time-delay constraints couple the strategy spaces of the individual users in a way that does not allow the application of standard equilibrium existence theorems from the game theory literature. To overcome this difficulty, a more general approach to study the existence of Nash equilibria for decentralized control schemes is introduced. This approach is based on directly proving the existence of a fixed point of the best reply correspondence of the underlying game. For the investigated flow control model, the best reply correspondence is shown to be a function, implicitly defined by means of K interdependent linear programs. Employing an appropriate definition for continuity of the set of optimal solutions of parameterized linear programs, it is shown that, under appropriate conditions, the best reply function is continuous. Brouwer's theorem implies, then, that the best reply function has a fixed point.",
"We study a class of noncooperative general topology networks shared by N users. Each user has a given flow which it has to ship from a source to a destination. We consider a class of polynomial link cost functions, adopted originally in the context of road traffic modeling, and show that these costs have appealing properties that lead to predictable and efficient network flows. In particular, we show that the Nash equilibrium is unique, and is moreover efficient, i.e., it coincides with the solution of a corresponding global optimization problem with a single user. These properties make the cost structure attractive for traffic regulation and link pricing in telecommunication networks. We finally discuss the computation of the equilibrium in the special case of the affine cost structure for a topology of parallel links.",
"Cloud computing is an emerging paradigm in which tasks are assigned to a combination (“cloud”) of servers and devices, accessed over a network. Typically, the cloud constitutes an additional means of computation and a user can perform workload factoring, i.e., split its load between the cloud and its other resources. Based on empirical data, we demonstrate that there is an intrinsic relation between the “benefit” that a user perceives from the cloud and the usage pattern followed by other users. This gives rise to a non-cooperative game, which we model and investigate. We show that the considered game admits a Nash equilibrium. Moreover, we show that this equilibrium is unique. We investigate the “price of anarchy” of the game and show that, while in some cases of interest the Nash equilibrium coincides with a social optimum, in other cases the gap can be arbitrarily large. We show that, somewhat counter-intuitively, exercising admission control to the cloud may deteriorate its performance. Furthermore, we demonstrate that certain (heavy) users may “scare off” other, potentially large, communities of users. Accordingly, we propose a resource allocation scheme that addresses this problem and opens the cloud to a wide range of user types.",
"",
"We study the performance of noncooperative networks in light of three major topology design considerations, namely the price of establishing a link, path delay, and path proneness to congestion, the latter being modeled through the \"relaying extent\" of the nodes. We analyze these considerations and the tradeoffs between them from a game-theoretic perspective, where each network element attempts to optimize its individual performance. We show that for all considered cases but one, the existence of a Nash equilibrium point is guaranteed. For the latter case, we indicate, by simulations, that practical scenarios tend to admit a Nash equilibrium. In addition, we demonstrate that the price of anarchy, i.e., the performance penalty incurred by noncooperative behavior, may be prohibitively large; yet, we also show that such games usually admit at least one Nash equilibrium that is system-wide optimal, i.e., their price of stability is 1. This finding suggests that a major improvement can be achieved by providing a central (\"social\") agent with the ability to impose the initial configuration on the system.",
"We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times---the total latency---is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a \"selfishly motivated\" assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total latency of the routes chosen by unregulated selfish network users may be arbitrarily larger than the minimum possible total latency; however, we prove that it is no more than the total latency incurred by optimally routing twice as much traffic."
]
} |
1409.1730 | 2057577642 | Defining an optimal protection strategy against viruses, spam propagation, or any other kind of contamination process is an important feature for designing new networks and architectures. In this paper, we consider decentralized optimal protection strategies when a virus is propagating over a network through an SIS epidemic process. We assume that each node in the network can fully protect itself from infection at a constant cost, or the node can use recovery software, once it is infected. We model our system using a game-theoretic framework and find pure, mixed equilibria, and the Price of Anarchy in several network topologies. Further, we propose a decentralized algorithm and an iterative procedure to compute a pure equilibrium in the general case of a multiple communities network. Finally, we evaluate the algorithms and give numerical illustrations of all our results. | Game theory has been used in several studies @cite_25 @cite_22 @cite_15 @cite_48 @cite_18 @cite_29 @cite_1 @cite_37 @cite_43 @cite_12 related to epidemic protection or curing, for example, in a generalized game settings @cite_18 without considering the infection state of the neighbors; by assigning nodal weights to reflect the security level @cite_48 etc. Omi ' c @cite_25 tune the strength of the nodal antivirus protection i.e. how big those (different) @math should be taken. Contrarily to @cite_25 , (i) we fix the curing and infection rates, which are not part of the game, and the decision consists of a player's choice to invest in an antivirus or not; (ii) we also consider mixed strategies Nash Equilibrium and (iii) propose a convergence algorithm to the equilibrium point. The goal of @cite_25 is in finding the optimal curing rates @math for each player @math , while this paper targets the optimal decision of taking an anti-virus that fully protects the host, because today's antivirus software packages provide accurate and up-to-date virus protection. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_22",
"@cite_48",
"@cite_29",
"@cite_1",
"@cite_43",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2078755664",
"1787828156",
"2025392770",
"2119514226",
"2149705024",
"2086916865",
"2152980518",
"1520761104",
"2169017746",
"2398538542"
],
"abstract": [
"Getting new security features and protocols to be widely adopted and deployed in the Internet has been a continuing challenge. There are several reasons for this, in particular economic reasons arising from the presence of network externalities. Indeed, like the Internet itself, the technologies to secure it exhibit network effects: their value to individual users changes as other users decide to adopt them or not. In particular, the benefits felt by early adopters of security solutions might fall significantly below the cost of adoption, making it difficult for those solutions to gain traction and get deployed at a large scale. Our goal in this paper is to model and quantify the impact of such externalities on the adoptability and deployment of security features and protocols in the Internet. We study a network of interconnected agents, which are subject to epidemic risks such as those caused by propagating viruses and worms, and which can decide whether or not to invest some amount to deploy security solutions. Agents experience negative externalities from other agents, as the risks faced by an agent depend not only on the choices of that agent (whether or not to invest in self-protection), but also on those of the other agents. Expectations about choices made by other agents then influence investments in self-protection, resulting in a possibly suboptimal outcome overall. We present and solve an analytical model where the agents are connected according to a variety of network topologies. Borrowing ideas and techniques used in statistical physics, we derive analytic solutions for sparse random graphs, for which we obtain asymptotic results. We show that we can explicitly identify the impact of network externalities on the adoptability and deployment of security features. In other words, we identify both the economic and network properties that determine the adoption of security technologies. Therefore, we expect our results to provide useful guidance for the design of new economic mechanisms and for the development of network protocols likely to be deployed at a large scale.",
"Epidemic outbreaks in human populations are facilitated by the underlying transportation network. We consider strategies for containing a viral spreading process by optimally allocating a limited budget to three types of protection resources: (i) traffic control resources, (ii) preventative resources, and (iii) corrective resources. Traffic control resources are employed to impose restrictions on the traffic flowing across directed edges in the transportation network. Preventative resources are allocated to nodes to reduce the probability of infection at that node (e.g. vaccines), and corrective resources are allocated to nodes to increase the recovery rate at that node (e.g. antidotes). We assume these resources have monetary costs associated with them, from which we formalize an optimal budget allocation problem which maximizes containment of the infection. We present a polynomial time solution to the optimal budget allocation problem using Geometric Programming (GP) for an arbitrary weighted and directed contact network and a large class of resource cost functions. We illustrate our approach by designing optimal traffic control strategies to contain an epidemic outbreak that propagates through a real-world air transportation network.",
"We propose a simple game for modeling containment of the spread of viruses in a graph of n nodes. Each node must choose to either install anti-virus software at some known cost C, or risk infection and a loss L if a virus that starts at a random initial point in the graph can reach it without being stopped by some intermediate node. We prove many game theoretic properties of the model, including an easily applied characterization of Nash equilibria, culminating in our showing that a centralized solution can give a much better total cost than an equilibrium solution. Though it is NP-hard to compute such a social optimum, we show that the problem can be reduced to a previously unconsidered combinatorial problem that we call the sum-of-squares partition problem. Using a greedy algorithm based on sparse cuts, we show that this problem can be approximated to within a factor of O(log^1.5 n).",
"Internet security does not only depend on the security-related investments of individual users, but also on how these users affect each other. In a non-cooperative environment, each user chooses a level of investment to minimize its own security risk plus the cost of investment. Not surprisingly, this selfish behavior often results in undesirable security degradation of the overall system. In this paper, we first characterize the price of anarchy (POA) of network security under two models: an \"Effective-investment\" model, and a \"Bad-traffic\" model. We give insight on how the POA depends on the network topology, individual users' cost functions, and their mutual influence. We also introduce the concept of \"weighted POA\" to bound the region of all feasible payoffs. In a repeated game, on the other hand, users have more incentive to cooperate for their long term interests. We consider the socially best outcome that can be supported by the repeated game, and give a ratio between this outcome and the social optimum. Although the paper focuses on Internet security, many results are generally applicable to games with positive externalities.",
"Inspired by events ranging from 9/11 to the collapse of the accounting firm Arthur Andersen, economists Kunreuther and Heal [5] recently introduced an interesting game-theoretic model for problems of interdependent security (IDS), in which a large number of players must make individual investment decisions related to security — whether physical, financial, medical, or some other type — but in which the ultimate safety of each participant may depend in a complex way on the actions of the entire population. A simple example is the choice of whether to install a fire sprinkler system in an individual condominium in a large building. While such a system might greatly reduce the chances of the owner’s property being destroyed by a fire originating within their own unit, it might do little or nothing to reduce the chances of damage caused by fires originating in other units (since sprinklers can usually only douse small fires early). If “enough” other unit owners have not made the investment in sprinklers, it may not be cost-effective for any individual to do so.",
"We consider the problem of controlling the propagation of an epidemic outbreak in an arbitrary contact network by distributing vaccination resources throughout the network. We analyze a networked version of the Susceptible-Infected-Susceptible (SIS) epidemic model when individuals in the network present different levels of susceptibility to the epidemic. In this context, controlling the spread of an epidemic outbreak can be written as a spectral condition involving the eigenvalues of a matrix that depends on the network structure and the parameters of the model. We study the problem of finding the optimal distribution of vaccines throughout the network to control the spread of an epidemic outbreak. We propose a convex framework to find cost-optimal distribution of vaccination resources when different levels of vaccination are allowed. We illustrate our approaches with numerical simulations in a real social network.",
"An epidemic spreading in a network calls for a decision on the part of the network members: They should decide whether to protect themselves or not. Their decision depends on the trade off between their perceived risk of being infected and the cost of being protected. The network members can make decisions repeatedly, based on information that they receive about the changing infection level in the network. We study the equilibrium states reached by a network whose members increase (resp. decrease) their security deployment when learning that the network infection is higher (resp. lower). Our main result is that as the learning rate of the members increases, the equilibrium level of infection increases. We demonstrate this result both when members are strictly rational and when they are not. We characterize the domains of attraction of the equilibrium points. We validate our conclusions with simulations on human mobility traces.",
"This paper develops a theoretical model of investments in security in a network of interconnected agents. The network connections introduce the possibility of cascading failures depending on exogenous or endogenous attacks and the profile of security investments by the agents. The general presumption in the literature, based on intuitive arguments or analysis of symmetric networks, is that because security investments create positive externalities on other agents, there will be underinvestment in security. We show that this reasoning is incomplete because of a first-order economic force: security investments are also strategic substitutes. In a general (non-symmetric) network, this implies that underinvestment by some agents will encourage overinvestment by others. We demonstrate by means of examples that not only there will be overinvestment by some agents but also aggregate probabilities of infection can be lower in equilibrium than in the social optimum. We then provide sufficient conditions for underinvestment. This requires both sufficiently convex cost functions (just convexity is not enough) and networks that are either symmetric or locally tree-like (i.e., either trees or in the case of stochastic networks, without local cycles with high probability). We also characterize the impact of network structure on equilibrium and optimal investments. Finally, we show that when the attack location is endogenized (by assuming that the attacker chooses a probability distribution over the location of the attack in order to maximize damage), there is another reason for overinvestment: greater investment by an agent shifts the attack to other parts of the network.",
"Security breaches and attacks are critical problems in today’s networking. A key-point is that the security of each host depends not only on the protection strategies it chooses to adopt but also on those chosen by other hosts in the network. The spread of Internet worms and viruses is only one example. This class of problems has two aspects. First, it deals with epidemic processes, and as such calls for the employment of epidemic theory. Second, the distributed and autonomous nature of decision-making in major classes of networks (e.g., P2P, adhoc, and most notably the Internet) call for the employment of game theoretical approaches. Accordingly, we propose a unified framework that combines the N-intertwined, SIS epidemic model with a noncooperative game model. We determine the existence of a Nash equilibrium of the respective game and characterize its properties. We show that its quality, in terms of overall network security, largely depends on the underlying topology. We then provide a bound on the level of system inefficiency due to the noncooperative behavior, namely, the “price of anarchy” of the game. We observe that the price of anarchy may be prohibitively high, hence we propose a scheme for steering users towards socially efficient behavior.",
"The spread of epidemics and malware is commonly modeled by diffusion processes on networks. Protective interventions such as vaccinations or installing anti-virus software are used to contain their spread. Typically, each node in the network has to decide its own strategy of securing itself, and its benefit depends on which other nodes are secure, making this a natural game-theoretic setting. There has been a lot of work on network security game models, but most of the focus has been either on simplified epidemic models or homogeneous network structure. We develop a new formulation for an epidemic containment game, which relies on the characterization of the SIS model in terms of the spectral radius of the network. We show in this model that pure Nash equilibria (NE) always exist, and can be found by a best response strategy. We analyze the complexity of finding NE, and derive rigorous bounds on their costs and the Price of Anarchy or PoA (the ratio of the cost of the worst NE to the optimum social cost) in general graphs as well as in random graph models. In particular, for arbitrary power-law graphs with exponent β > 2, we show that the PoA is bounded by O(T^(2(β-1))), where T = γ/α is the ratio of the recovery rate to the transmission rate in the SIS model. We prove that this bound is tight up to a constant factor for the Chung-Lu random power-law graph model. We study the characteristics of Nash equilibria empirically in different real communication and infrastructure networks, and find that our analytical results can help explain some of the empirical observations."
]
} |
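Several of the abstracts above characterize SIS epidemic containment through the spectral radius of the contact network: an outbreak decays (in the standard mean-field approximation) when the effective infection rate β/γ falls below 1/λmax(A). A minimal sketch of that threshold check follows; the triangle graph and the rates are invented for illustration, and the power-iteration estimate is only adequate for small demo graphs like this one.

```python
def spectral_radius(adj, iters=100):
    """Estimate the largest eigenvalue of a symmetric, non-negative
    adjacency matrix by power iteration (adequate for this small demo)."""
    n = len(adj)
    v = [1.0] * n
    norm = 1.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return norm

def outbreak_dies_out(adj, beta, gamma):
    """Mean-field SIS threshold: the infection decays when
    beta * lambda_max(A) / gamma < 1."""
    return beta * spectral_radius(adj) / gamma < 1.0

# Toy triangle network (an assumed example): lambda_max = 2,
# so the threshold is beta/gamma < 0.5.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(outbreak_dies_out(triangle, beta=0.4, gamma=1.0))  # True (below threshold)
print(outbreak_dies_out(triangle, beta=0.6, gamma=1.0))  # False (above threshold)
```

This is the condition that the vaccination and containment-game abstracts optimize against: lowering some nodes' susceptibility (or removing edges) reduces λmax until the inequality holds.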
1409.1636 | 1616165289 | In data warehousing, Extract-Transform-Load (ETL) regularly extracts data from data sources into a central data warehouse to support business decision-making. The data from transaction processing systems are characterized by highly frequent insertions, updates, and deletions. It is challenging for ETL to propagate these changes to the data warehouse and to maintain the change history. Moreover, ETL jobs typically run in a sequential order when processing data with dependencies, which is not optimal, e.g., when processing early-arriving data. In this paper, we propose a two-level data staging ETL for handling transaction data. The proposed method detects the changes of the data from transactional processing systems, identifies the corresponding operation codes for the changes, and uses two staging databases to facilitate the data processing in an ETL process. The proposed ETL provides a "one-stop" method for fast-changing, slowly-changing and early-arriving data processing. | Optimizing ETL is a time-consuming process, but it is essential to ensure that ETL jobs complete within specific time frames. For ETL optimization, Simitsis et al. propose a theoretical framework @cite_27 @cite_6 @cite_25 that formalizes the ETL state space as a directed acyclic graph (DAG) and then searches it for the execution plan with the lowest time cost. Tziovara et al. propose an approach that optimizes ETL based on an input logical ETL template @cite_8 . In @cite_12 , Li and Zhan analyze the task dependencies in an ETL workflow, and optimize the workflow by parallelizing the tasks without dependencies. Behrend and Jörg use a rule-based approach to optimize ETL flows @cite_20 , where the rules are generated based on algebraic equivalences. Our approach is an ETL method, but it can also be used to optimize ETL when solving the early-arriving data problem, in which no dependencies need to be considered. | {
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_27",
"@cite_20",
"@cite_25",
"@cite_12"
],
"mid": [
"2020829916",
"2149877754",
"2096347930",
"2003374655",
"2158643472",
"2184279421"
],
"abstract": [
"In this paper, we deal with the problem of determining the best possible physical implementation of an ETL workflow, given its logical-level description and an appropriate cost model as inputs. We formulate the problem as a state-space problem and provide a suitable solution for this task. We further extend this technique by intentionally introducing sorter activities in the workflow in order to search for alternative physical implementations with lower cost. We experimentally assess our method based on a principled organization of test suites.",
"Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. Usually, these processes must be completed in a certain time window; thus, it is necessary to optimize their execution time. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide algorithms towards the minimization of the execution cost of an ETL workflow.",
"Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide an exhaustive and two heuristic algorithms toward the minimization of the execution cost of an ETL workflow. The heuristic algorithm with greedy characteristics significantly outperforms the other two algorithms for a large set of experimental cases.",
"ETL jobs are used to integrate data from distributed and heterogeneous sources into a data warehouse. A well-known challenge in this context is the development of incremental ETL jobs for efficiently maintaining warehouse data in the presence of source data updates. In this paper, we present a new transformation-based approach to automatically derive incremental ETL jobs. To this end, we consider a simplification of the underlying update propagation process based on the computation of so-called safe updates instead of true ones. Additionally, we identify the limitations of already proposed incremental solutions, which are cured by employing Magic Sets leading to dramatic performance gains.",
"As business intelligence becomes increasingly essential for organizations and as it evolves from strategic to operational, the complexity of Extract-Transform-Load (ETL) processes grows. In consequence, ETL engagements have become very time consuming, labor intensive, and costly. At the same time, additional requirements besides functionality and performance need to be considered in the design of ETL processes. In particular, the design quality needs to be determined by an intricate combination of different metrics like reliability, maintenance, scalability, and others. Unfortunately, there are no methodologies, modeling languages or tools to support ETL design in a systematic, formal way for achieving these quality requirements. The current practice handles them with ad-hoc approaches only based on designers' experience. This results in either poor designs that do not meet the quality objectives or costly engagements that require several iterations to meet them. A fundamental shift that uses automation in the ETL design task is the only way to reduce the cost of these engagements while obtaining optimal designs. Towards this goal, we present a novel approach to ETL design that incorporates a suite of quality metrics, termed QoX, at all stages of the design process. We discuss the challenges and tradeoffs among QoX metrics and illustrate their impact on alternative designs.",
"Approaches to shorten workflow execution time have been discussed in many areas of computer engineering, such as parallel and distributed systems, computer circuits, and PERT charts for project management. To optimize the model structure of a workflow, an approach with corresponding algorithms is proposed to cut the timed critical path of a workflow schema, i.e., the path with the longest average execution time from the start activity to the end activity. Through systematically analyzing the dependency relationships between tasks at build-time, the traditional critical-path optimization method is improved by adding selective and parallel control structures into workflow schemas. Data dependency rules are converted to control dependency rules according to mined semantic rules. Furthermore, consistency between tasks is guaranteed. Finally, to demonstrate the validity of the proposed algorithm, an experiment is provided that compares the optimized model with the original using a critical path identification algorithm. (Nature and Science. 2005;3(2):65-74)."
]
} |
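The related-work passage above treats an ETL workflow as a DAG of tasks, where tasks without mutual dependencies can be parallelized (the Li and Zhan approach). That scheduling idea can be sketched with Python's standard-library `graphlib`; the task names and dependencies below are hypothetical, not taken from any cited system.

```python
from graphlib import TopologicalSorter

# Hypothetical ETL task DAG: each task maps to the set of tasks it depends on.
etl_dag = {
    "extract_orders":    set(),
    "extract_customers": set(),
    "clean_orders":      {"extract_orders"},
    "clean_customers":   {"extract_customers"},
    "join":              {"clean_orders", "clean_customers"},
    "load_warehouse":    {"join"},
}

def parallel_schedule(dag):
    """Group tasks into waves: every task in a wave has all of its
    dependencies satisfied by earlier waves, so each wave can run
    in parallel while waves execute sequentially."""
    ts = TopologicalSorter(dag)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # all currently unblocked tasks
        waves.append(ready)
        ts.done(*ready)
    return waves

for i, wave in enumerate(parallel_schedule(etl_dag), start=1):
    print(f"wave {i}: {wave}")
```

With these invented dependencies, the two extracts form wave 1, the two cleaning steps wave 2, then the join and finally the load; a purely sequential runner would instead serialize all six tasks.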
1409.1636 | 1616165289 | In data warehousing, Extract-Transform-Load (ETL) regularly extracts data from data sources into a central data warehouse to support business decision-making. The data from transaction processing systems are characterized by highly frequent insertions, updates, and deletions. It is challenging for ETL to propagate these changes to the data warehouse and to maintain the change history. Moreover, ETL jobs typically run in a sequential order when processing data with dependencies, which is not optimal, e.g., when processing early-arriving data. In this paper, we propose a two-level data staging ETL for handling transaction data. The proposed method detects the changes of the data from transactional processing systems, identifies the corresponding operation codes for the changes, and uses two staging databases to facilitate the data processing in an ETL process. The proposed ETL provides a "one-stop" method for fast-changing, slowly-changing and early-arriving data processing. | The latest trend in data warehousing is to support big data and to offer real-time/right-time capability @cite_5 @cite_22 . The emergence of cloud computing technologies, such as MapReduce @cite_24 , makes it feasible for ETL to process large-scale data on many nodes. As evidence, the two open-source MapReduce-based systems, Pig @cite_4 and Hive @cite_11 , are increasingly used in data warehousing. However, both are designed as general-purpose big data analytics systems with limited ETL capabilities, behaving more like DBMSs than full-fledged ETL tools. To complement this, our previous work, ETLMR @cite_28 @cite_3 , extends the ETL programming framework @cite_18 @cite_14 using MapReduce while maintaining its simplicity in implementing a parallel dimensional ETL program. In addition, the framework @cite_2 is proposed to improve the dimensional ETL capability of Hive.
The difference is that ETLMR targets a traditional RDBMS-based data warehousing system, while the latter uses Hive for better scalability. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_28",
"@cite_3",
"@cite_24",
"@cite_2",
"@cite_5",
"@cite_11"
],
"mid": [
"2973379277",
"2078783226",
"2098935637",
"2107760270",
"2170634549",
"1982583191",
"2173213060",
"2022678827",
"340880505",
"2110086534"
],
"abstract": [
"The industrial use of open source Business Intelligence (BI) tools is becoming more common, but is still not as widespread as for other types of software. It is therefore of interest to explore which possibilities are available for open source BI and compare the tools. In this survey article, we consider the capabilities of a number of open source tools for BI. In the article, we consider a number of Extract-Transform-Load (ETL) tools, database management systems (DBMSs), On-Line Analytical Processing (OLAP) servers, and OLAP clients. We find that, unlike the situation a few years ago, there now exist mature and powerful tools in all these categories. However, the functionality still falls somewhat short of that found in commercial tools.",
"Extract-Transform-Load (ETL) programs are used to load data into data warehouses (DWs). An ETL program must extract data from sources, apply different transformations to it, and use the DW to look up insert the data. It is both time consuming to develop and to run an ETL program. It is, however, typically the case that the ETL program can exploit both task parallelism and data parallelism to run faster. This, on the other hand, makes the development time longer as it is complex to create a parallel ETL program. To remedy this situation, we propose efficient ways to parallelize typical ETL tasks and we implement these new constructs in an ETL framework. The constructs are easy to apply and do only require few modifications to an ETL program to parallelize it. They support both task and data parallelism and give the programmer different possibilities to choose from. An experimental evaluation shows that by using a little more CPU time, the (wall-clock) time to run an ETL program can be greatly reduced.",
"There is a growing need for ad-hoc analysis of extremely large data sets, especially at internet companies where innovation critically depends on being able to analyze terabytes of data collected every day. Parallel database products, e.g., Teradata, offer a solution, but are usually prohibitively expensive at this scale. Besides, many of the people who analyze this data are entrenched procedural programmers, who find the declarative, SQL style to be unnatural. The success of the more procedural map-reduce programming model, and its associated scalable implementations on commodity hardware, is evidence of the above. However, the map-reduce paradigm is too low-level and rigid, and leads to a great deal of custom user code that is hard to maintain, and reuse. We describe a new language called Pig Latin that we have designed to fit in a sweet spot between the declarative style of SQL, and the low-level, procedural style of map-reduce. The accompanying system, Pig, is fully implemented, and compiles Pig Latin into physical plans that are executed over Hadoop, an open-source, map-reduce implementation. We give a few examples of how engineers at Yahoo! are using Pig to dramatically reduce the time required for the development and execution of their data analysis tasks, compared to using Hadoop directly. We also report on a novel debugging environment that comes integrated with Pig, that can lead to even higher productivity gains. Pig is an open-source, Apache-incubator project, and available for general use.",
"Data warehouses (DWs) have traditionally been loaded with data at regular time intervals, e.g., monthly, weekly, or daily, using fast bulk loading techniques. Recently, the trend is to insert all (or only some) new source data very quickly into DWs, called near-realtime DWs (right-time DWs). This is done using regular INSERT statements, resulting in too low insert speeds. There is thus a great need for a solution that makes inserted data available quickly, while still providing bulk-load insert speeds. This paper presents RiTE (\"Right-Time ETL\"), a middleware system that provides exactly that. A data producer (ETL) can insert data that becomes available to data consumers on demand. RiTE includes an innovative main-memory based catalyst that provides fast storage and offers concurrency control. A number of policies controlling the bulk movement of data based on user requirements for persistency, availability, freshness, etc. are supported. The system works transparently to both producer and consumers. The system is integrated with an open source DBMS, and experiments show that it provides \"the best of both worlds\", i.e., INSERT-like data availability, but with bulk-load speeds (up to 10 times faster).",
"Extract-Transform-Load (ETL) flows periodically populate data warehouses (DWs) with data from different source systems. An increasing challenge for ETL flows is processing huge volumes of data quickly. MapReduce is establishing itself as the de-facto standard for large-scale data-intensive processing. However, MapReduce lacks support for high-level ETL specific constructs, resulting in low ETL programmer productivity. This paper presents a scalable dimensional ETL framework, ETLMR, based on MapReduce. ETLMR has built-in native support for operations on DW-specific constructs such as star schemas, snowflake schemas and slowly changing dimensions (SCDs). This enables ETL developers to construct scalable MapReduce-based ETL flows with very few code lines. To achieve good performance and load balancing, a number of dimension and fact processing schemes are presented, including techniques for efficiently processing different types of dimensions. The paper describes the integration of ETLMR with a MapReduce framework and evaluates its performance on large realistic data sets. The experimental results show that ETLMR achieves very good scalability and compares favourably with other MapReduce data warehousing tools.",
"This paper demonstrates ETLMR, a novel dimensional Extract--Transform--Load (ETL) programming framework that uses Map-Reduce to achieve scalability. ETLMR has built-in native support of data warehouse (DW) specific constructs such as star schemas, snowflake schemas, and slowly changing dimensions (SCDs). This makes it possible to build MapReduce-based dimensional ETL flows very easily. The ETL process can be configured with only few lines of code. We will demonstrate the concrete steps in using ETLMR to load data into a (partly snowflaked) DW schema. This includes configuration of data sources and targets, dimension processing schemes, fact processing, and deployment. In addition, we also present the scalability on large data sets.",
"MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.",
"Extract-Transform-Load (ETL) programs process data into data warehouses (DWs). Rapidly growing data volumes demand systems that scale out. Recently, much attention has been given to MapReduce for parallel handling of massive data sets in cloud environments. Hive is the most widely used RDBMS-like system for DWs on MapReduce and provides scalable analytics. It is, however, challenging to do proper dimensional ETL processing with Hive; e.g., the concept of slowly changing dimensions (SCDs) is not supported (and due to lacking support for UPDATEs, SCDs are complex to handle manually). Also the powerful Pig platform for data processing on MapReduce does not support such dimensional ETL processing. To remedy this, we present the ETL framework CloudETL which uses Hadoop to parallelize ETL execution and to process data into Hive. The user defines the ETL process by means of high-level constructs and transformations and does not have to worry about technical MapReduce details. CloudETL supports different dimensional concepts such as star schemas and SCDs. We present how CloudETL works and uses different performance optimizations including a purpose-specific data placement policy to co-locate data. Further, we present a performance study and compare with other cloud-enabled systems. The results show that CloudETL scales very well and outperforms the dimensional ETL capabilities of Hive both with respect to performance and programmer productivity. For example, Hive uses 3.9 times as long to load an SCD in an experiment and needs 112 statements while CloudETL only needs 4.",
"In this paper we explore the possibility of taking a data warehouse with a traditional architecture and making it real-time-capable. Real-time in warehousing concerns data freshness, the capacity to integrate data constantly, or at a desired rate, without requiring the warehouse to be taken offline. We discuss the approach and show experimental results that prove the validity of the solution.",
"The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse."
]
} |
1409.1496 | 2087088589 | Socialization in online communities allows existing members to welcome and recruit newcomers, introduce them to community norms and practices, and sustain their early participation. However, socializing newcomers does not come for free: in large communities, socialization can result in a significant workload for mentors and is hard to scale. In this study we present results from an experiment that measured the effect of a lightweight socialization tool on the activity and retention of newly registered users attempting to edit Wikipedia for the first time. Wikipedia is struggling with the retention of newcomers, and our results indicate that a mechanism to elicit lightweight feedback and to provide early mentoring to newcomers improves their chances of becoming long-term contributors. | A large literature has studied incentives and drivers of participation in online communities, with a focus on early socialization. Early research on Wikipedia and open source software projects suggests that a mix of intrinsic motivation and extrinsic rewards drives participation @cite_25 @cite_32 @cite_8 . Top contributors may have strong intrinsic motives to participate @cite_37 . Non-monetary rewards such as acknowledgements @cite_24 @cite_30 @cite_19 @cite_29 , badges @cite_6 @cite_7 , and gamified feedback @cite_15 have been shown to increase engagement of users. Certain forms of reward can exert fine-grained control, even though instilling long-term behavior still proves to be difficult @cite_35 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_37",
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_15",
"@cite_25"
],
"mid": [
"",
"2148527862",
"2016974644",
"",
"",
"",
"",
"2108478329",
"2077802345",
"",
"2049080106",
"2108777707"
],
"abstract": [
"",
"An increasingly common feature of online communities and social media sites is a mechanism for rewarding user achievements based on a system of badges. Badges are given to users for particular contributions to a site, such as performing a certain number of actions of a given type. They have been employed in many domains, including news sites like the Huffington Post, educational sites like Khan Academy, and knowledge-creation sites like Wikipedia and Stack Overflow. At the most basic level, badges serve as a summary of a user's key accomplishments; however, experience with these sites also shows that users will put in non-trivial amounts of work to achieve particular badges, and as such, badges can act as powerful incentives. Thus far, however, the incentive structures created by badges have not been well understood, making it difficult to deploy badges with an eye toward the incentives they are likely to create. In this paper, we study how badges can influence and steer user behavior on a site---leading both to increased participation and to changes in the mix of activities a user pursues on the site. We introduce a formal model for reasoning about user behavior in the presence of badges, and in particular for analyzing the ways in which badges can steer users to change their behavior. To evaluate the main predictions of our model, we study the use of badges and their effects on the widely used Stack Overflow question-answering site, and find evidence that their badges steer behavior in ways closely consistent with the predictions of our model. Finally, we investigate the problem of how to optimally place badges in order to induce particular user behaviors. Several robust design principles emerge from our framework that could potentially aid in the design of incentives for a broad range of sites.",
"Open content web sites depend on users to produce information of value. Wikipedia is the largest and most well-known such site. Previous work has shown that a small fraction of editors --Wikipedians -- do most of the work and produce most of the value. Other work has offered conjectures about how Wikipedians differ from other editors and how Wikipedians change over time. We quantify and test these conjectures. Our key findings include: Wikipedians' edits last longer; Wikipedians invoke community norms more often to justify their edits; on many dimensions of activity, Wikipedians start intensely, tail off a little, then maintain a relatively high level of activity over the course of their career. Finally, we show that the amount of work done by Wikipedians and non-Wikipedians differs significantly from their very first day. Our results suggest a design opportunity: customizing the initial user experience to improve retention and channel new users' intense energy.",
"",
"",
"",
"",
"We test the effects of informal rewards in online peer production. Using a randomized, experimental design, we assigned editing awards or “barnstars” to a subset of the 1% most productive Wikipedia contributors. Comparison with the control group shows that receiving a barnstar increases productivity by 60% and makes contributors six times more likely to receive additional barnstars from other community members, revealing that informal rewards significantly impact individual effort.",
"Under-contribution is a problem for many online communities. Social psychology theories of social loafing and goal-setting can provide mid-level design principles to address this problem. We tested the design principles in two field experiments. In one, members of an online movie recommender community were reminded of the uniqueness of their contributions and the benefits that follow from them. In the second, they were given a range of individual or group goals for contribution. As predicted by theory, individuals contributed when they were reminded of their uniqueness and when they were given specific and challenging goals, but other predictions were not borne out. The paper ends with suggestions and challenges for mining social science theories as well as implications for design.",
"",
"\"Gamification\" is an informal umbrella term for the use of video game elements in non-gaming systems to improve user experience (UX) and user engagement. The recent introduction of 'gamified' applications to large audiences promises new additions to the existing rich and diverse research on the heuristics, design patterns and dynamics of games and the positive UX they provide. However, what is lacking for a next step forward is the integration of this precise diversity of research endeavors. Therefore, this workshop brings together practitioners and researchers to develop a shared understanding of existing approaches and findings around the gamification of information systems, and identify key synergies, opportunities, and questions for future research.",
"The success of the Linux operating system has demonstrated the viability of an alternative form of software development: open source software, which challenges traditional assumptions about software markets. Understanding what drives open source developers to participate in open source projects is crucial for assessing the impact of open source software. The article identifies two broad types of motivations that account for their participation in open source projects. The first category includes internal factors such as intrinsic motivation and altruism, and the second category focuses on external rewards such as expected future returns and personal needs. The article also reports the results of a survey administered to open source programmers."
]
} |
1409.1496 | 2087088589 | Socialization in online communities allows existing members to welcome and recruit newcomers, introduce them to community norms and practices, and sustain their early participation. However, socializing newcomers does not come for free: in large communities, socialization can result in a significant workload for mentors and is hard to scale. In this study we present results from an experiment that measured the effect of a lightweight socialization tool on the activity and retention of newly registered users attempting to edit Wikipedia for the first time. Wikipedia is struggling with the retention of newcomers, and our results indicate that a mechanism to elicit lightweight feedback and to provide early mentoring to newcomers improves their chances of becoming long-term contributors. | Besides individual incentives, previous studies also stressed the importance of the initial period of socialization in online groups. A successful early socialization experience is associated with, and sometimes even predicts, increased engagement in mailing lists @cite_4 , newsgroups @cite_13 , social networks @cite_23 , and Wikipedia @cite_27 @cite_2 , to cite a few. However, the causal structure between socialization, motivation, and participation is still not entirely clear. Strong motivational factors, perhaps in conjunction with individual-level skills @cite_0 , may be the cause for both a successful early socialization stage and a later long-term participation. To further establish a causal connection, controlled and field experiments on groups of limited size have been performed, with encouraging results: sharing in a digital information good is increased by social incentives @cite_21 , personal messages improve the retention of newcomers to Wikipedia who had their edits rejected @cite_3 , and top contributors in a Q&A community contributed more over the long term if they had received a personalized socialization experience @cite_11 . | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_11"
],
"mid": [
"2065096118",
"2157278425",
"2176969472",
"1572054464",
"2110800699",
"2151572696",
"",
"1968760319",
"2150523688"
],
"abstract": [
"Online communities in the form of message boards, listservs, and newsgroups continue to represent a considerable amount of the social activity on the Internet. Every year thousands of groups flourish while others decline into relative obscurity; likewise, millions of members join a new community every year, some of whom will come to manage or moderate the conversation while others simply sit by the sidelines and observe. These processes of group formation, growth, and dissolution are central in social science, and in an online venue they have ramifications for the design and development of community software. In this paper we explore a large corpus of thriving online communities. These groups vary widely in size, moderation and privacy, and cover an equally diverse set of subject matter. We present a broad range of descriptive statistics of these groups. Using metadata from groups, members, and individual messages, we identify users who post and are replied-to frequently by multiple group members; we classify these high-engagement users based on the longevity of their engagements. We show that users who will go on to become long-lived, highly-engaged users experience significantly better treatment than other users from the moment they join the group, well before there is an opportunity for them to develop a long-standing relationship with members of the group. We present a simple model explaining long-term heavy engagement as a combination of user-dependent and group-dependent factors. Using this model as an analytical tool, we show that properties of the user alone are sufficient to explain 95% of all memberships, but introducing a small amount of per-group information dramatically improves our ability to model users belonging to multiple groups.",
"The goal of this research is to understand how generalized exchange systems emerge when information, as the object of exchange, produces a collective good. When individuals contribute information for a collective benefit, it can create a group-generalized exchange system that involves a social dilemma. I argue that two properties of information, replication and high jointness of supply, are crucial for understanding the nature of the social dilemma in these exchange systems. Combined with low-cost contributions, these special features of information can allow social psychological selective incentives to significantly encourage cooperation. Experiments were conducted to examine the independent effects of two social psychological selective incentives (social approval and observational cooperation) on sharing behavior in a generalized information exchange system. The results indicate that observing high levels of cooperative behavior is beneficial in the short run, but ultimately it only leads to moderately higher levels of cooperation than when individuals cannot observe cooperative behavior. On the other hand, when individuals receive either high or low levels of social approval, it has a very positive, significant impact on cooperative behavior. This research has implications for real-world generalized information exchange systems such as those found on the Internet. In addition, the theory and results in this study can also be extended to public goods that share the features of low-costs contributions, replication, and high jointness of supply.",
"Unlike traditional firms, open collaborative systems rely on volunteers to operate, and many communities struggle to maintain enough contributors to ensure the quality and quantity of content. However, Wikipedia has historically faced the exact opposite problem: too much participation, particularly from users who, knowingly or not, do not share the same norms as veteran Wikipedians. During its period of exponential growth, the Wikipedian community developed specialized socio-technical defense mechanisms to protect itself from the negatives of massive participation: spam, vandalism, falsehoods, and other damage. Yet recently, Wikipedia has faced a number of high-profile issues with recruiting and retaining new contributors. In this paper, we first illustrate and describe the various defense mechanisms at work in Wikipedia, which we hypothesize are inhibiting newcomer retention. Next, we present results from an experiment aimed at increasing both the quantity and quality of editors by altering various elements of these defense mechanisms, specifically pre-scripted warnings and notifications that are sent to new editors upon reverting or rejecting contributions. Using logistic regressions to model new user activity, we show which tactics work best for different populations of users based on their motivations when joining Wikipedia. In particular, we found that personalized messages in which Wikipedians identified themselves in active voice and took direct responsibility for rejecting an editor’s contributions were much more successful across a variety of outcome metrics than the current messages, which typically use an institutional and passive voice.",
"This chapter focuses on individual differences in human–computer interaction. Differences among users have not been a major concern of commercial computer interface designers. Even behavioral scientists usually select narrowly defined user samples to minimize experimental error when comparing the mean performance of different systems. Those behavioral studies that have analyzed differences among users often have produced descriptive results rather than prescriptions for interface design. In the future, interface designers should focus a great deal of attention on the differences among potential users for three reasons. First, individual differences usually play a major role in determining whether humans can use a computer to perform a job effectively. Second, personnel selection testing, the standard solution to problems of job-related individual differences, cannot be applied to many settings where humans interact with computers. The third reason for designers to be concerned with individual differences is that the technology has reached the point where it is possible to accommodate more user differences.",
"Socialization of newcomers is critical for both conventional and online groups. It helps groups perform effectively and the newcomers develop commitment. However, little empirical research has investigated the impact of specific socialization tactics on newcomers' commitment to online groups. We examined WikiProjects, subgroups in Wikipedia organized around working on common topics or tasks. In study 1, we identified the seven socialization tactics used most frequently: invitations to join, welcome messages, requests to work on project-related tasks, offers of assistance, positive feedback on a new member's work, constructive criticism, and personal-related comments. In study 2, we examined their impact on newcomers' commitment to the project. Whereas most newcomers contributed fewer edits over time, the declines were slowed or reversed for those socialized with welcome messages, assistance, and constructive criticism. In contrast, invitations led to steeper declines in edits. These results suggest that different socialization tactics play different roles in socializing new members in online groups compared to offline ones.",
"Social networking sites (SNS) are only as good as the content their users share. Therefore, designers of SNS seek to improve the overall user experience by encouraging members to contribute more content. However, user motivations for contribution in SNS are not well understood. This is particularly true for newcomers, who may not recognize the value of contribution. Using server log data from approximately 140,000 newcomers in Facebook, we predict long-term sharing based on the experiences the newcomers have in their first two weeks. We test four mechanisms: social learning, singling out, feedback, and distribution. In particular, we find support for social learning: newcomers who see their friends contributing go on to share more content themselves. For newcomers who are initially inclined to contribute, receiving feedback and having a wide audience are also predictors of increased sharing. On the other hand, singling out appears to affect only those newcomers who are not initially inclined to share. The paper concludes with design implications for motivating newcomer sharing in online communities.",
"",
"Turnover in online communities is very high, with most people who initially post a message to an online community never contributing again. In this paper, we test whether the responses that newcomers receive to their first posts influence the extent to which they continue to participate. The data come from initial posts made by 2,777 newcomers to six public newsgroups. We coded the content and valence of the initial post and its first response, if it received one, to see if these factors influenced newcomers’ likelihood of posting again. Approximately 61% of newcomers received a reply to their initial post, and those who got a reply were 12% more likely to post to the community again; their probability of posting again increased from 44% to 56%. They were more likely to receive a response if they asked a question or wrote a longer post. Surprisingly, the quality of the response they received—its emotional tone and whether it answered a newcomer’s question—did not influence the likelihood of the newcomer’s posting again.",
"Although many off-line organizations give their employees training, mentorship, a cohort and other socialization experiences that improve their retention and productivity, online production communities rarely do this. This paper describes the planning, execution and evaluation of a socialization regime for an online technical support community. In a two-phase project, we first automatically identified from participants' early behavior, those with high potential to become core members. We then designed, delivered and experimentally evaluated socialization experiences intended to build commitment and competence among these potential core members. We were able to identify potential core members with high accuracy from only two weeks of behavior. A year later, those classified as potential core members participated in the community ten times more actively than those not identified. In an evaluation experiment, some potential core members were randomly assigned to receive socialization experiences, while others were not. A year later, those who had participated in the socialization regime contributed more answers in the community compared to those in the control condition. The socialization experiences, however, undercut their sense of connection to the community and the quality of their contributions. We discuss what was effective and what could be improved in designing socialization experiences for online groups."
]
} |
1409.1715 | 2949066526 | The balance of exploration versus exploitation (EvE) is a key issue in evolutionary computation. In this paper we will investigate how an adaptive controller aimed at performing Operator Selection can be used to dynamically manage the EvE balance required by the search, showing that the search strategies determined by this control paradigm lead to an improvement of the solution quality found by the evolutionary algorithm. | Parameter setting is an important challenge for building efficient and robust EAs. As mentioned in the introduction, using an EA requires us to define its basic structural components and to set the values of its behavioral parameters. The components may be considered as structural parameters of the algorithm. Therefore, parameter setting in EA addresses two general classes of parameters: and (alternatively, the terms and parameters are used). Concerning structural parameters, automated tuning techniques can be used as tools for selecting the initial configuration of the algorithm. The configuration and the discovery of new heuristics from building blocks is also addressed by the concept of hyperheuristics . We may also mention self-adaptive operators, which mainly consist in directly encoding the operator's parameters in the individuals. This approach also allows the algorithm to dynamically manage the EvE balance and has been successfully applied for solving combinatorial and continuous optimization problems @cite_1 @cite_2 @cite_0 @cite_3 . Note that an adaptive management of the operators, which dynamically adds and discards operators during the search, has been proposed by . | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2137340504",
"1977474365",
"2123497782",
"2152028815"
],
"abstract": [
"Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.",
"Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-armed Bandit (MAB) paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitness improvement brought by the corresponding operator to the individual it is applied to. However, the AOS problem is dynamic, whereas standard MAB algorithms are known to optimally solve the exploitation versus exploration trade-off in static settings. An original dynamic variant of the standard MAB Upper Confidence Bound algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. In order to perform sound comparisons between AOS algorithms, artificial scenarios have been proposed in the literature. They are extended here toward smoother transitions between different reward settings. The resulting original testbed also includes a real evolutionary algorithm that is applied to the well-known Royal Road problem. It is used here to perform a thorough analysis of the behavior of AOS algorithms, to assess their sensitivity with respect to their own hyper-parameters, and to propose a sound comparison of their performances.",
"Recently, the hybridization between evolutionary algorithms and other metaheuristics has shown very good performance in many kinds of multiobjective optimization problems (MOPs), and thus has attracted considerable attention from both academic and industrial communities. In this paper, we propose a novel hybrid multiobjective evolutionary algorithm (HMOEA) for real-valued MOPs by incorporating the concepts of personal best and global best in particle swarm optimization and multiple crossover operators to update the population. One major feature of the HMOEA is that each solution in the population maintains a nondominated archive of personal best and the update of each solution is in fact the exploration of the region between a selected personal best and a selected global best from the external archive. Before the exploration, a self-adaptive selection mechanism is developed to determine an appropriate crossover operator from several candidates so as to improve the robustness of the HMOEA for different instances of MOPs. Besides the selection of global best from the external archive, the quality of the external archive is also considered in the HMOEA through a propagating mechanism. Computational study on the biobjective and three-objective benchmark problems shows that the HMOEA is competitive or superior to previous multiobjective algorithms in the literature.",
"This paper studies a challenging problem of dynamic scheduling in steelmaking-continuous casting (SCC) production. The problem is to re-optimize the assignment, sequencing, and timetable of a set of existing and new jobs among various production stages for the new environment when unforeseen changes occur in the production system. We model the problem considering the constraints of the practical technological requirements and the dynamic nature. To solve the SCC scheduling problem, we propose an improved differential evolution (DE) algorithm with a real-coded matrix representation for each individual of the population, a two-step method for generating the initial population, and a new mutation strategy. To further improve the efficiency and effectiveness of the solution process for dynamic use, an incremental mechanism is proposed to generate a new initial population for the DE whenever a real-time event arises, based on the final population in the last DE solution process. Computational experiments on randomly generated instances and the practical production data show that the proposed improved algorithm can obtain better solutions compared to other algorithms."
]
} |
1409.1320 | 2115060927 | In this work, we propose the marginal structured SVM (MSSVM) for structured prediction with hidden variables. MSSVM properly accounts for the uncertainty of hidden variables, and can significantly outperform the previously proposed latent structured SVM (LSSVM; Yu & Joachims (2009)) and other state-of-the-art methods, especially when that uncertainty is large. Our method also results in a smoother objective function, making gradient-based optimization of MSSVMs converge significantly faster than for LSSVMs. We also show that our method consistently outperforms hidden conditional random fields (HCRFs; (2007)) on both simulated and real-world datasets. Furthermore, we propose a unified framework that includes both our and several other existing methods as special cases, and provides insights into the comparison of different models in practice. | HCRFs naturally extend CRFs to include hidden variables, and have found numerous applications in areas such as object recognition @cite_8 and gesture recognition . HCRFs have the same pros and cons as general CRFs; in particular, they perform well when the model assumptions hold and when there are enough training instances, but may otherwise perform badly. Alternatively, the LSSVM is an extension of the structured SVM that handles hidden variables, with wide application in areas like object detection @cite_9 , human action recognition @cite_10 , document-level sentiment classification @cite_11 and link prediction @cite_5 . However, LSSVM relies on a joint MAP procedure, and may not perform well when non-trivial uncertainty exists in the hidden variables. Recently, proposed an @math -extension framework for discriminative graphical models with hidden variables that includes both HCRFs and LSSVM as special cases. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"2141303268",
"2141357020",
"588318799",
"1747312753",
"2120340025"
],
"abstract": [
"This paper presents a new algorithm for the automatic recognition of object classes from images (categorization). Compact and yet discriminative appearance-based object class models are automatically learned from a set of training images. The method is simple and extremely fast, making it suitable for many applications such as semantic image retrieval, Web search, and interactive image editing. It classifies a region according to the proportions of different visual words (clusters in feature space). The specific visual words and the typical proportions in each object are learned from a segmented training set. The main contribution of this paper is twofold: i) an optimally compact visual dictionary is learned by pair-wise merging of visual words from an initially large dictionary. The final visual words are described by GMMs. ii) A novel statistical measure of discrimination is proposed which is optimized by each merge operation. High classification accuracy is demonstrated for nine object classes on photographs of real objects viewed under general lighting conditions, poses and viewpoints. The set of test images used for validation comprise: i) photographs acquired by us, ii) images from the Web and iii) images from the recently released Pascal dataset. The proposed algorithm performs well on both texture-rich objects (e.g. grass, sky, trees) and structure-rich ones (e.g. cars, bikes, planes)",
"We present a latent hierarchical structural learning method for object detection. An object is represented by a mixture of hierarchical tree models where the nodes represent object parts. The nodes can move spatially to allow both local and global shape deformations. The models can be trained discriminatively using latent structural SVM learning, where the latent variables are the node positions and the mixture component. But current learning methods are slow, due to the large number of parameters and latent variables, and have been restricted to hierarchies with two layers. In this paper we describe an incremental concave-convex procedure (iCCCP) which allows us to learn both two and three layer models efficiently. We show that iCCCP leads to a simple training algorithm which avoids complex multi-stage layer-wise training, careful part selection, and achieves good performance without requiring elaborate initialization. We perform object detection using our learnt models and obtain performance comparable with state-of-the-art methods when evaluated on challenging public PASCAL datasets. We demonstrate the advantages of three layer hierarchies – outperforming 's two layer models on all 20 classes.",
"Predicting the existence of links between pairwise objects in networks is a key problem in the study of social networks. However, relationships among objects are often more complex than simple pairwise relations. By restricting attention to dyads, it is possible that information valuable for many learning tasks can be lost. The hypernetwork relaxes the assumption that only two nodes can participate in a link, permitting instead an arbitrary number of nodes to participate in so-called hyperlinks or hyperedges, which is a more natural representation for complex, multi-party relations. However, the hyperlink prediction problem has yet to be studied. In this paper, we propose HPLSF (Hyperlink Prediction using Latent Social Features), a hyperlink prediction algorithm for hypernetworks. By exploiting the homophily property of social networks, HPLSF explores social features for hyperlink prediction. To handle the problem that social features are not always observable, a latent social feature learning scheme is developed. To cope with the arbitrary cardinality hyperlink issue in hypernetworks, we design a feature-embedding scheme to map the a priori arbitrarily-sized feature set associated with each hyperlink into a uniformly-sized auxiliary space. To address the fact that observed features and latent features may be not independent, we generalize a structural SVM to learn using both observed features and latent features. In experiments, we evaluate the proposed HPLSF framework on three large-scale hypernetwork datasets. Our results on the three diverse datasets demonstrate the effectiveness of the HPLSF algorithm. Although developed in the context of social networks, HPLSF is a general methodology and applies to arbitrary hypernetworks.",
"Many NLP tasks make predictions that are inherently coupled to syntactic relations, but for many languages the resources required to provide such syntactic annotations are unavailable. For others it is unclear exactly how much of the syntactic annotations can be effectively leveraged with current models, and what structures in the syntactic trees are most relevant to the current task. We propose a novel method which avoids the need for any syntactically annotated data when predicting a related NLP task. Our method couples latent syntactic representations, constrained to form valid dependency graphs or constituency parses, with the prediction task via specialized factors in a Markov random field. At both training and test time we marginalize over this hidden structure, learning the optimal latent representations for the problem. Results show that this approach provides significant gains over a syntactically un-informed baseline, outperforming models that observe syntax on an English relation extraction task, and performing comparably to them in semantic role labeling.",
"The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models."
]
} |
1409.0908 | 342845200 | In this paper, we describe a simple strategy for mitigating variability in temporal data series by shifting focus onto long-term, frequency domain features that are less susceptible to variability. We apply this method to the human action recognition task and demonstrate how working in the frequency domain can yield good recognition features for commonly used optical flow and articulated pose features, which are highly sensitive to small differences in motion, viewpoint, dynamic backgrounds, occlusion and other sources of variability. We show how these frequency-based features can be used in combination with a simple forest classifier to achieve good and robust results on the popular KTH Actions dataset. | Innovations in pose estimation technology have also inspired representations centered on features extracted from articulated body parts. In this area, some researchers like to work with articulated pose in 2D, as in @cite_12 @cite_1 , while others prefer to avoid the challenges of 2-dimensional image data by directly recording joint coordinates in 3D using increasingly accessible RGBD cameras or other commercial motion capture systems, as in @cite_2 @cite_4 @cite_7 @cite_15 @cite_16 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_12"
],
"mid": [
"2144380653",
"2143267104",
"2156135524",
"2137806997",
"2017695267",
"2145546283",
"2117973875"
],
"abstract": [
"This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy were achieved by sampling only about 1% of the 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation.",
"Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.",
"3D human pose recovery is considered as a fundamental step in view-invariant human action recognition. However, inferring 3D poses from a single view usually is slow due to the large number of parameters that need to be estimated and recovered poses are often ambiguous due to the perspective projection. We present an approach that does not explicitly infer 3D pose at each frame. Instead, from existing action models we search for a series of actions that best match the input sequence. In our approach, each action is modeled as a series of synthetic 2D human poses rendered from a wide range of viewpoints. The constraints on transition of the synthetic poses are represented by a graph model called Action Net. Given the input, silhouette matching between the input frames and the key poses is performed first using an enhanced Pyramid Match Kernel algorithm. The best matched sequence of actions is then tracked using the Viterbi algorithm. We demonstrate this approach on a challenging video set consisting of 15 complex action classes.",
"A new method for representing and recognizing human body movements is presented. The basic idea is to identify sets of constraints that are diagnostic of a movement: expressed using body-centered coordinates such as joint angles and in force only during a particular movement. Assuming the availability of Cartesian tracking data, we develop techniques for a representation of movements defined by space curves in subspaces of a \"phase space.\" The phase space has axes of joint angles and torso location and attitude, and the axes of the subspaces are subsets of the axes of the phase space. Using this representation we develop a system for learning new movements from ground truth data by searching for constraints. We then use the learned representation for recognizing movements in unsegmented data. We train and test the system on nine fundamental steps from classical ballet performed by two dancers; the system accurately recognizes the movements in the unsegmented stream of motion.",
"Being able to detect and recognize human activities is essential for several applications, including personal assistive robotics. In this paper, we perform detection and recognition of unstructured human activity in unstructured environments. We use a RGBD sensor (Microsoft Kinect) as the input sensor, and compute a set of features based on human pose and motion, as well as based on image and point-cloud information. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM), which considers a person's activity as composed of a set of sub-activities. We infer the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve good performance even when the person was not seen before in the training set.",
"In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using 's method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms [25] on most of the cases.",
"One of the fundamental challenges of recognizing actions is accounting for the variability that arises when arbitrary cameras capture humans performing actions. In this paper, we explicitly identify three important sources of variability: (1) viewpoint, (2) execution rate, and (3) anthropometry of actors, and propose a model of human actions that allows us to investigate all three. Our hypothesis is that the variability associated with the execution of an action can be closely approximated by a linear combination of action bases in joint spatio-temporal space. We demonstrate that such a model bounds the rank of a matrix of image measurements and that this bound can be used to achieve recognition of actions based only on imaged data. A test employing principal angles between subspaces that is robust to statistical fluctuations in measurement data is presented to find the membership of an instance of an action. The algorithm is applied to recognize several actions, and promising results have been obtained."
]
} |
1409.1057 | 2952405762 | Consumer Debt has risen to be an important problem of modern societies, generating a lot of research in order to understand the nature of consumer indebtness, which so far its modelling has been carried out by statistical models. In this work we show that Computational Intelligence can offer a more holistic approach that is more suitable for the complex relationships an indebtness dataset has and Linear Regression cannot uncover. In particular, as our results show, Neural Networks achieve the best performance in modelling consumer indebtness, especially when they manage to incorporate the significant and experimentally verified results of the Data Mining process in the model, exploiting the flexibility Neural Networks offer in designing their topology. This novel method forms an elaborate framework to model Consumer indebtness that can be extended to any other real world application. | Statistical models and linear regression are primarily used for the level of debt prediction in the literature. A significant amount of the work is summarised in @cite_3 where they also provide a model for separating debtors from non-debtors. However, their suggested logit model suffers from a low @math (33%). On the other hand, Random Forests, a popular machine learning algorithm for Data Mining, has been shown to be able to handle non-linearities in the data @cite_4 . They have received a lot of attention in biostatistics and other fields @cite_4 due to their ability to handle a large number of variables with a relatively small number of observations and because they provide a way to identify variable importance @cite_4 @cite_6 . They manage to demonstrate exceptional performance with only one parameter and their regression has been proven not to overfit the data @cite_6 . An interesting application of Random Forests is in @cite_13 where a model measuring the impact of the reviews of products in sales and perceived usefulness was constructed. | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_6",
"@cite_3"
],
"mid": [
"2098173428",
"2101664201",
"1599871777",
""
],
"abstract": [
"With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes it harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. 
This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"Relative importance of regressor variables is an old topic that still awaits a satisfactory solution. When interest is in attributing importance in linear regression, averaging over orderings methods for decomposing R2 are among the state-of-the-art methods, although the mechanism behind their behavior is not (yet) completely understood. Random forests—a machine-learning tool for classification and regression proposed a few years ago—have an inherent procedure of producing variable importances. This article compares the two approaches (linear model on the one hand and two versions of random forests on the other hand) and finds both striking similarities and differences, some of which can be explained whereas others remain a challenge. The investigation improves understanding of the nature of variable importance in random forests. This article has supplementary material online.",
"Breiman (2001a,b) has recently developed an ensemble classification and regression approach that displayed outstanding performance with regard to prediction error on a suite of benchmark datasets. As the base constituents of the ensemble are tree-structured predictors, and since each of these is constructed using an injection of randomness, the method is called ‘random forests’. That the exceptional performance is attained with seemingly only a single tuning parameter, to which sensitivity is minimal, makes the methodology all the more remarkable. The individual trees comprising the forest are all grown to maximal depth. While this helps with regard to bias, there is the familiar tradeoff with variance. However, these variability concerns were potentially obscured because of an interesting feature of those benchmarking datasets extracted from the UCI machine learning repository for testing: all these datasets are hard to overfit using tree-structured methods. This raises issues about the scope of the repository. With this as motivation, and coupled with experience from boosting methods, we revisit the formulation of random forests and investigate prediction performance on real-world and simulated datasets for which maximally sized trees do overfit. These explorations reveal that gains can be realized by additional tuning to regulate tree size via limiting the number of splits and/or the size of nodes for which splitting is allowed. Nonetheless, even in these settings, good performance for random forests can be attained by using larger (than default) primary tuning parameter values.",
""
]
} |
1409.0932 | 2201701396 | The study of the optimality of low-complexity greedy scheduling techniques in wireless communications networks is a very complex problem. The Local Pooling (LoP) factor provides a single-parameter means of expressing the achievable capacity region (and optimality) of one such scheme, greedy maximal scheduling (GMS). The exact LoP factor for an arbitrary network graph is generally difficult to obtain, but may be evaluated or bounded based on the network graph’s particular structure. In this paper, we provide rigorous characterizations of the LoP factor in large networks modeled as Erdős–Rényi (ER) and random geometric (RG) graphs under the primary interference model. We employ threshold functions to establish critical values for either the edge probability or communication radius to yield useful bounds on the range and expectation of the LoP factor as the network grows large. For sufficiently dense random graphs, we find that the LoP factor is between 1/2 and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP factor is 1) with high probability. We then place LoP within a larger context of commonly studied random graph properties centered around connectedness. We observe that edge densities permitting connectivity generally admit cycle subgraphs that form the basis for the LoP factor upper bound of 2/3. We conclude with simulations to explore the regime of small networks, which suggest the probability that an ER or RG graph satisfies LoP and is connected decays quickly in network size. | Sufficient conditions for the optimality of Greedy Maximal Scheduling (GMS) employed on a network graph (G(V,E) ) were produced by Dimakis and Walrand @cite_12 and called Local Pooling (LoP). The GMS algorithm (called Longest Queue First, LQF @cite_12 ) consists of an iterated selection of links in order of decreasing queue lengths, subject to pair-wise interference constraints. 
Computing whether or not an arbitrary graph (G ) satisfies LoP consists of solving an exponential number of linear programs (LPs), one for each subset of links in (G ). Trees are an example of one class of graphs proved to satisfy LoP. While LoP is necessary and sufficient under deterministic traffic processes, a full characterization of the graphs for which GMS is optimal under random arrivals is unknown. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2009508617"
],
"abstract": [
"We consider the stability of the longest-queue-first scheduling policy (LQF), a natural and low-complexity scheduling policy, for a generalized switch model. Unlike that of common scheduling policies, the stability of LQF depends on the variance of the arrival processes in addition to their average intensities. We identify new sufficient conditions for LQF to be throughput optimal for independent, identically distributed arrival processes. Deterministic fluid analogs, proved to be powerful in the analysis of stability in queueing networks, do not adequately characterize the stability of LQF. We combine properties of diffusion-scaled sample path functionals and local fluid limits into a sharper characterization of stability."
]
} |
1409.0932 | 2201701396 | The study of the optimality of low-complexity greedy scheduling techniques in wireless communications networks is a very complex problem. The Local Pooling (LoP) factor provides a single-parameter means of expressing the achievable capacity region (and optimality) of one such scheme, greedy maximal scheduling (GMS). The exact LoP factor for an arbitrary network graph is generally difficult to obtain, but may be evaluated or bounded based on the network graph’s particular structure. In this paper, we provide rigorous characterizations of the LoP factor in large networks modeled as Erdős–Rényi (ER) and random geometric (RG) graphs under the primary interference model. We employ threshold functions to establish critical values for either the edge probability or communication radius to yield useful bounds on the range and expectation of the LoP factor as the network grows large. For sufficiently dense random graphs, we find that the LoP factor is between 1/2 and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP factor is 1) with high probability. We then place LoP within a larger context of commonly studied random graph properties centered around connectedness. We observe that edge densities permitting connectivity generally admit cycle subgraphs that form the basis for the LoP factor upper bound of 2/3. We conclude with simulations to explore the regime of small networks, which suggest the probability that an ER or RG graph satisfies LoP and is connected decays quickly in network size. | The work by Birand @cite_8 produced a simpler characterization of all LoP-satisfying graphs under primary interference using forbidden subgraphs on the graph topology. Even more remarkably, they provide an ( O(n) )-time algorithm for computing whether or not a graph (G ) satisfies LoP. Concerning general interference models, the class of interference graphs are shown to satisfy LoP conditions. 
The definition of co-strongly perfect graphs is equated with the LoP conditions of Dimakis and Walrand @cite_12 . Additionally, both Joo @cite_9 and Zussman @cite_6 prove that GMS is optimal on tree graphs for (k )-hop interference models. | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_12",
"@cite_8"
],
"mid": [
"1995120351",
"2171294812",
"2009508617",
"2039917366"
],
"abstract": [
"In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS has been well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We show how these results can be applied to tree networks to prove that GMS achieves the full capacity region in tree networks under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.",
"Efficient operation of wireless networks requires distributed routing and scheduling algorithms that take into account interference constraints. Recently, a few algorithms for networks with primary- or secondary-interference constraints have been developed. Due to their distributed operation, these algorithms can achieve only a guaranteed fraction of the maximum possible throughput. It was also recently shown that if a set of conditions (known as Local Pooling) is satisfied, simple distributed scheduling algorithms achieve 100% throughput. However, previous work regarding Local Pooling focused mostly on obtaining abstract conditions and on networks with single-hop interference or single-hop traffic. In this paper, we identify several graph classes that satisfy the Local Pooling conditions, thereby enabling the use of such graphs in network design algorithms. Then, we study the multihop implications of Local Pooling. We show that in many cases, as the interference degree increases, the Local Pooling conditions are more likely to hold. Consequently, although increased interference reduces the maximum achievable throughput of the network, it tends to enable distributed algorithms to achieve 100% of this throughput. Regarding multihop traffic, we show that if the network satisfies only the single-hop Local Pooling conditions, distributed joint routing and scheduling algorithms are not guaranteed to achieve maximum throughput. Therefore, we present new conditions for Multihop Local Pooling, under which distributed algorithms achieve 100% throughput. Finally, we identify network topologies in which the conditions hold and discuss the algorithmic implications of the results.",
"We consider the stability of the longest-queue-first scheduling policy (LQF), a natural and low-complexity scheduling policy, for a generalized switch model. Unlike that of common scheduling policies, the stability of LQF depends on the variance of the arrival processes in addition to their average intensities. We identify new sufficient conditions for LQF to be throughput optimal for independent, identically distributed arrival processes. Deterministic fluid analogs, proved to be powerful in the analysis of stability in queueing networks, do not adequately characterize the stability of LQF. We combine properties of diffusion-scaled sample path functionals and local fluid limits into a sharper characterization of stability.",
"Efficient operation of wireless networks and switches requires using simple (and in some cases distributed) scheduling algorithms. In general, simple greedy algorithms (known as Greedy Maximal Scheduling, or GMS) are guaranteed to achieve only a fraction of the maximum possible throughput (e.g., 50% throughput in switches). However, it was recently shown that in networks in which the Local Pooling conditions are satisfied, GMS achieves 100% throughput. Moreover, in networks in which the σ-Local Pooling conditions hold, GMS achieves σ throughput. In this paper, we focus on identifying the specific network topologies that satisfy these conditions. In particular, we provide the first characterization of all the network graphs in which Local Pooling holds under primary interference constraints (in these networks, GMS achieves 100% throughput). This leads to a linear-time algorithm for identifying Local-Pooling-satisfying graphs. Moreover, by using similar graph-theoretical methods, we show that in all bipartite graphs (i.e., input-queued switches) of size up to 7 × n, GMS is guaranteed to achieve 66% throughput, thereby improving upon the previously known 50% lower bound. Finally, we study the performance of GMS in interference graphs and show that in certain specific topologies, its performance could be very bad. Overall, the paper demonstrates that using graph-theoretical techniques can significantly contribute to our understanding of greedy scheduling algorithms."
]
} |
1409.0932 | 2201701396 | The study of the optimality of low-complexity greedy scheduling techniques in wireless communications networks is a very complex problem. The Local Pooling (LoP) factor provides a single-parameter means of expressing the achievable capacity region (and optimality) of one such scheme, greedy maximal scheduling (GMS). The exact LoP factor for an arbitrary network graph is generally difficult to obtain, but may be evaluated or bounded based on the network graph’s particular structure. In this paper, we provide rigorous characterizations of the LoP factor in large networks modeled as Erdős–Rényi (ER) and random geometric (RG) graphs under the primary interference model. We employ threshold functions to establish critical values for either the edge probability or communication radius to yield useful bounds on the range and expectation of the LoP factor as the network grows large. For sufficiently dense random graphs, we find that the LoP factor is between 1/2 and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP factor is 1) with high probability. We then place LoP within a larger context of commonly studied random graph properties centered around connectedness. We observe that edge densities permitting connectivity generally admit cycle subgraphs that form the basis for the LoP factor upper bound of 2/3. We conclude with simulations to explore the regime of small networks, which suggest the probability that an ER or RG graph satisfies LoP and is connected decays quickly in network size. | For graphs that do not satisfy local pooling, Joo @cite_30 @cite_5 provide a generalization of LoP, called ( σ )-LoP. The LoP factor of a graph, ( σ*(G) ), is formulated from the original LPs of Dimakis and Walrand @cite_12 . Joo @cite_9 show that the LoP factor is in fact GMS's largest achievable uniform scaling ( σ = σ* ) of the network's stability region. 
Li @cite_18 generalize LoP further to a per-link ( σ )-LoP, which includes a per-link LoP factor ( σ_l ) that scales each dimension of the stability region independently and recovers a superset of the provable GMS stability region under the single-parameter LoP factor. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_9",
"@cite_5",
"@cite_12"
],
"mid": [
"2106717724",
"2063925036",
"1995120351",
"",
"2009508617"
],
"abstract": [
"Greedy maximal matching (GMM) is an important scheduling scheme for multi-hop wireless networks. It is computationally simple, and has often been numerically shown to achieve throughput that is close to optimal. However, to date the performance limits of GMM have not been well understood. In particular, although a lower bound on its performance has been well known, this bound has been empirically found to be quite loose. In this paper, we focus on the well-established node-exclusive interference model and provide new analytical results that characterize the performance of GMM through a topological notion called the local-pooling factor. We show that for a given network graph with single-hop traffic, the efficiency ratio of GMM (i.e., the worst-case ratio of the throughput of GMM to that of the optimal) is equal to its local-pooling factor. Further, we estimate the local-pooling factor for arbitrary network graphs under the node-exclusive interference model and show that the efficiency ratio of GMM is no smaller than d*/(2d* - 1) in a network topology of maximum node-degree d*. Using these results, we identify specific network topologies for which the efficiency ratio of GMM is strictly less than 1. We also extend the results to the more general scenario with multi-hop traffic, and show that GMM can achieve similar efficiency ratios when a flow-regulator is used at each hop.",
"One of the major challenges in wireless networking is how to optimize the link scheduling decisions under interference constraints. Recently, a few algorithms have been introduced to address the problem. However, solving the problem to optimality for general wireless interference models is known to be NP-hard. The research community is currently focusing on finding simpler suboptimal scheduling algorithms and on characterizing the algorithm performance. In this paper, we address the performance of a specific scheduling policy called Longest Queue First (LQF), which has gained significant recognition lately due to its simplicity and high efficiency in empirical studies. There has been a sequence of studies characterizing the guaranteed performance of the LQF schedule, culminating at the construction of the σ-local pooling concept. In this paper, we refine the notion of σ-local pooling and use the refinement to capture a larger region of guaranteed performance.",
"In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS has been well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We show how these results can be applied to tree networks to prove that GMS achieves the full capacity region in tree networks under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.",
"",
"We consider the stability of the longest-queue-first scheduling policy (LQF), a natural and low-complexity scheduling policy, for a generalized switch model. Unlike that of common scheduling policies, the stability of LQF depends on the variance of the arrival processes in addition to their average intensities. We identify new sufficient conditions for LQF to be throughput optimal for independent, identically distributed arrival processes. Deterministic fluid analogs, proved to be powerful in the analysis of stability in queueing networks, do not adequately characterize the stability of LQF. We combine properties of diffusion-scaled sample path functionals and local fluid limits into a sharper characterization of stability."
]
} |
1409.0932 | 2201701396 | The study of the optimality of low-complexity greedy scheduling techniques in wireless communications networks is a very complex problem. The Local Pooling (LoP) factor provides a single-parameter means of expressing the achievable capacity region (and optimality) of one such scheme, greedy maximal scheduling (GMS). The exact LoP factor for an arbitrary network graph is generally difficult to obtain, but may be evaluated or bounded based on the network graph’s particular structure. In this paper, we provide rigorous characterizations of the LoP factor in large networks modeled as Erdős–Rényi (ER) and random geometric (RG) graphs under the primary interference model. We employ threshold functions to establish critical values for either the edge probability or communication radius to yield useful bounds on the range and expectation of the LoP factor as the network grows large. For sufficiently dense random graphs, we find that the LoP factor is between 1/2 and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP factor is 1) with high probability. We then place LoP within a larger context of commonly studied random graph properties centered around connectedness. We observe that edge densities permitting connectivity generally admit cycle subgraphs that form the basis for the LoP factor upper bound of 2/3. We conclude with simulations to explore the regime of small networks, which suggest the probability that an ER or RG graph satisfies LoP and is connected decays quickly in network size. | As mentioned, checking LoP conditions can be computationally prohibitive, particularly under arbitrary interference models. Therefore, algorithms to easily estimate or bound ( σ* ) and ( σ_l ) are of interest and immediate use in studying GMS stability. 
Joo @cite_9 provide a lower bound on ( ) by the inverse of the largest interference degree of a nested sequence of increasing subsets of links in (G), and provide an algorithm for computing the bound. Li @cite_18 refine this algorithm to provide individual per-link bounds on ( ). Under the primary interference model, Joo @cite_30 show that ( d*/(2d* − 1) ) is a lower bound for ( ). Leconte @cite_31, Li @cite_18, and Birand @cite_8 note that a lower bound for ( ) is derived from the ratio of the min- to max-cardinality maximal schedules. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_8",
"@cite_9",
"@cite_31"
],
"mid": [
"2106717724",
"2063925036",
"2039917366",
"1995120351",
"2017161886"
],
"abstract": [
"Greedy maximal matching (GMM) is an important scheduling scheme for multi-hop wireless networks. It is computationally simple, and has often been numerically shown to achieve throughput that is close to optimal. However, to date the performance limits of GMM have not been well understood. In particular, although a lower bound on its performance has been well known, this bound has been empirically found to be quite loose. In this paper, we focus on the well-established node-exclusive interference model and provide new analytical results that characterize the performance of GMM through a topological notion called the local-pooling factor. We show that for a given network graph with single-hop traffic, the efficiency ratio of GMM (i.e., the worst-case ratio of the throughput of GMM to that of the optimal) is equal to its local-pooling factor. Further, we estimate the local-pooling factor for arbitrary network graphs under the node-exclusive interference model and show that the efficiency ratio of GMM is no smaller than d*/(2d* − 1) in a network topology of maximum node-degree d*. Using these results, we identify specific network topologies for which the efficiency ratio of GMM is strictly less than 1. We also extend the results to the more general scenario with multi-hop traffic, and show that GMM can achieve similar efficiency ratios when a flow-regulator is used at each hop.",
"One of the major challenges in wireless networking is how to optimize the link scheduling decisions under interference constraints. Recently, a few algorithms have been introduced to address the problem. However, solving the problem to optimality for general wireless interference models is known to be NP-hard. The research community is currently focusing on finding simpler suboptimal scheduling algorithms and on characterizing the algorithm performance. In this paper, we address the performance of a specific scheduling policy called Longest Queue First (LQF), which has gained significant recognition lately due to its simplicity and high efficiency in empirical studies. There has been a sequence of studies characterizing the guaranteed performance of the LQF schedule, culminating at the construction of the σ-local pooling concept by In this paper, we refine the notion of σ-local pooling and use the refinement to capture a larger region of guaranteed performance.",
"Efficient operation of wireless networks and switches requires using simple (and in some cases distributed) scheduling algorithms. In general, simple greedy algorithms (known as Greedy Maximal Scheduling, or GMS) are guaranteed to achieve only a fraction of the maximum possible throughput (e.g., 50% throughput in switches). However, it was recently shown that in networks in which the Local Pooling conditions are satisfied, GMS achieves 100% throughput. Moreover, in networks in which the σ-Local Pooling conditions hold, GMS achieves σ% throughput. In this paper, we focus on identifying the specific network topologies that satisfy these conditions. In particular, we provide the first characterization of all the network graphs in which Local Pooling holds under primary interference constraints (in these networks, GMS achieves 100% throughput). This leads to a linear-time algorithm for identifying Local-Pooling-satisfying graphs. Moreover, by using similar graph-theoretical methods, we show that in all bipartite graphs (i.e., input-queued switches) of size up to 7 × n, GMS is guaranteed to achieve 66% throughput, thereby improving upon the previously known 50% lower bound. Finally, we study the performance of GMS in interference graphs and show that in certain specific topologies, its performance could be very bad. Overall, the paper demonstrates that using graph-theoretical techniques can significantly contribute to our understanding of greedy scheduling algorithms.",
"In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS has been well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We show how these results can be applied to tree networks to prove that GMS achieves the full capacity region in tree networks under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.",
"In this paper, we derive new bounds on the throughput efficiency of Greedy Maximal Scheduling (GMS) for wireless networks of arbitrary topology under the general k-hop interference model. These results improve the known bounds for networks with up to 26 nodes under the 2-hop interference model. We also prove that GMS is throughput-optimal in small networks. In particular, we show that GMS achieves 100% throughput in networks with up to eight nodes under the 2-hop interference model. Furthermore, we provide a simple proof to show that GMS can be implemented using only local neighborhood information in networks of any size."
]
} |
1409.0932 | 2201701396 | The study of the optimality of low-complexity greedy scheduling techniques in wireless communications networks is a very complex problem. The Local Pooling (LoP) factor provides a single-parameter means of expressing the achievable capacity region (and optimality) of one such scheme, greedy maximal scheduling (GMS). The exact LoP factor for an arbitrary network graph is generally difficult to obtain, but may be evaluated or bounded based on the network graph’s particular structure. In this paper, we provide rigorous characterizations of the LoP factor in large networks modeled as Erdős–Rényi (ER) and random geometric (RG) graphs under the primary interference model. We employ threshold functions to establish critical values for either the edge probability or communication radius to yield useful bounds on the range and expectation of the LoP factor as the network grows large. For sufficiently dense random graphs, we find that the LoP factor is between 1/2 and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP factor is 1) with high probability. We then place LoP within a larger context of commonly studied random graph properties centered around connectedness. We observe that edge densities permitting connectivity generally admit cycle subgraphs that form the basis for the LoP factor upper bound of 2/3. We conclude with simulations to explore the regime of small networks, which suggest the probability that an ER or RG graph satisfies LoP and is connected decays quickly in network size. | Joo @cite_9 define the worst-case LoP over a class of graphs, and in particular find bounds on the worst-case ( ) for geometric-unit-disk graphs with a (k)-distance interference model. Birand @cite_8 list particular topologies that admit arbitrarily low ( ), and provide upper and lower bounds on ( ) for several classes of interference graphs.
The body of work by Brzezinski @cite_23 @cite_19 @cite_15 brings some attention to multi-hop (routing) definitions for LoP. Brzezinski @cite_1 investigate scheduling on arbitrary graphs by decomposing, or pre-partitioning, the graph topology into multiple 'orthogonal' trees and then applying known LoP results about GMS optimality on trees. Both Joo @cite_30 and Kang @cite_32 also treat the case of multi-hop traffic and LoP conditions. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_19",
"@cite_23",
"@cite_15"
],
"mid": [
"2106717724",
"2039917366",
"1995120351",
"2152964475",
"2078792120",
"",
"2086783464",
""
],
"abstract": [
"Greedy maximal matching (GMM) is an important scheduling scheme for multi-hop wireless networks. It is computationally simple, and has often been numerically shown to achieve throughput that is close to optimal. However, to date the performance limits of GMM have not been well understood. In particular, although a lower bound on its performance has been well known, this bound has been empirically found to be quite loose. In this paper, we focus on the well-established node-exclusive interference model and provide new analytical results that characterize the performance of GMM through a topological notion called the local-pooling factor. We show that for a given network graph with single-hop traffic, the efficiency ratio of GMM (i.e., the worst-case ratio of the throughput of GMM to that of the optimal) is equal to its local-pooling factor. Further, we estimate the local-pooling factor for arbitrary network graphs under the node-exclusive interference model and show that the efficiency ratio of GMM is no smaller than d*/(2d* − 1) in a network topology of maximum node-degree d*. Using these results, we identify specific network topologies for which the efficiency ratio of GMM is strictly less than 1. We also extend the results to the more general scenario with multi-hop traffic, and show that GMM can achieve similar efficiency ratios when a flow-regulator is used at each hop.",
"Efficient operation of wireless networks and switches requires using simple (and in some cases distributed) scheduling algorithms. In general, simple greedy algorithms (known as Greedy Maximal Scheduling, or GMS) are guaranteed to achieve only a fraction of the maximum possible throughput (e.g., 50% throughput in switches). However, it was recently shown that in networks in which the Local Pooling conditions are satisfied, GMS achieves 100% throughput. Moreover, in networks in which the σ-Local Pooling conditions hold, GMS achieves σ% throughput. In this paper, we focus on identifying the specific network topologies that satisfy these conditions. In particular, we provide the first characterization of all the network graphs in which Local Pooling holds under primary interference constraints (in these networks, GMS achieves 100% throughput). This leads to a linear-time algorithm for identifying Local-Pooling-satisfying graphs. Moreover, by using similar graph-theoretical methods, we show that in all bipartite graphs (i.e., input-queued switches) of size up to 7 × n, GMS is guaranteed to achieve 66% throughput, thereby improving upon the previously known 50% lower bound. Finally, we study the performance of GMS in interference graphs and show that in certain specific topologies, its performance could be very bad. Overall, the paper demonstrates that using graph-theoretical techniques can significantly contribute to our understanding of greedy scheduling algorithms.",
"In this paper, we characterize the performance of an important class of scheduling schemes, called greedy maximal scheduling (GMS), for multihop wireless networks. While a lower bound on the throughput performance of GMS has been well known, empirical observations suggest that it is quite loose and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. We show how these results can be applied to tree networks to prove that GMS achieves the full capacity region in tree networks under the K-hop interference model. Then, we show that the worst-case efficiency ratio of GMS in geometric unit-disk graphs is between 1/6 and 1/3.",
"This paper considers the interaction between channel assignment and distributed scheduling in multi-channel multi-radio Wireless Mesh Networks (WMNs). Recently, a number of distributed scheduling algorithms for wireless networks have emerged. Due to their distributed operation, these algorithms can achieve only a fraction of the maximum possible throughput. As an alternative to increasing the throughput fraction by designing new algorithms, we present a novel approach that takes advantage of the inherent multi-radio capability of WMNs. We show that this capability can enable partitioning of the network into subnetworks in which simple distributed scheduling algorithms can achieve 100% throughput. The partitioning is based on the notion of Local Pooling. Using this notion, we characterize topologies in which 100% throughput can be achieved distributedly. These topologies are used in order to develop a number of centralized channel assignment algorithms that are based on a matroid intersection algorithm. These algorithms pre-partition a network in a manner that not only expands the capacity regions of the subnetworks but also allows distributed algorithms to achieve these capacity regions. We evaluate the performance of the algorithms via simulation and show that they significantly increase the distributedly achievable capacity region. We note that while the identified topologies are of general interference graphs, the partitioning algorithms are designed for networks with primary interference constraints.",
"We consider the stability of the longest-queue-first (LQF) scheduling policy in wireless networks with multihop traffic under the one-hop interference model. Although it is well known that the back-pressure algorithm achieves the maximal stability, its computational complexity is prohibitively high. In this paper, we consider LQF, a low-complexity scheduling algorithm, which has been shown to have near-optimal throughput performance in many networks with single-hop traffic flows. We are interested in the performance of LQF for multihop traffic flows. In this scenario, the coupling between queues due to multihop traffic flows makes the local-pooling-factor analysis difficult to perform. Using the fluid-limit techniques, we show that LQF achieves the maximal stability for linear networks with multihop traffic and a single destination on the boundary of the network under the one-hop interference model.",
"",
"A major challenge in the design and operation of wireless networks is to jointly route packets and schedule transmissions to efficiently share the common spectrum among links in the same area. Due to the lack of central control in wireless networks, these algorithms have to be decentralized. It was recently shown that distributed (greedy) algorithms can usually guarantee only fractional throughput. It was also recently shown that if a set of conditions regarding the network topology (known as Local Pooling) is satisfied, simple distributed maximal weight (greedy) scheduling algorithms achieve 100% throughput. In this paper, we focus on networks in which packets have to undergo multihop routing and derive multihop local pooling conditions for that setting. In networks satisfying these conditions, a backpressure-based joint routing and scheduling algorithm employing maximal weight scheduling achieves 100% throughput.",
""
]
} |
1409.0980 | 2033663037 | We present a complete logic for reasoning with functional dependencies (FDs) with semantics defined over classes of commutative integral partially ordered monoids and complete residuated lattices. The dependencies allow us to express stronger relationships between attribute values than the ordinary FDs. In our setting, the dependencies not only express that certain values are determined by others but also express that similar values of attributes imply similar values of other attributes. We show complete axiomatization using a system of Armstrong-like rules, comment on related computational issues, and the relational vs. propositional semantics of the dependencies. A logic for reasoning with similarity-preserving dependencies is proposed. Its semantics is defined using commutative integral partially ordered monoids. The logic is complete and decidable. A closure-like algorithm for non-contracting sets of dependencies is presented. Two kinds of semantics are discussed: a propositional and a relational one. | First, let us note that there exists a vast number of papers on "fuzzy functional dependencies", often with questionable technical quality, which combine (in various ways) the concepts of fuzzy sets and functional dependencies in order to formalize vague dependencies between attributes. While this idea is tempting and close to what we present here, our objection is that most of these papers are purely definitional or just experimental and are not interested in the underlying logic in the narrow sense of it (i.e., in logic as a study of consequence). From one viewpoint this is not surprising since a number of papers in this category predate the beginning of systematic formalization of various types of fuzzy logics which appeared in the late 90's; see @cite_21 as a standard reference and a historical overview.
One of the most influential early approaches that enjoyed interest in the database community is @cite_12 ; further papers dealing with fuzzy functional dependencies and related phenomena include @cite_29 @cite_17 @cite_33 . Since our paper is not a survey, we do not give further details on such approaches and refer interested readers to @cite_39 where they can find further comments. | {
"cite_N": [
"@cite_33",
"@cite_29",
"@cite_21",
"@cite_39",
"@cite_12",
"@cite_17"
],
"mid": [
"2074622901",
"2072960837",
"2110182592",
"2139040790",
"2017978889",
"1991924287"
],
"abstract": [
"This paper deals with relational databases which are extended in the sense that fuzzily known values are allowed for attributes. Precise as well as partial (imprecise, uncertain) knowledge concerning the value of the attributes are represented by means of [0,1]-valued possibility distributions in Zadeh's sense. Thus, we have to manipulate ordinary relations on Cartesian products of sets of fuzzy subsets rather than fuzzy relations. Besides, vague queries whose contents are also represented by possibility distributions can be taken into account. The basic operations of relational algebra, union, intersection, Cartesian product, projection, and selection are extended in order to deal with partial information and vague queries. Approximate equalities and inequalities modeled by fuzzy relations can also be taken into account in the selection operation. Then, the main features of a query language based on the extended relational algebra are presented. An illustrative example is provided. This approach, which enables a very general treatment of relational databases with fuzzy attribute values, makes an extensive use of dual possibility and necessity measures.",
"A structure for representing inexact information in the form of a relational database is presented. The structure differs from ordinary relational databases in two important respects: Components of tuples need not be single values and a similarity relation is required for each domain set of the database. Two critical properties possessed by ordinary relational databases are proven to exist in the fuzzy relational structure. These properties are (1) no two tuples have identical interpretations, and (2) each relational operation has a unique result.",
"Preface. 1. Preliminaries. 2. Many-Valued Propositional Calculi. 3. Lukasiewicz Propositional Logic. 4. Product Logic, Godel Logic. 5. Many-Valued Predicate Logics. 6. Complexity and Undecidability. 7. On Approximate Inference. 8. Generalized Quantifiers and Modalities. 9. Miscellanea. 10. Historical Remarks. References. Index.",
"The article deals with Codd's relational model of data and its fuzzy logic extensions. Our main purpose is to examine, from the point of view of fuzzy logic in the narrow sense, some of the extensions proposed in the literature and the relationships between them. We argue that fuzzy logic in the narrow sense is important for the fuzzy logic extensions because it provides conceptual and methodological foundations, clarity and simplicity. We present several comparative observations as well as new technical results.",
"This paper deals with the application of fuzzy logic in a relational database environment with the objective of capturing more meaning of the data. It is shown that with suitable interpretations for the fuzzy membership functions, a fuzzy relational data model can be used to represent ambiguities in data values as well as impreciseness in the association among them. Relational operators for fuzzy relations have been studied, and applicability of fuzzy logic in capturing integrity constraints has been investigated. By introducing a fuzzy resemblance measure EQUAL for comparing domain values, the definition of classical functional dependency has been generalized to fuzzy functional dependency (ffd). The implication problem of ffds has been examined and a set of sound and complete inference axioms has been proposed. Next, the problem of lossless join decomposition of fuzzy relations for a given set of fuzzy functional dependencies is investigated. It is proved that with a suitable restriction on EQUAL, the design theory of a classical relational database with functional dependencies can be extended to fuzzy relations satisfying fuzzy functional dependencies.",
"The need to incorporate and treat information given in fuzzy terms in Relational Databases has concentrated a great effort in the last years. This article focuses on the treatment of functional dependencies (f.d.) between attributes of a relation scheme. We review other approaches to this problem and present some of its missfunctions concerning intuitive properties a fuzzy extension of f.d. should verify. Then we introduce a fuzzy extension of this concept to overcome the previous anomalous behaviors and study its properties. of primary interest is the completeness of our fuzzy version of Armstrong axioms in order to derive all the fuzzy functional dependencies logically implied by a set of f.f.d. just using these axioms. © 1994 John Wiley & Sons, Inc."
]
} |
1409.0980 | 2033663037 | We present a complete logic for reasoning with functional dependencies (FDs) with semantics defined over classes of commutative integral partially ordered monoids and complete residuated lattices. The dependencies allow us to express stronger relationships between attribute values than the ordinary FDs. In our setting, the dependencies not only express that certain values are determined by others but also express that similar values of attributes imply similar values of other attributes. We show complete axiomatization using a system of Armstrong-like rules, comment on related computational issues, and the relational vs. propositional semantics of the dependencies. A logic for reasoning with similarity-preserving dependencies is proposed. Its semantics is defined using commutative integral partially ordered monoids. The logic is complete and decidable. A closure-like algorithm for non-contracting sets of dependencies is presented. Two kinds of semantics are discussed: a propositional and a relational one. | Note that recently, probabilistic databases @cite_43 aiming at representation and querying of uncertain data are gaining popularity. Our approach is not directly related because it does not involve uncertainty in the probabilistic sense---like in the classic relational model, our data is certain. Also note that the degrees (the elements of integral commutative pomonoids) we use are not and shall not be interpreted as degrees of belief or evidence (even if @math , cf. the "frequentist's temptation" in @cite_21 and also @cite_40 ). | {
"cite_N": [
"@cite_43",
"@cite_40",
"@cite_21"
],
"mid": [
"2093149131",
"2064858680",
"2110182592"
],
"abstract": [
"A wide range of applications have recently emerged that need to manage large, imprecise data sets. The reasons for imprecision in data are as diverse as the applications themselves: in sensor and RFID data, imprecision is due to measurement errors [15, 34]; in information extraction, imprecision comes from the inherent ambiguity in natural-language text [20, 26]; and in business intelligence, imprecision is tolerated because of the high cost of data cleaning [5]. In some applications, such as privacy, it is a requirement that the data be less precise. For example, imprecision is purposely inserted to hide sensitive attributes of individuals so that the data may be published [30]. Imprecise data has no place in traditional, precise database applications like payroll and inventory, and so, current database management systems are not prepared to deal with it. In contrast, the newly emerging applications offer value precisely because they query, search, and aggregate large volumes of imprecise data to find the “diamonds in the dirt”. This wide-variety of new applications points to the need for generic tools to manage imprecise data. In this paper, we survey the state of the art of techniques that handle imprecise data, by modeling it as probabilistic data [2–4,7,12,15,23,27,36]. A probabilistic database management system, or ProbDMS, is a system that stores large volumes of probabilistic data and supports complex queries. A ProbDMS may also need to perform some additional tasks, such as updates or recovery, but these do not differ from those in conventional database management systems and will not be discussed here. The major challenge in a ProbDMS is that it needs both to scale to large data volumes, a core competence of database management systems, and to do probabilistic inference, which is a problem studied in AI. 
While many scalable data management systems exist, probabilistic inference is a hard problem [35], and current systems do not scale to the same extent as data management systems do. To address this challenge, researchers have focused on the specific",
"",
"Preface. 1. Preliminaries. 2. Many-Valued Propositional Calculi. 3. Lukasiewicz Propositional Logic. 4. Product Logic, Godel Logic. 5. Many-Valued Predicate Logics. 6. Complexity and Undecidability. 7. On Approximate Inference. 8. Generalized Quantifiers and Modalities. 9. Miscellanea. 10. Historical Remarks. References. Index."
]
} |
1409.0964 | 1859687398 | This paper aims at constructing a good graph for discovering intrinsic data structures in a semi-supervised learning setting. Firstly, we propose to build a non-negative low-rank and sparse (referred to as NNLRS) graph for the given data representation. Specifically, the weights of edges in the graph are obtained by seeking a nonnegative low-rank and sparse matrix that represents each data sample as a linear combination of others. The so-obtained NNLRS-graph can capture both the global mixture of subspaces structure (by the low rankness) and the locally linear structure (by the sparseness) of the data, hence is both generative and discriminative. Secondly, as good features are extremely important for constructing a good graph, we propose to learn the data embedding matrix and construct the graph jointly within one framework, which is termed as NNLRS with embedded features (referred to as NNLRS-EF). Extensive experiments on three publicly available datasets demonstrate that the proposed method outperforms the state-of-the-art graph construction method by a large margin for both semi-supervised classification and discriminative analysis, which verifies the effectiveness of our proposed method. | Conceptually, a good graph should reveal the intrinsic complexity or dimensionality of data (say through local linear relationship) and also capture certain global structures of data as a whole (i.e., multiple clusters, subspaces, or manifolds). Traditional methods (such as @math -nearest neighbors and Locally Linear Reconstruction @cite_9 ) mainly rely on pair-wise Euclidean distances and construct a graph by a family of overlapped local patches. The so-obtained graph only captures the local structures and cannot capture the global structures of the whole data (i.e. the clusters). Moreover, these methods cannot produce data-adaptive neighborhoods because of using fixed global parameters to determine the graph structure and their weights.
Finally, these methods are sensitive to local data noise and errors. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2141923507"
],
"abstract": [
"Graph based semi-supervised learning (SSL) methods play an increasingly important role in practical machine learning systems. A crucial step in graph based SSL methods is the conversion of data into a weighted graph. However, most of the SSL literature focuses on developing label inference algorithms without extensively studying the graph building method and its effect on performance. This article provides an empirical study of leading semi-supervised methods under a wide range of graph construction algorithms. These SSL inference algorithms include the Local and Global Consistency (LGC) method, the Gaussian Random Field (GRF) method, the Graph Transduction via Alternating Minimization (GTAM) method as well as other techniques. Several approaches for graph construction, sparsification and weighting are explored including the popular k-nearest neighbors method (kNN) and the b-matching method. As opposed to the greedily constructed kNN graph, the b-matched graph ensures each node in the graph has the same number of edges and produces a balanced or regular graph. Experimental results on both artificial data and real benchmark datasets indicate that b-matching produces more robust graphs and therefore provides significantly better prediction accuracy without any significant change in computation time."
]
} |
1409.0035 | 2137724742 | Closeness centrality, first considered by Bavelas (1948), is an importance measure of a node in a network which is based on the distances from the node to all other nodes. The classic definition, proposed by Bavelas (1950), Beauchamp (1965), and Sabidussi (1966), is (the inverse of) the average distance to all other nodes. We propose the first highly scalable (near linear-time processing and linear space overhead) algorithm for estimating, within a small relative error, the classic closeness centralities of all nodes in the graph. Our algorithm applies to undirected graphs, as well as for centrality computed with respect to round-trip distances in directed graphs. For directed graphs, we also propose an efficient algorithm that approximates generalizations of classic closeness centrality to outbound and inbound centralities. Although it does not provide worst-case theoretical approximation guarantees, it is designed to perform well on real networks. We perform extensive experiments on large networks, demonstrating high scalability and accuracy. | Closeness centrality is only one of several common definitions of importance rankings. These include degree centrality, intended to capture activity level, betweenness centrality, which captures power, and eigenvalue centralities, which capture reputation @cite_23 @cite_36 . We only consider the classic definition of closeness centrality. A well-studied alternative is distance-decay closeness centrality, where the contribution of each node to the centrality of another is discounted (is non-increasing) with distance @cite_1 @cite_33 @cite_25 @cite_24 @cite_40 @cite_37 . The subtle difference between distance-decay and classic closeness centrality is that the latter emphasizes the penalties for far nodes, whereas the distance-decay measures instead emphasize the reward from closer nodes. Distance-decay centrality is well defined on disconnected or directed graphs. 
In terms of scalable computation, efficient algorithms with a small relative error guarantee have been known for two decades and have been engineered to handle graphs with billions of edges @cite_20 @cite_33 @cite_35 @cite_51 @cite_27 @cite_28 @cite_44 @cite_37 . These algorithms, however, provide no guarantees for estimating classic closeness centrality. The intuitive reason is that they are based on sampling that is biased towards closer nodes, whereas correctly estimating classic closeness centrality requires accounting for distant nodes, which can be missed by such a sample. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_33",
"@cite_36",
"@cite_28",
"@cite_1",
"@cite_24",
"@cite_44",
"@cite_40",
"@cite_27",
"@cite_23",
"@cite_51",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2090914728",
"2126356486",
"2061901927",
"1839664781",
"2048095866",
"2134784378",
"2111023939",
"2070472314",
"2151095918",
"2056944867",
"1526020079",
"2112615110",
"1965996575"
],
"abstract": [
"",
"Graph datasets with billions of edges, such as social and Web graphs, are prevalent. To be feasible, computation on such large graphs should scale linearly with graph size. All-distances sketches (ADSs) are emerging as a powerful tool for scalable computation of some basic properties of individual nodes or the whole graph. ADSs were first proposed two decades ago (Cohen 1994) and more recent algorithms include ANF (Palmer, Gibbons, and Faloutsos 2002) and hyperANF (Boldi, Rosa, and Vigna 2011). A sketch of logarithmic size is computed for each node in the graph and the computation in total requires only a near linear number of edge relaxations. From the ADS of a node, we can estimate neighborhood cardinalities (the number of nodes within some query distance) and closeness centrality. More generally we can estimate the distance distribution, effective diameter, similarities, and other parameters of the full graph. We make several contributions which facilitate a more effective use of ADSs for scalable analysis of massive graphs. We provide, for the first time, a unified exposition of ADS algorithms and applications. We present the Historic Inverse Probability (HIP) estimators which are applied to the ADS of a node to estimate a large natural class of queries including neighborhood cardinalities and closeness centralities. We show that our HIP estimators have at most half the variance of previous neighborhood cardinality estimators and that this is essentially optimal. Moreover, HIP obtains a polynomial improvement over state of the art for more general domain queries and the estimators are simple, flexible, unbiased, and elegant. The ADS generalizes Min-Hash sketches, used for approximating cardinality (distinct count) on data streams. We obtain lower bounds on Min-Hash cardinality estimation using classic estimation theory. 
We illustrate the power of HIP, both in terms of ease of application and estimation quality, by comparing it to the HyperLogLog algorithm ( 2007), demonstrating a significant improvement over this state-of-the-art practical algorithm. We also study the quality of ADS estimation of distance ranges, generalizing the near-linear time factor-2 approximation of the diameter.",
"Data items are often associated with a location in which they are present or collected, and their relevance or influence decays with their distance. Aggregate values over such data thus depend on the observing location, where the weight given to each item depends on its distance from that location. We term such aggregation spatially-decaying. Spatially-decaying aggregation has numerous applications: Individual sensor nodes collect readings of an environmental parameter such as contamination level or parking spot availability; the nodes then communicate to integrate their readings so that each location obtains contamination level or parking availability in its neighborhood. Nodes in a p2p network could use a summary of content and properties of nodes in their neighborhood in order to guide search. In graphical databases such as Web hyperlink structure, properties such as subject of pages that can reach or be reached from a page using link traversals provide information on the page. We formalize the notion of spatially-decaying aggregation and develop efficient algorithms for fundamental aggregation functions, including sums and averages, random sampling, heavy hitters, quantiles, and L_p norms.",
"Part I. Introduction: Networks, Relations, and Structure: 1. Relations and networks in the social and behavioral sciences 2. Social network data: collection and application Part II. Mathematical Representations of Social Networks: 3. Notation 4. Graphs and matrices Part III. Structural and Locational Properties: 5. Centrality, prestige, and related actor and group measures 6. Structural balance, clusterability, and transitivity 7. Cohesive subgroups 8. Affiliations, co-memberships, and overlapping subgroups Part IV. Roles and Positions: 9. Structural equivalence 10. Blockmodels 11. Relational algebras 12. Network positions and roles Part V. Dyadic and Triadic Methods: 13. Dyads 14. Triads Part VI. Statistical Dyadic Interaction Models: 15. Statistical analysis of single relational networks 16. Stochastic blockmodels and goodness-of-fit indices Part VII. Epilogue: 17. Future directions.",
"Given a social network, which of its nodes have a stronger impact in determining its structure? More formally: which node-removal order has the greatest impact on the network structure? We approach this well-known problem for the first time in a setting that combines both web graphs and social networks, using datasets that are orders of magnitude larger than those appearing in the previous literature, thanks to some recently developed algorithms and software tools that make it possible to approximate accurately the number of reachable pairs and the distribution of distances in a graph. Our experiments highlight deep differences in the structure of social networks and web graphs, show significant limitations of previous experimental results, and at the same time reveal clustering by label propagation as a new and very effective way of locating nodes that are important from a structural viewpoint.",
"A new characteristic (residual closeness) which can measure the network resistance is presented. It evaluates closeness after removal of vertices or links, hence two types are considered—vertices and links residual closeness. This characteristic is more sensitive than the well-known measures of vulnerability—it captures the result of actions even if they are small enough not to disconnect the graph. A definition for closeness is modified so it still can be used for unconnected graphs but the calculations are easier.",
"AbstractGiven a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology, and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. In this study, we try to provide a mathematically sound survey of the most important classic centrality measures known from the literature and propose an axiomatic approach to establish whether they are actually doing what they have been designed to do. Our axioms suggest some simple, basic properties that a centrality measure should exhibit.Surprisingly, only a new simple measure based on distances, harmonic centrality, turns out to satisfy all axioms; essentially, harmonic centrality is a correction to Bavelas’s classic closeness centrality [Bavelas 50] designed to take unreachable nodes into account in a natural way.As a sanity check, we examine in turn each measure under the lens of information...",
"Frigyes Karinthy, in his 1929 short story \"Lancszemek\" (in English, \"Chains\") suggested that any two persons are distanced by at most six friendship links. Stanley Milgram in his famous experiments challenged people to route postcards to a fixed recipient by passing them only through direct acquaintances. Milgram found that the average number of intermediaries on the path of the postcards lay between 4.4 and 5.7, depending on the sample of people chosen. We report the results of the first world-scale social-network graph-distance computations, using the entire Facebook network of active users (≈ 721 million users, ≈ 69 billion friendship links). The average distance we observe is 4.74, corresponding to 3.74 intermediaries or \"degrees of separation\", prompting the title of this paper. More generally, we study the distance distribution of Facebook and of some interesting geographic subgraphs, looking also at their evolution over time. The networks we are able to explore are almost two orders of magnitude larger than those analysed in the previous literature. We report detailed statistical metadata showing that our measurements (which rely on probabilistic algorithms) are very accurate.",
"Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. In this paper, we approach the problem of computing geometric centralities, such as closeness [1] and harmonic centrality [2], on very large graphs; traditionally this task requires an all-pairs shortest-path computation in the exact case, or a number of breadth-first traversals for approximated computations, but these techniques yield very weak statistical guarantees on highly disconnected graphs. We rather assume that the graph is accessed in a semi-streaming fashion, that is, that adjacency lists are scanned almost sequentially, and that a very small amount of memory (in the order of a dozen bytes) per node is available in core memory. We leverage the newly discovered algorithms based on HyperLogLog counters [3], making it possible to approximate a number of geometric centralities at a very high speed and with high accuracy. While the application of similar algorithms for the approximation of closeness was attempted in the MapReduce [4] framework [5], our exploitation of HyperLogLog counters reduces exponentially the memory footprint, paving the way for in-core processing of networks with a hundred billion nodes using \"just\" 2TiB of RAM. Moreover, the computations we describe are inherently parallelizable, and scale linearly with the number of available cores.",
"The neighbourhood function NG(t) of a graph G gives, for each t ∈ N, the number of pairs of nodes x, y such that y is reachable from x in less than t hops. The neighbourhood function provides a wealth of information about the graph [10] (e.g., it easily allows one to compute its diameter), but it is very expensive to compute it exactly. Recently, the ANF algorithm [10] (approximate neighbourhood function) has been proposed with the purpose of approximating NG(t) on large graphs. We describe a breakthrough improvement over ANF in terms of speed and scalability. Our algorithm, called HyperANF, uses the new HyperLogLog counters [5] and combines them efficiently through broadword programming [8]; our implementation uses task decomposition to exploit multi-core parallelism. With HyperANF, for the first time we can compute in a few hours the neighbourhood function of graphs with billions of nodes with a small error and good confidence using a standard workstation. Then, we turn to the study of the distribution of the distances between reachable nodes (that can be efficiently approximated by means of HyperANF), and discover the surprising fact that its index of dispersion provides a clear-cut characterisation of proper social networks vs. web graphs. We thus propose the spid (Shortest-Paths Index of Dispersion) of a graph as a new, informative statistic that is able to discriminate between the above two types of graphs. We believe this is the first proposal of a significant new non-local structural index for complex networks whose computation is highly scalable.",
"Abstract The intuitive background for measures of structural centrality in social networks is reviewed and existing measures are evaluated in terms of their consistency with intuitions and their interpretability. Three distinct intuitive conceptions of centrality are uncovered and existing measures are refined to embody these conceptions. Three measures are developed for each concept, one absolute and one relative measure of the centrality of positions in a network, and one reflecting the degree of centralization of the entire network. The implications of these measures for the experimental study of small groups is examined.",
"The distance for a pair of vertices in a graph G is the length of the shortest path between them. The distance distribution for G specifies how many vertex pairs are at distance h, for all feasible values h. We study three fast randomized algorithms to approximate the distance distribution in large graphs. The Eppstein-Wang (EW) algorithm exploits sampling through a limited (logarithmic) number of Breadth-First Searches (BFSes). The Size-Estimation Framework (SEF) by Cohen employs random ranking and least-element lists to provide several estimators. Finally, the Approximate Neighborhood Function (ANF) algorithm by Palmer, Gibbons, and Faloutsos makes use of the probabilistic counting technique introduced by Flajolet and Martin, in order to estimate the number of distinct elements in a large multiset. We investigate how good the approximation of the distance distribution is when the three algorithms are run in similar settings. The analysis of ANF derives from the results on the probabilistic counting method, while that of SEF is given by Cohen. For what concerns EW (originally designed for another problem), we extend its simple analysis in order to bound its error with high probability and to show its convergence. We then perform an experimental study on 30 real-world graphs, showing that our implementation of EW combines the accuracy of SEF with the performance of ANF.",
"Abstract Ties often have a strength naturally associated with them that differentiate them from each other. Tie strength has been operationalized as weights. A few network measures have been proposed for weighted networks, including three common measures of node centrality: degree, closeness, and betweenness. However, these generalizations have solely focused on tie weights, and not on the number of ties, which was the central component of the original measures. This paper proposes generalizations that combine both these aspects. We illustrate the benefits of this approach by applying one of them to Freeman’s EIES dataset.",
"Computing the transitive closure in directed graphs is a fundamental graph problem. We consider the more restricted problem of computing the number of nodes reachable from every node and the size of the transitive closure. The fastest known transitive closure algorithms run in O(min{mn, n^2.38}) time, where n is the number of nodes and m the number of edges in the graph. We present an O(m) time randomized (Monte Carlo) algorithm that estimates, with small relative error, the sizes of all reachability sets and the transitive closure. Another ramification of our estimation scheme is an O(m) time algorithm for estimating sizes of neighborhoods in directed graphs with nonnegative edge lengths. Our size-estimation algorithms are much faster than performing the respective explicit computations."
]
} |
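The classic closeness centrality discussed in the row above is the inverse of the average shortest-path distance from a node to all other nodes. A minimal exact sketch via BFS on a small unweighted, connected graph (an illustrative toy, not the paper's near-linear-time estimator):

```python
from collections import deque

def closeness(adj):
    """Classic (Bavelas/Sabidussi) closeness centrality.

    For each node s, runs a breadth-first search to get distances to
    all other nodes, then returns (n - 1) / sum of distances, i.e. the
    inverse of the average distance. Assumes an unweighted, connected,
    undirected graph given as {node: set(neighbors)}.
    """
    n = len(adj)
    scores = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:  # BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        scores[s] = (n - 1) / sum(dist.values())  # 1 / average distance
    return scores

# path graph 0-1-2-3: inner nodes are closer to everything
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
c = closeness(path)
print(c[1] > c[0])  # True: an inner node is more central than an endpoint
```

The exact computation above costs one BFS per node; the scalability challenge the paper addresses is doing this approximately in near-linear total time.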
1409.0085 | 2289201555 | Localization is one of the most important factors in wireless sensor networks as many applications demand position information of sensors. Recently there is an increasing interest in the use of mobile anchors for localizing sensors. Most of the works available in the literature either look into the aspect of reducing the path length of the mobile anchor or try to increase localization accuracy. The challenge is to design a movement strategy for a mobile anchor that reduces path length while meeting the requirements of a good range-free localization technique. In this paper we propose two cost-effective movement strategies, i.e., path plans for a mobile anchor so that localization can be done using the localization scheme of [10]. In one strategy we use a hexagonal movement pattern for the mobile anchor to localize all sensors inside a bounded rectangular region with less movement compared to the existing works in the literature. In the other strategy we consider a connected network in an unbounded region where the mobile anchor moves in the hexagonal pattern to localize the sensors. In this approach, we guarantee localization of all sensors within an r/2 error bound, where r is the communication range of the mobile anchor and sensors. Our simulation results support the theoretical results along with localization accuracy. | Path planning algorithms set the path along which the mobile anchor moves through the network while the localization process goes on. First we give a brief overview of the existing range-free localization schemes that provide good accuracy and can be used for localization. A localization scheme was proposed in @cite_13 in which the sensor's position is estimated as the intersection of the perpendicular bisectors of two calculated chords. However, this scheme suffers from the short-chord-length problem.
@cite_17 improved on that scheme by using pre-arrival and post-departure points along with the beacon points to localize a sensor. Later work used the beacon distance more effectively as another geometric constraint and proposed a more accurate localization scheme in @cite_7 . | {
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_17"
],
"mid": [
"2113411424",
"2140205575",
"2143565584"
],
"abstract": [
"Localization is one of the substantial issues in wireless sensor networks. Several approaches, including range-based and range-free, have been proposed to calculate positions for randomly deployed sensor nodes. With specific hardware, the range-based schemes typically achieve high accuracy based on either node-to-node distances or angles. On the other hand, the range-free mechanisms support coarse positioning accuracy with the less expense. This paper describes a range-free localization scheme using mobile anchor points. Each anchor point equipped with the GPS moves in the sensing field and broadcasts its current position periodically. The sensor nodes obtaining the information are able to compute their locations. With the scheme, no extra hardware or data communication is needed for the sensor nodes. Moreover, obstacles in the sensing fields can be tolerated. The localization mechanism has been implemented in the network simulator ns-2. The simulation results show that our scheme performed better than other range-free mechanisms.",
"Localization schemes using a mobile beacon have similar effects as the use of many static beacons in terms of improving localization accuracy. Specifically, the localization scheme with mobile beacons proposed by has fine-grained accuracy, scalability, and power efficiency without requiring measured distance or angle information. However, this scheme often has large location errors in ill-conditioned cases. To improve the localization accuracy in Ssu's scheme, this letter proposes a localization scheme that estimates sensor location from possible areas by using geometric constraints. During simulations, the proposed scheme was shown to provide higher localization accuracy than Ssu's scheme and other schemes using a mobile beacon.",
"The localization of sensor nodes is a fundamental problem in sensor networks and can be implemented using powerful and expensive beacons. Beacons, the fewer the better, can acquire their position knowledge either from GPS devices or by virtue of being manually placed. In this paper, we propose a distributed method to localization of sensor nodes using a single moving beacon, where sensor nodes compute their position estimate based on the range-free technique. Two parameters are critical to the location accuracy of sensor nodes: the radio transmission range of the beacon and how often the beacon broadcasts its position. Theoretical analysis shows that these two parameters determine the upper bound of the estimation error when the traverse route of the beacon is a straight line. We extend the position estimate when the traverse route of the beacon is randomly chosen in a real-world situation, where the radio irregularity might cause a node to miss some crucial coordinate information from the beacon. We further point out that the movement pattern of the beacon plays a pivotal role in the localization task for sensors. To minimize estimation errors, sensor nodes can carry out a variety of algorithms in accordance with the movement of the beacon. Simulation results compare variants of the distributed method in a variety of testing environments. Real experiments show that the proposed method is feasible and can estimate the location of sensor nodes accurately, given a single moving beacon."
]
} |
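The @cite_13 scheme in the row above estimates a sensor's position as the intersection of the perpendicular bisectors of two chords through beacon points. Geometrically this is the circumcenter of three beacon points lying on the sensor's communication circle. A hedged sketch with invented coordinates (not the authors' implementation):

```python
def circumcenter(p1, p2, p3):
    """Intersect the perpendicular bisectors of chords p1-p2 and p2-p3.

    Each bisector satisfies (x2-x1)x + (y2-y1)y = (x2^2-x1^2+y2^2-y1^2)/2,
    i.e. the locus of points equidistant from the chord endpoints; the
    sensor (circle center) lies on both bisectors.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = x2 - x1, y2 - y1
    c1 = (x2**2 - x1**2 + y2**2 - y1**2) / 2.0
    a2, b2 = x3 - x2, y3 - y2
    c2 = (x3**2 - x2**2 + y3**2 - y2**2) / 2.0
    det = a1 * b2 - a2 * b1  # zero iff the beacon points are collinear
    if abs(det) < 1e-12:
        raise ValueError("beacon points are collinear: degenerate chords")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# beacon points on a circle of radius 5 around the true sensor at (2, 3)
print(circumcenter((7, 3), (2, 8), (-3, 3)))  # -> (2.0, 3.0)
```

When the chords are short, `det` is close to zero and the estimate becomes noisy, which is one way to read the short-chord-length problem mentioned in the related work.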
1409.0348 | 1973891872 | In this paper, we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating knowledge domain visualizations. First, we investigate the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we use co-readership patterns to map the field of educational technology. The resulting visualization prototype, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent. | Traditionally, knowledge domain visualizations are based on citations. @cite_14 and @cite_10 proposed co-citation as a measure of subject similarity and co-occurrence of ideas (see Figure , left side, for a graphical representation of the relationship). This relationship can be employed to cluster documents, authors, or journals from a certain field and to map them in a two-dimensional space. Co-citation analysis has been used to map many fields, for instance information management [p. 48] Schlogl2001 , hypertext [] Chen1999 , and educational technology [] Chen2011 to name just a few. Furthermore, co-citation analysis has also been used to map out all of science [] Small1999, Boyack2005 . | {
"cite_N": [
"@cite_14",
"@cite_10"
],
"mid": [
"2005207065",
"2076915037"
],
"abstract": [
"A new form of document coupling called co-citation is defined as the frequency with which two documents are cited together. The co-citation frequency of two scientific papers can be determined by comparing lists of citing documents in the Science Citation Index and counting identical entries. Networks of co-cited papers can be generated for specific scientific specialties, and an example is drawn from the literature of particle physics. Co-citation patterns are found to differ significantly from bibliographic coupling patterns, but to agree generally with patterns of direct citation. Clusters of co-cited papers provide a new way to study the specialty structure of science. They may provide a new approach to indexing and to the creation of SDI profiles.",
"Mendeley's crowd-sourced catalogue of research papers forms the basis of features such as the ability to search for papers, finding papers related to one currently being viewed and personalised recommendations. In order to generate this catalogue it is necessary to deduplicate the records uploaded from users' libraries and imported from external sources such as PubMed and arXiv. This task has been achieved at Mendeley via an automated system. However the quality of the deduplication needs to be improved. \"Ground truth\" data sets are thus needed for evaluating the system's performance but existing datasets are very small. In this paper, the problem of generating large scale data sets from Mendeley's database is tackled. An approach based purely on random sampling produced very easy data sets so approaches that focus on more difficult examples were explored. We found that selecting duplicates and non duplicates from documents with similar titles produced more challenging datasets. Additionally we established that a Solr-based deduplication system can achieve a similar deduplication quality to the fingerprint-based system currently employed. Finally, we introduce a large scale deduplication ground truth dataset that we hope will be useful to others tackling deduplication."
]
} |
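Co-citation as defined in the first abstract of the row above, the frequency with which two documents are cited together, reduces to counting unordered pairs over reference lists. An illustrative sketch with invented document labels:

```python
from itertools import combinations
from collections import Counter

def co_citation_counts(citing_lists):
    """Co-citation frequency: for each citing paper's reference list,
    count every unordered pair of cited documents that appears together.
    High counts are taken as a signal of subject similarity."""
    pairs = Counter()
    for refs in citing_lists:
        # sort so each pair has a canonical (a, b) ordering
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# three citing papers and the documents they reference (toy data)
corpus = [["A", "B", "C"], ["A", "B"], ["B", "C"]]
cc = co_citation_counts(corpus)
print(cc[("A", "B")], cc[("B", "C")])  # A-B and B-C are each co-cited twice
```

The resulting pair counts form the similarity matrix that co-citation mapping studies cluster and project into two dimensions.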
1409.0348 | 1973891872 | In this paper, we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating knowledge domain visualizations. First, we investigate the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we use co-readership patterns to map the field of educational technology. The resulting visualization prototype, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent. | Bibliographic coupling is based on outgoing citations available at the time of publication and can therefore be used to map the research front. One difference between bibliographic coupling and co-citation analysis is that the former is a retrospective method [] Garfield2001 , which means that the relationship between two documents cannot change over time. For an overview of the properties and the accuracy of the two citation-based mapping techniques refer to [chap. III.4] Egghe1990 and @cite_13 . | {
"cite_N": [
"@cite_13"
],
"mid": [
"2017562693"
],
"abstract": [
"In the past several years studies have started to appear comparing the accuracies of various science mapping approaches. These studies primarily compare the cluster solutions resulting from different similarity approaches, and give varying results. In this study we compare the accuracies of cluster solutions of a large corpus of 2,153,769 recent articles from the biomedical literature (2004–2008) using four similarity approaches: co-citation analysis, bibliographic coupling, direct citation, and a bibliographic coupling-based citation-text hybrid approach. Each of the four approaches can be considered a way to represent the research front in biomedicine, and each is able to successfully cluster over 92% of the corpus. Accuracies are compared using two metrics—within-cluster textual coherence as defined by the Jensen-Shannon divergence, and a concentration measure based on the grant-to-article linkages indexed in MEDLINE. Of the three pure citation-based approaches, bibliographic coupling slightly outperforms co-citation analysis using both accuracy measures; direct citation is the least accurate mapping approach by far. The hybrid approach improves upon the bibliographic coupling results in all respects. We consider the results of this study to be robust given the very large size of the corpus, and the specificity of the accuracy measures used. © 2010 Wiley Periodicals, Inc."
]
} |
1409.0348 | 1973891872 | In this paper, we analyze the adequacy and applicability of readership statistics recorded in social reference management systems for creating knowledge domain visualizations. First, we investigate the distribution of subject areas in user libraries of educational technology researchers on Mendeley. The results show that around 69% of the publications in an average user library can be attributed to a single subject area. Then, we use co-readership patterns to map the field of educational technology. The resulting visualization prototype, based on the most read publications in this field on Mendeley, reveals 13 topic areas of educational technology research. The visualization is a recent representation of the field: 80% of the publications included were published within ten years of data collection. The characteristics of the readers, however, introduce certain biases to the visualization. Knowledge domain visualizations based on readership statistics are therefore multifaceted and timely, but it is important that the characteristics of the underlying sample are made transparent. | In contrast to citations, usage statistics have been almost exclusively used in evaluative scientometrics [see e.g.][] Darmoni2002, Bollen2007, Schloegl2010 . There are only a handful of examples in relational scientometrics and knowledge domain visualization. Among the first are @cite_12 , who propose to use co-occurrences of document requests for clustering and mapping. @cite_17 use consecutive accesses to journal articles as a measure of journal relationships. They derive clusters of journals which are statistically significantly related to ISI subject categories. In a later study, @cite_18 create an overview map of all of science. The authors collect hundreds of millions of user interactions with digital libraries and bibliographic databases. Then, they re-create click-streams for each user, aggregated by journal, and apply network analysis to them.
Among the challenges of the approach, the authors note that clickstreams need to be aggregated from various data sources, and that the varying user interfaces and the differences between the reader and author populations may introduce biases to the visualization [] Bollen2008b . | {
"cite_N": [
"@cite_18",
"@cite_12",
"@cite_17"
],
"mid": [
"2095972207",
"2054991574",
"2056530653"
],
"abstract": [
"Background: Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. Methodology: Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute’s Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. Conclusions: Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data.",
"We present a new kind of statistical analysis of science and technical information (STI) in the Web context. We propose a battery of indicators about Web users, used bibliographic records and e-commercial transactions. In addition, we introduce two Web usage factors and we give an overview of the co-usage analysis. For these tasks, we present a computer-based system, called Miri@d, which produces descriptive statistical information about Web users' searching behaviour, and what is effectively used from a free-access digital bibliographical database.",
"Science has traditionally been mapped on the basis of authorship and citation data. Due to publication and citation delays such data represents the structure of science as it existed in the past. We propose to map science by proxy of journal relationships derived from usage data to determine research trends as they presently occur. This mapping is performed by applying a principal components analysis superimposed with a k-means cluster analysis on networks of journal relationships derived from a large set of article usage data collected for the Los Alamos National Laboratory research community. Results indicate that meaningful maps of the interests of a local scientific community can be derived from usage data. Subject groupings in the mappings corresponds to Thomson ISI subject categories. A comparison to maps resulting from the analysis of 2003 Thomson ISI Journal Citation Report data reveals interesting differences between the features of local usage and global citation data."
]
} |
1408.6891 | 2044343734 | The variety of existing cloud services creates a challenge for service providers to enforce reasonable Software Level Agreements (SLA) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties at the same time that the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS through a software that can describe and manage various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation, showcasing the potential of SDC in two use cases-QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement-and discuss the research challenges and opportunities in this emerging area. | Recent research on system virtualization has focused on optimizing the technology for cloud data centers, either to improve its security @cite_6 or to provide scalable management systems for the VMs in the data center @cite_43 . Network virtualization has been extensively studied as a way to augment the standard network technology stack, which is hard to modify @cite_2 . Chowdhury and Boutaba @cite_2 present an extensive survey of the area. More recently, a survey by Jain and Paul @cite_41 focused on the challenges of network virtualization and SDNs in the specific context of cloud computing. @cite_0 presented a system enabling network virtualization in multi-tenant data centers such as cloud data centers.
The technology is based on the concept of and it provides one of the possible building blocks for our proposed architecture. | {
"cite_N": [
"@cite_41",
"@cite_6",
"@cite_0",
"@cite_43",
"@cite_2"
],
"mid": [
"",
"2043501224",
"2147802358",
"2140919237",
"2060898162"
],
"abstract": [
"",
"Multi-tenant cloud, which usually leases resources in the form of virtual machines, has been commercially available for years. Unfortunately, with the adoption of commodity virtualized infrastructures, software stacks in typical multi-tenant clouds are non-trivially large and complex, and thus are prone to compromise or abuse from adversaries including the cloud operators, which may lead to leakage of security-sensitive data. In this paper, we propose a transparent, backward-compatible approach that protects the privacy and integrity of customers' virtual machines on commodity virtualized infrastructures, even facing a total compromise of the virtual machine monitor (VMM) and the management VM. The key of our approach is the separation of the resource management from security protection in the virtualization layer. A tiny security monitor is introduced underneath the commodity VMM using nested virtualization and provides protection to the hosted VMs. As a result, our approach allows virtualization software (e.g., VMM, management VM and tools) to handle complex tasks of managing leased VMs for the cloud, without breaking security of users' data inside the VMs. We have implemented a prototype by leveraging commercially-available hardware support for virtualization. The prototype system, called CloudVisor, comprises only 5.5K LOCs and supports the Xen VMM with multiple Linux and Windows as the guest OSes. Performance evaluation shows that CloudVisor incurs moderate slow-down for I O intensive applications and very small slowdown for other applications.",
"Multi-tenant datacenters represent an extremely challenging networking environment. Tenants want the ability to migrate unmodified workloads from their enterprise networks to service provider datacenters, retaining the same networking configurations of their home network. The service providers must meet these needs without operator intervention while preserving their own operational flexibility and efficiency. Traditional networking approaches have failed to meet these tenant and provider requirements. Responding to this need, we present the design and implementation of a network virtualization solution for multi-tenant datacenters.",
"Cloud computing systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces similar in spirit to existing grid and HPC resource management and programming systems. These types of systems offer a new programming target for scalable application developers and have gained popularity over the past few years. However, most cloud computing systems in operation today are proprietary, rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. In this work, we present Eucalyptus -- an open-source software framework for cloud computing that implements what is commonly referred to as Infrastructure as a Service (IaaS); systems that give users the ability to run and control entire virtual machine instances deployed across a variety physical resources. We outline the basic principles of the Eucalyptus design, detail important operational aspects of the system, and discuss architectural trade-offs that we have made in order to allow Eucalyptus to be portable, modular and simple to use on infrastructure commonly found within academic settings. Finally, we provide evidence that Eucalyptus enables users familiar with existing Grid and HPC systems to explore new cloud computing functionality while maintaining access to existing, familiar application development software and Grid middle-ware.",
"Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area."
]
} |
1408.6891 | 2044343734 | The variety of existing cloud services creates a challenge for service providers to enforce reasonable Software Level Agreements (SLA) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties at the same time that the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS through a software that can describe and manage various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation, showcasing the potential of SDC in two use cases-QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement-and discuss the research challenges and opportunities in this emerging area. | Regardless of the specific approach used to realize virtual networking in a cloud data center (network hypervisors or SDNs), the problem of mapping computing and network elements to physical resources, as well as mapping virtual links onto physical paths, needs to be addressed. A survey on this problem, commonly known as virtual network embedding (VNE), has been presented by Fischer et al. @cite_33 . | {
"cite_N": [
"@cite_33"
],
"mid": [
"2132238781"
],
"abstract": [
"Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as \"Virtual Network Embedding (VNE)\" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed."
]
} |
1408.6891 | 2044343734 | The variety of existing cloud services creates a challenge for service providers to enforce reasonable Software Level Agreements (SLA) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties at the same time that the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS through a software that can describe and manage various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation, showcasing the potential of SDC in two use cases-QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement-and discuss the research challenges and opportunities in this emerging area. | Mobile cloud computing is an emerging research area. Honeybee @cite_4 is a framework that enables mobile devices to offload tasks, utilize resources from other devices, and perform human-aided computations. Huerta-Canepa @cite_5 proposed an architecture to offload computation to nearby devices using P2P techniques. Flores and Srirama @cite_50 proposed an approach for mobile cloud computing based on a middleware component between the devices and the cloud. @cite_48 proposed an approach to offload parts of a computation to the cloud in order to reduce energy consumption on the device. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_50",
"@cite_48"
],
"mid": [
"2073777289",
"",
"1986936489",
"2023380813"
],
"abstract": [
"A mobile device like a smart phone is becoming one of main information processing devices for users these days. Using it, a user not only receives and makes calls, but also performs information tasks. However, a mobile device is still resource constrained, and some applications, especially work related ones, usually demand more resources than a mobile device can afford. To alleviate this, a mobile device should get resources from an external source. One of such sources is cloud computing platforms. Nevertheless an access to these platforms is not always guaranteed to be available and or is too expensive to access them. We envision a way to overcome this issue by creating a virtual cloud computing platform using mobile phones. We argue that due to the pervasiveness of mobile phones and the enhancement in their capabilities this idea is feasible. We show prior evaluation results to support our concept and discuss future developments.",
"",
"Abstract Mobile Cloud Computing (MCC) is arising as a prominent research area that is seeking to bring the massive advantages of the cloud to the constrained smartphones. Mobile devices are looking towards cloud-aware techniques, driven by their growing interest to provide ubiquitous PC-like functionality to mobile users. These functionalities mainly target at increasing storage and computational capabilities. Smartphones may integrate those functionalities from different cloud levels, in a service oriented manner within the mobile applications, so that a mobile task can be delegated by direct invocation of a service. However, developing these kind of mobile cloud applications requires to integrate and consider multiple aspects of the clouds, such as resource-intensive processing, programmatically provisioning of resources (Web APIs) and cloud intercommunication. To overcome these issues, we have developed a Mobile Cloud Middleware (MCM) framework, which addresses the issues of interoperability across multiple clouds, asynchronous delegation of mobile tasks and dynamic allocation of cloud infrastructure. MCM also fosters the integration and orchestration of mobile tasks delegated with minimal data transfer. A prototype of MCM is developed and several applications are demonstrated in different domains. To verify the scalability of MCM, load tests are also performed on the hybrid cloud resources. The detailed performance analysis of the middleware framework shows that MCM improves the quality of service for mobiles and helps in maintaining soft-real time responses for mobile cloud applications.",
"Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices. At the same time, such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds. This paper presents the design and implementation of CloneCloud, a system that automatically transforms mobile applications to benefit from the cloud. The system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud. CloneCloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment. At runtime, the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud, executing there for the remainder of the partition, and re-integrating the migrated thread back to the mobile device. Our evaluation shows that CloneCloud can adapt application partitioning to different environments, and can help some applications achieve as much as a 20x execution speed-up and a 20-fold decrease of energy spent on the mobile device."
]
} |
1408.6891 | 2044343734 | The variety of existing cloud services creates a challenge for service providers to enforce reasonable Software Level Agreements (SLA) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties at the same time that the infrastructure operates with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure is needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS through a software that can describe and manage various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation, showcasing the potential of SDC in two use cases-QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement-and discuss the research challenges and opportunities in this emerging area. | In relation to the problem of energy-efficient cloud computing, recent research has investigated the use of VM migration and consolidation toward this goal @cite_28 @cite_26 @cite_36 . These approaches disregard the applications running on the VMs, and thus do not consider the impact of consolidation and migration on the performance of particular applications inside the VMs. @cite_23 developed a white-box approach targeting HPC applications, where application characteristics are inferred at runtime and energy-saving measures are applied based on those characteristics. | {
"cite_N": [
"@cite_28",
"@cite_26",
"@cite_36",
"@cite_23"
],
"mid": [
"2068679048",
"2283588730",
"2110374615",
"2066655679"
],
"abstract": [
"Server consolidation is important for better resource utilization and efficient energy saving for cloud datacenters which host thousands of virtual machines (VMs) to support multitenant applications. The typical approach is to migrate VMs and reallocate workload among different servers in a way to minimize the total number of servers used. However, most existing works for server consolidation focus mainly on how to reduce the number of active servers and do not account for the migration overhead incurred to the applications on the migrating VMs (such as downtime). In this paper, we propose an adaptive mechanism to schedule VM allocation in cloud datacenters. Our solution takes into account the resource utilization and migration overheads, and adaptively allocates each VM to servers based on the estimated saturation level. As a result, the quality and overhead of consolidation is balanced and the total cost is minimized. The simulation results show that our mechanism could increase the average utilization on servers by up to 97 while reducing the total migration cost by about 60 , as compared with existing solutions.",
"The aim of Green Cloud Computing is to achieve a balance between the resource consumption and quality of service. In order to achieve this objective and to maintain the flexibility of the cloud, dynamic provisioning and allocation strategies are needed to regulate the internal settings of the cloud to address oscillatory peaks of workload. In this context, we propose strategies to optimize the use of the cloud resources without decreasing the availability. This work introduces two hybrid strategies based on a distributed system management model, describes the base strategies, operation principles, tests, and presents the results. We combine existing strategies to search their benefits. To test them, we extended CloudSim to simulate the organization model upon which we were based and to implement the strategies, using this improved version to validate our solution. Achieving a consumption reduction up to 87 comparing Standard Clouds with Green Clouds, and up to 52 comparing the proposed strategy with other Green Cloud Strategy.",
"The rapid growth in demand for computational power driven by modern service applications combined with the shift to the Cloud computing model have led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to the sleep mode allows Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers leads to the necessity in dealing with the energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Because of the variability of workloads experienced by modern applications, the VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct a competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data from the resource usage by VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the service level agreement. We validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs. Copyright © 2011 John Wiley & Sons, Ltd.",
"The rising computing demands of scientific endeavors often require the creation and management of High Performance Computing (HPC) systems for running experiments and processing vast amounts of data. These HPC systems generally operate at peak performance, consuming a large quantity of electricity, even though their workload varies over time. Understanding the behavioral patterns (i.e., phases) of HPC systems during their use is key to adjust performance to resource demand and hence improve the energy efficiency. In this paper, we describe (i) a method to detect phases of an HPC system based on its workload, and (ii) a partial phase recognition technique that works cooperatively with on-the-fly dynamic management. We implement a prototype that guides the use of energy saving capabilities to demonstrate the benefits of our approach. Experimental results reveal the effectiveness of the phase detection method under real-life workload and benchmarks. A comparison with baseline unmanaged execution shows that the partial phase recognition technique saves up to 15 of energy with less than 1 performance degradation."
]
} |
1408.6736 | 2171637444 | In this paper, we present our spectrum sharing algorithm between a multi-input multi-output (MIMO) radar and Long Term Evolution (LTE) cellular system with multiple base stations (BS)s. We analyze the performance of MIMO radars in detecting the angle of arrival, propagation delay and Doppler angular frequency by projecting orthogonal waveforms onto the null-space of interference channel matrix. We compare and analyze the radar's detectable target parameters in the case of the original radar waveform and the case of null-projected radar waveform. Our proposed spectrum-sharing algorithm causes minimum loss in radar performance by selecting the best interference channel that does not cause interference to the i'th LTE base station due to the radar signal. We show through our analytical and simulation results that the loss in the radar performance in detecting the target parameters is minimal when our proposed spectrum sharing algorithm is used to select the best channel onto which radar signals are projected. | In @cite_11 , the authors proposed a technique to project radar waveforms onto the null space of the interference channel matrix between the radar and the communication system. In their approach, the cognitive radar is assumed to have full knowledge of the interference channel and modifies its signal vectors so that they lie in the null space of the channel matrix. To avoid interference to the communication system, a projection of the radar signal onto the null space of the interference channel between the radar and communication systems is presented in @cite_6 . In @cite_1 , a novel signal processing approach is developed for coherent MIMO radar to minimize arbitrary interferences generated by wireless systems from any direction while operating at the same frequency using cognitive radio technology. | {
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_11"
],
"mid": [
"2164987783",
"2034926157",
"2167262128"
],
"abstract": [
"The theoretical feasibility is explored of spectrum-sharing between radar and wireless communications systems via an interference mitigation processing approach. The new approach allows radar and wireless systems to operate at the same carrier frequency if the radar possesses a multiple-input multiple-output (MIMO) structure. A novel signal processing approach is developed for coherent MIMO radar that effectively minimizes the arbitrary interferences generated by wireless systems from any direction, while operating at the same frequency using cognitive radio technology. Various theoretical aspects of the new approach are investigated, and its effectiveness is further validated through simulation.",
"In this paper, we present a beampattern analysis of the MIMO radar coexistence algorithm proposed in [1]. We extend the previous work and analyze the performance of MIMO radars by projecting finite alphabet constant-envelope waveforms onto the null-space of interference channel matrix. First, we compare and analyze the Cramer-Rao bound (CRB) on angle direction estimation. Second, we compare and analyze beampatterns of the original radar waveform and the null-projected radar waveform. Analytical and simulation results show minimal degradation of a radar's angle estimation of a target and transmit-receive beampattern. We also propose methods to substantially improve angle estimation and beampatterns of a null projected radar waveform which will not only guarantee optimal performance of the radar but at the same time guarantee coexistence of the radar and communication systems.",
"We propose projecting radar waveform onto the null space of an interference channel matrix between the radar and a communication system as a solution for coexistence of radar and communication systems in the same band. This approach assumes that the cognitive radar has full knowledge of the interference channel and tries to modify its signal vectors in such a way that they fall in the null space of the channel matrix. We investigate the effects of null space projections on radar performance and target parameter identification both analytically and quantitatively by using maximum likelihood and Cramer-Rao bound performance bounds to estimate target direction in the two cases of no null space projection and null space projection. Through simulation we demonstrate that by optimal choice of the number of antennas, the performance and target identification capabilities of radar in our method are competitive with that of traditional radar waveforms, while simultaneously guaranteeing coexistence between radar and communication systems."
]
} |
1408.6804 | 2949929467 | Structural support vector machines (SSVMs) are amongst the best performing models for structured computer vision tasks, such as semantic image segmentation or human pose estimation. Training SSVMs, however, is computationally costly, because it requires repeated calls to a structured prediction subroutine (called ), which has to solve an optimization problem itself, e.g. a graph cut. In this work, we introduce a new algorithm for SSVM training that is more efficient than earlier techniques when the max-oracle is computationally expensive, as it is frequently the case in computer vision tasks. The main idea is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm with efficient hyperplane caching, and (ii) use an automatic selection rule for deciding whether to call the exact max-oracle or to rely on an approximate one based on the cached hyperplanes. We show experimentally that this strategy leads to faster convergence to the optimum with respect to the number of requires oracle calls, and that this translates into faster convergence with respect to the total runtime when the max-oracle is slow compared to the other steps of the algorithm. A publicly available C++ implementation is provided at this http URL . | Many algorithms have been proposed to solve the optimization problem or equivalent formulations. In @cite_27 and @cite_0 , where the problem was originally introduced, the authors derive a quadratic program (QP) that is equivalent to it but resembles the SVM optimization problem with slack variables and a large number of linear constraints. | {
"cite_N": [
"@cite_0",
"@cite_27"
],
"mid": [
"2105644991",
"2105842272"
],
"abstract": [
"In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ability to use high-dimensional feature spaces, and from their strong theoretical guarantees. However, many real-world tasks involve sequential, spatial, or structured data, where multiple labels must be assigned. Existing kernel-based methods ignore structure in the problem, assigning labels independently to each object, losing much useful information. Conversely, probabilistic graphical models, such as Markov networks, can represent correlations between labels, by exploiting problem structure, but cannot handle high-dimensional feature spaces, and lack strong theoretical generalization guarantees. In this paper, we present a new framework that combines the advantages of both approaches: Maximum margin Markov (M3) networks incorporate both kernels, which efficiently deal with high-dimensional features, and the ability to capture correlations in structured data. We present an efficient algorithm for learning M3 networks based on a compact quadratic program formulation. We provide a new theoretical bound for generalization in structured domains. Experiments on the task of handwritten character recognition and collective hypertext classification demonstrate very significant gains over previous approaches.",
"Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach."
]
} |
1408.6804 | 2949929467 | Structural support vector machines (SSVMs) are amongst the best performing models for structured computer vision tasks, such as semantic image segmentation or human pose estimation. Training SSVMs, however, is computationally costly, because it requires repeated calls to a structured prediction subroutine (called the max-oracle), which has to solve an optimization problem itself, e.g. a graph cut. In this work, we introduce a new algorithm for SSVM training that is more efficient than earlier techniques when the max-oracle is computationally expensive, as is frequently the case in computer vision tasks. The main idea is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm with efficient hyperplane caching, and (ii) use an automatic selection rule for deciding whether to call the exact max-oracle or to rely on an approximate one based on the cached hyperplanes. We show experimentally that this strategy leads to faster convergence to the optimum with respect to the number of required oracle calls, and that this translates into faster convergence with respect to the total runtime when the max-oracle is slow compared to the other steps of the algorithm. A publicly available C++ implementation is provided at this http URL . | The QP can be solved by a cutting-plane algorithm that alternates between calling the max-oracle once for each training example and solving a QP with a subset of constraints (cutting planes) obtained from the oracle. The algorithm was proved to reach a solution @math -close to the optimal one within @math steps, i.e., @math calls to the max-oracle. Joachims et al. improved this bound in @cite_1 by introducing the 1-slack formulation. It is also based on finding cutting planes, but keeps their number much smaller, achieving an improved convergence rate of @math . The same convergence rate can also be achieved using @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_1"
],
"mid": [
"2115364117",
"2031248101"
],
"abstract": [
"A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Gaussian Processes, Logistic Regression, Conditional Random Fields (CRFs), and Lasso amongst others. This paper describes the theory and implementation of a scalable and modular convex solver which solves all these estimation problems. It can be parallelized on a cluster of workstations, allows for data-locality, and can deal with regularizers such as L1 and L2 penalties. In addition to the unified framework we present tight convergence bounds, which show that our algorithm converges in O(1 e) steps to e precision for general convex problems and in O(log (1 e)) steps for continuously differentiable problems. We demonstrate the performance of our general purpose solver on a variety of publicly available data sets.",
"Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent \"1-slack\" reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org ."
]
} |
1408.6804 | 2949929467 | Structural support vector machines (SSVMs) are amongst the best performing models for structured computer vision tasks, such as semantic image segmentation or human pose estimation. Training SSVMs, however, is computationally costly, because it requires repeated calls to a structured prediction subroutine (called the max-oracle), which has to solve an optimization problem itself, e.g. a graph cut. In this work, we introduce a new algorithm for SSVM training that is more efficient than earlier techniques when the max-oracle is computationally expensive, as is frequently the case in computer vision tasks. The main idea is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm with efficient hyperplane caching, and (ii) use an automatic selection rule for deciding whether to call the exact max-oracle or to rely on an approximate one based on the cached hyperplanes. We show experimentally that this strategy leads to faster convergence to the optimum with respect to the number of required oracle calls, and that this translates into faster convergence with respect to the total runtime when the max-oracle is slow compared to the other steps of the algorithm. A publicly available C++ implementation is provided at this http URL . | Ratliff et al. observed in @cite_12 that one can also apply the subgradient method directly to the objective, which also allows for stochastic and online training. A drawback of this is that the speed of convergence depends crucially on the choice of a learning rate, which makes subgradient-based SSVM training often less appealing for practical tasks. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1792316426"
],
"abstract": [
"Promising approaches to structured learning problems have recently been developed in the maximum margin framework. Unfortunately, algorithms that are computationally and memory efficient enough to solve large scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, and analyze the theoretical convergence, generalization, and robustness properties of the resulting techniques. These algorithms are are simple, memory efficient, fast to converge, and have small regret in the online setting. We also investigate a novel convex regression formulation of structured learning. Finally, we demonstrate the benefits of the subgradient approach on three structured prediction problems."
]
} |
1408.6328 | 2950108440 | Although cloud computing has been transformational to the IT industry, it is built on large data centres that often consume massive amounts of electrical power. Efforts have been made to reduce the energy clouds consume, with certain data centres now approaching a Power Usage Effectiveness (PUE) factor of 1.08. While this is an incredible mark, it also means that the IT infrastructure accounts for a large part of the power consumed by a data centre. Hence, means to monitor and analyse how energy is spent have never been so crucial. Such monitoring is required not only for understanding how power is consumed, but also for assessing the impact of energy management policies. In this article, we draw lessons from experience on monitoring large-scale systems and introduce an energy monitoring software framework called KiloWatt API (KWAPI), able to handle OpenStack clouds. The framework --- whose architecture is scalable, extensible, and completely integrated into OpenStack --- supports several wattmeter devices, multiple measurement formats, and minimises communication overhead. | A means to monitor energy consumption is key to assessing the potential gains of techniques that improve software and cloud resource management systems. Cloud monitoring is not a new topic @cite_17 : tools to monitor computing infrastructure @cite_0 @cite_18 , as well as ways to address some of the usual issues of management systems, have been introduced @cite_19 @cite_20 . Moreover, several systems for measuring the power consumed by compute clusters have been described in the literature @cite_1 . As traditional system and network monitoring techniques lack the capability to interface with wattmeters, most approaches for measuring energy consumption have been tailored to the needs of the projects for which they were conceived. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"",
"2047659792",
"2024316641",
"2052266701",
"154256836"
],
"abstract": [
"",
"",
"Although cloud computing has become an important topic over the last couple of years, the development of cloud-specific monitoring systems has been neglected. This is surprising considering their importance for metering services and, thus, being able to charge customers. In this paper we introduce a monitoring architecture that was developed and is currently implemented in the EASI-CLOUDS project. The demands on cloud monitoring systems are manifold. Regular checks of the SLAs and the precise billing of the resource usage, for instance, require the collection and converting of infrastructure readings in short intervals. To ensure the scalability of the whole cloud, the monitoring system must scale well without wasting resources. In our approach, the monitoring data is therefore organized in a distributed and easily scalable tree structure and it is based on the Device Management Specification of the OMA and the DMT Admin Specification of the OSGi. Its core component includes the interface, the root of the tree and extension points for sub trees which are implemented and locally managed by the data suppliers themselves. In spite of the variety and the distribution of the data, their access is generic and location-transparent. Besides simple suppliers of monitoring data, we outline a component that provides the means for storing and preprocessing data. The motivation for this component is that the monitoring system can be adjusted to its subscribers - while it usually is the other way round. In EASI-CLOUDS, the so-called Context Stores aggregate and prepare data for billing and other cloud components.",
"Monitoring is an essential aspect of maintaining and developing computer systems which increases in difficulty proportional to the size of the system. The need for robust monitoring tools has become more evident with the advent of cloud computing. Infrastructure as a Service (IaaS) clouds allow end users to deploy vast numbers of virtual machines as part of dynamic and transient architectures. Current monitoring solutions, including many of those in the open-source domain, rely on outdated concepts including manual configuration and centralised data collection and adapt poorly to membership churn. In this paper we propose the development of a cloud monitoring system to provide scalable and robust lookup, data collection and analysis services for large-scale cloud systems. In lieu of centrally managed monitoring we propose a multi-tier architecture using a layered gossip protocol to aggregate monitoring information and facilitate lookup, information collection and the identification of redundant capacity. This allows for a resource aware data collection and storage architecture that operates over the system being monitored. This in turn enables monitoring to be done in situ without the need for significant additional infrastructure to facilitate monitoring services. We evaluate this approach against alternative monitoring paradigms and demonstrate how our solution is well adapted to usage in a cloud-computing context.",
"Large-scale hosting infrastructures have become the fundamental platforms for many real-world systems such as cloud computing infrastructures, enterprise data centers, and massive data processing systems. However, it is a challenging task to achieve both scalability and high precision while monitoring a large number of intranode and internode attributes (e.g., CPU usage, free memory, free disk, internode network delay). In this paper, we present the design and implementation of a Resilient self-Compressive Monitoring (RCM) system for large-scale hosting infrastructures. RCM achieves scalable distributed monitoring by performing online data compression to reduce remote data collection cost. RCM provides failure resilience to achieve robust monitoring for dynamic distributed systems where host and network failures are common. We have conducted extensive experiments using a set of real monitoring data from NCSU's virtual computing lab (VCL), PlanetLab, a Google cluster, and real Internet traffic matrices. The experimental results show that RCM can achieve up to 200 percent higher compression ratio and several orders of magnitude less overhead than the existing approaches.",
"Nowadays, Cloud Computing is widely used to deliver services over the Internet for both technical and economical reasons. The number of Cloud-based services has increased rapidly and strongly in the last years, and so is increased the complexity of the infrastructures behind these services. To properly operate and manage such complex infrastructures effective and efficient monitoring is constantly needed. Many works in literature have surveyed Cloud properties, features, underlying technologies (e.g. virtualization), security and privacy. However, to the best of our knowledge, these surveys lack a detailed analysis of monitoring for the Cloud. To fill this gap, in this paper we provide a survey on Cloud monitoring. We start analyzing motivations for Cloud monitoring, providing also definitions and background for the following contributions. Then, we carefully analyze and discuss the properties of a monitoring system for the Cloud, the issues arising from such properties and how such issues have been tackled in literature. We also describe current platforms, both commercial and open source, and services for Cloud monitoring, underlining how they relate with the properties and issues identified before. Finally, we identify open issues, main challenges and future directions in the field of Cloud monitoring."
]
} |
1408.5809 | 2035890727 | Abbott, Altenkirch, Ghani and others have taught us that many parameterized datatypes (set functors) can be usefully analyzed via container representations in terms of a set of shapes and a set of positions in each shape. This paper builds on the observation that datatypes often carry additional structure that containers alone do not account for. We introduce directed containers to capture the common situation where every position in a data-structure determines another data-structure, informally, the sub-data-structure rooted by that position. Some natural examples are non-empty lists and node-labelled trees, and data-structures with a designated position (zippers). While containers denote set functors via a fully-faithful functor, directed containers interpret fully-faithfully into comonads. But more is true: every comonad whose underlying functor is a container is represented by a directed container. In fact, directed containers are the same as containers that are comonads. We also describe some constructions of directed containers. We have formalized our development in the dependently typed programming language Agda. | Brookes and Geva @cite_25 and later Uustalu with coauthors @cite_8 @cite_9 @cite_3 @cite_26 have used comonads to analyze notions of context-dependent computation such as dataflow computation, attribute grammars, tree transduction and cellular automata. Uustalu and Vene's @cite_23 observation of a connection between bottom-up tree relabellings and containers with extra structure started our investigation into directed containers. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_23",
"@cite_25"
],
"mid": [
"1776874095",
"",
"178941037",
"2311486323",
"2114991657",
"1488428340"
],
"abstract": [
"In programming language semantics, it has proved to be fruitful to analyze context-dependent notions of computation, e.g., dataflow computation and attribute grammars, using comonads. We explore the viability and value of similar modeling of cellular automata. We identify local behaviors of cellular automata with coKleisli maps of the exponent comonad on the category of uniform spaces and uniformly continuous functions and exploit this equivalence to conclude some standard results about cellular automata as instances of basic category-theoretic generalities. In particular, we recover Ceccherini-Silberstein and Coornaert's version of the Curtis-Hedlund theorem.",
"",
"We have previously demonstrated that dataflow computation is comonadic. Here we argue that attribute evaluation has a lot in common with dataflow computation and admits a similar analysis. We claim that this yields a new, modular way to organize both attribute evaluation programs written directly in a functional language as well as attribute grammar processors. This is analogous to the monadic approach to effects. In particular, we advocate it as a technology of executable specification, not as one of efficient implementation.",
"Computations on trees form a classical topic in computing. These computations can be described in terms of machines (typically called tree transducers), or in terms of functions. This paper focuses on three flavors of bottom-up computations, of increasing generality. It brings categorical clarity by identifying a category of tree transducers together with two different behavior functors. The first sends a tree transducer to a coKleisli or biKleisli map (describing the contribution of each local node in an input tree to the global transformation) and the second to a tree function (the global tree transformation). The first behavior functor has an adjoint realization functor, like in Goguen's early work on automata. Further categorical structure, in the form of Hughes's Arrows, appears in properly parameterized versions of these structures.",
"We argue that symmetric (semi)monoidal comonads provide a means to structure context-dependent notions of computation such as notions of dataflow computation (computation on streams) and of tree relabelling as in attribute evaluation. We propose a generic semantics for extensions of simply typed lambda calculus with context-dependent operations analogous to the Moggi-style semantics for effectful languages based on strong monads. This continues the work in the early 90s by Brookes, Geva and Van Stone on the use of computational comonads in intensional semantics.",
""
]
} |
1408.5925 | 2087386549 | Computing platforms equipped with accelerators like GPUs have proven to provide great computational power. However, exploiting such platforms for existing scientific applications is not a trivial task. Current GPU programming frameworks such as CUDA C C++ require low-level programming from the developer in order to achieve high performance code. As a result porting of applications to GPUs is typically limited to time-dominant algorithms and routines, leaving the remainder not accelerated which can open a serious Amdahl's law issue. The Lattice QCD application Chroma allows us to explore a different porting strategy. The layered structure of the software architecture logically separates the data-parallel from the application layer. The QCD Data-Parallel software layer provides data types and expressions with stencil-like operations suitable for lattice field theory. Chroma implements algorithms in terms of this high-level interface. Thus by porting the low-level layer one effectively ports the whole application layer in one swing. The QDP-JIT PTX library, our reimplementation of the low-level layer, provides a framework for Lattice QCD calculations for the CUDA architecture. The complete software interface is supported and thus applications can be run unaltered on GPU-based parallel computers. This reimplementation was possible due to the availability of a JIT compiler which translates an assembly language (PTX) to GPU code. The existing expression templates enabled us to employ compile-time computations in order to build code generators and to automate the memory management for CUDA. Our implementation has allowed us to deploy the full Chroma gauge-generation program on large scale GPU-based machines such as Titan and Blue Waters and accelerate the calculation by more than an order of magnitude. | Development of an LQCD application using OpenCL was reported in @cite_23 . All operations involved in an HMC simulation were implemented separately as kernels. This work supports single GPUs only and reports sustaining between 77 | {
"cite_N": [
"@cite_23"
],
"mid": [
"2091407906"
],
"abstract": [
"Abstract We present an OpenCL-based Lattice QCD application using a heatbath algorithm for the pure gauge case and Wilson fermions in the twisted mass formulation. The implementation is platform independent and can be used on AMD or NVIDIA GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870 our double precision ⁄ D implementation performs at 60 GFLOPS over a wide range of lattice sizes. The hybrid Monte Carlo presented reaches a speedup of four over the reference code running on a server CPU."
]
} |
1408.5920 | 2949333437 | In octilinear drawings of planar graphs, every edge is drawn as an alternating sequence of horizontal, vertical and diagonal ( @math ) line-segments. In this paper, we study octilinear drawings of low edge complexity, i.e., with few bends per edge. A @math -planar graph is a planar graph in which each vertex has degree less or equal to @math . In particular, we prove that every 4-planar graph admits a planar octilinear drawing with at most one bend per edge on an integer grid of size @math . For 5-planar graphs, we prove that one bend per edge still suffices in order to construct planar octilinear drawings, but in super-polynomial area. However, for 6-planar graphs we give a class of graphs whose planar octilinear drawings require at least two bends per edge. | Octilinear drawings can be considered as an extension of orthogonal drawings, which allow only horizontal and vertical segments (i.e., graphs of maximum degree @math admit such drawings). Tamassia @cite_10 showed that one can minimize the total number of bends in orthogonal drawings of embedded 4-planar graphs. However, minimizing the number of bends over all embeddings of a 4-planar graph is NP-hard @cite_4 . Note that the core of Tamassia's approach is a min-cost flow algorithm that first specifies the angles and the bends of the drawing, producing an orthogonal representation, and then based on this representation computes the actual drawing by specifying the exact coordinates for the vertices and the bends of the edges. It is known that Tamassia's algorithm can be employed to produce a bend-minimum octilinear representation for any given embedded 8-planar graph. However, a bend-minimum octilinear representation may not be realizable by a corresponding planar octilinear drawing @cite_17 . Furthermore, the number of bends on a single edge might be very high, but can easily be bounded by applying appropriate capacity constraints to the flow-network. | {
"cite_N": [
"@cite_10",
"@cite_4",
"@cite_17"
],
"mid": [
"2002025203",
"2018091581",
"1986266847"
],
"abstract": [
"Given a planar graph G together with a planar representation P, a region preserving grid embedding of G is a planar embedding of G in the rectilinear grid that has planar representation isomorphic to P. In this paper, an algorithm is presented that computes a region preserving grid embedding with the minimum number of bends in edges. This algorithm makes use of network flow techniques, and runs in time @math , where n is the number of vertices of the graph. Constrained versions of the problem are also considered, and most results are extended to k-gonal graphs, i.e., graphs whose edges are sequences of segments with slope multiple of @math degrees. Applications of the above results can be found in several areas: VLSI circuit layout, architectural design, communication by light or microwave, transportation problems, and automatic layout of graphlike diagrams.",
"A directed graph is upward planar if it can be drawn in the plane such that every edge is a monotonically increasing curve in the vertical direction and no two edges cross. An undirected graph is rectilinear planar if it can be drawn in the plane such that every edge is a horizontal or vertical segment and no two edges cross. Testing upward planarity and rectilinear planarity are fundamental problems in the effective visualization of various graph and network structures. For example, upward planarity is useful for the display of order diagrams and subroutine-call graphs, while rectilinear planarity is useful for the display of circuit schematics and entity-relationship diagrams. We show that upward planarity testing and rectilinear planarity testing are NP-complete problems. We also show that it is NP-hard to approximate the minimum number of bends in a planar orthogonal drawing of an n-vertex graph with an @math error for any @math .",
"We connect two aspects of graph drawing, namely angular resolution, and the possibility to draw with all angles an integer multiple of 2π d. A planar graph with angular resolution at least π 2c an be drawn with all angles an integer multiple of π 2 (rectilinear). For d =4 , d> 2, an angular resolution of 2π d does not imply that the graph can be drawn with all angles an integer multiple of 2π d. We argue that the exceptional situation for d = 4 is due to the absence of triangles in the rectangular grid."
]
} |
1408.5574 | 1985417744 | To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. | Hashing methods aim to preserve some notion of similarity (or distance) in the Hamming space. These methods can be roughly categorized as being either supervised or unsupervised. Unsupervised hashing methods ( @cite_23 @cite_45 @cite_10 @cite_5 @cite_1 @cite_2 @cite_12 @cite_3 ) try to preserve the similarity which is often calculated in the original feature space. For example, LSH @cite_23 generates random linear hash functions to approximate cosine similarity; SPH ( @cite_45 @cite_10 ) learns eigenfunctions that preserve Gaussian affinity; ITQ @cite_5 approximates the Euclidean distance in the Hamming space. Supervised hashing is designed to preserve the label-based similarity ( @cite_46 @cite_33 @cite_47 @cite_6 @cite_39 @cite_27 @cite_26 @cite_20 ). This might take place, for example, in the case where images from the same category are defined as being semantically similar to each other. Supervised hashing has received increasing attention recently (e.g., KSH @cite_27 , BRE @cite_33 ). Our method targets supervised hashing. Preliminary results of our work appeared in @cite_24 and @cite_22 . | {
"cite_N": [
"@cite_47",
"@cite_26",
"@cite_33",
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_27",
"@cite_45",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_46",
"@cite_20",
"@cite_10",
"@cite_12"
],
"mid": [
"2221852422",
"2122205543",
"2164338181",
"2153273131",
"2029205712",
"2171700594",
"2345419310",
"2074668987",
"2143321506",
"1992371516",
"",
"1502916507",
"2251864938",
"1974647172",
"205159212",
"1705126064",
"189214596",
""
],
"abstract": [
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"Fast nearest neighbor searching is becoming an increasingly important tool in solving many large-scale problems. Recently a number of approaches to learning data-dependent hash functions have been developed. In this work, we propose a column generation based method for learning data-dependent hash functions on the basis of proximity comparison information. Given a set of triplets that encode the pairwise proximity comparison information, our method learns hash functions that preserve the relative comparison relationships in the data as well as possible within the large-margin learning framework. The learning procedure is implemented using column generation and hence is named CGHash. At each iteration of the column generation procedure, the best hash function is selected. Unlike most other hashing methods, our method generalizes to new data points naturally; and has a training objective which is convex, thus ensuring that the global optimum can be identified. Experiments demonstrate that the proposed method learns compact binary codes and that its retrieval performance compares favorably with state-of-the-art methods when tested on a few benchmark datasets.",
"Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.",
"Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated their advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval perfor- mance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evalu- ate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular for- mulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash func- tions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high- dimensional data, our method is orders of magnitude faster than many methods in terms of training time.",
"In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.",
"The ability of fast similarity search at large scale is of great importance to many Information Retrieval (IR) applications. A promising way to accelerate similarity search is semantic hashing which designs compact binary codes for a large number of documents so that semantically similar documents are mapped to similar codes (within a short Hamming distance). Since each bit in the binary code for a document can be regarded as a binary feature of it, semantic hashing is essentially a process of generating a few most informative binary features to represent the documents. Recently, we have proposed a novel Self-Taught Hashing (STH) approach to semantic hashing (that is going to be published in SIGIR-2010): we first find the optimal l-bit binary codes for all documents in the given corpus via unsupervised learning, and then train l classifiers via supervised learning to predict the l-bit code for any query document unseen before. In this paper, we present two further extensions to our STH technique: one is kernelisation (i.e., employing nonlinear kernels to achieve nonlinear hashing), and the other is supervision (i.e., exploiting the category label information to enhance the effectiveness of hashing). The advantages of these extensions have been shown through experiments on synthetic datasets and real-world datasets respectively.",
"Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.",
"Most existing approaches to hashing apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of the method to respond to the data, and can result in complex optimization problems that are difficult to solve. Here we propose a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. This framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: hash bit learning and hash function learning based on the learned bits. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training standard binary classifiers. Both problems have been extensively studied in the literature. Our extensive experiments demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art.",
"Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13% to 46%.",
"",
"The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the \"curse of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).",
"Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"",
"Hashing has proven a valuable tool for large-scale information retrieval. Despite much success, existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest—multivariate performance measures such as the AUC and NDCG. Here we present a general framework (termed StructHash) that allows one to directly optimize multivariate performance measures. The resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. To solve the StructHash optimization problem, we use a combination of column generation and cutting-plane techniques. We demonstrate the generality of StructHash by applying it to ranking prediction and image retrieval, and show that it outperforms a few state-of-the-art hashing methods.",
"With the growing availability of very large image databases, there has been a surge of interest in methods based on \"semantic hashing\", i.e. compact binary codes of data-points so that the Hamming distance between codewords correlates with similarity. In reviewing and comparing existing methods, we show that their relative performance can change drastically depending on the definition of ground-truth neighbors. Motivated by this finding, we propose a new formulation for learning binary codes which seeks to reconstruct the affinity between datapoints, rather than their distances. We show that this criterion is intractable to solve exactly, but a spectral relaxation gives an algorithm where the bits correspond to thresholded eigenvectors of the affinity matrix, and as the number of datapoints goes to infinity these eigenvectors converge to eigenfunctions of Laplace-Beltrami operators, similar to the recently proposed Spectral Hashing (SH) method. Unlike SH whose performance may degrade as the number of bits increases, the optimal code using our formulation is guaranteed to faithfully reproduce the affinities as the number of bits increases. We show that the number of eigenfunctions needed may increase exponentially with dimension, but introduce a \"kernel trick\" to allow us to compute with an exponentially large number of bits but using only memory and computation that grows linearly with dimension. Experiments shows that MDSH outperforms the state-of-the art, especially in the challenging regime of small distance thresholds.",
""
]
} |
1408.5574 | 1985417744 | To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. | Various optimization techniques are proposed in existing methods. 
For example, random projection is used in LSH and Kernelized Locality-Sensitive Hashing (KLSH) @cite_38; spectral graph analysis for exploring the data manifold is used in SPH @cite_45, MDSH @cite_10, STH @cite_16, Hashing with Graphs (AGH) @cite_2, and inductive hashing @cite_3; vector quantization is used in ITQ @cite_5 and K-means Hashing @cite_1; kernel methods are used in KSH @cite_27 and KLSH @cite_38. MLH @cite_47 optimizes a hinge-like loss. The optimization techniques in most existing work are tightly coupled with their loss functions and hash functions. In contrast, our method breaks this coupling and easily incorporates various types of loss functions and hash functions. | {
"cite_N": [
"@cite_38",
"@cite_1",
"@cite_3",
"@cite_27",
"@cite_45",
"@cite_2",
"@cite_5",
"@cite_47",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"2029205712",
"2171700594",
"1992371516",
"",
"2251864938",
"1974647172",
"2221852422",
"1835419070",
"189214596"
],
"abstract": [
"",
"In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.",
"Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13% to 46%.",
"",
"Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"The ability of fast similarity search at large scale is of great importance to many Information Retrieval (IR) applications. A promising way to accelerate similarity search is semantic hashing which designs compact binary codes for a large number of documents so that semantically similar documents are mapped to similar codes (within a short Hamming distance). Although some recently proposed techniques are able to generate high-quality codes for documents known in advance, obtaining the codes for previously unseen documents remains to be a very challenging problem. In this paper, we emphasise this issue and propose a novel Self-Taught Hashing (STH) approach to semantic hashing: we first find the optimal l-bit binary codes for all documents in the given corpus via unsupervised learning, and then train l classifiers via supervised learning to predict the l-bit code for any query document unseen before. Our experiments on three real-world text datasets show that the proposed approach using binarised Laplacian Eigenmap (LapEig) and linear Support Vector Machine (SVM) outperforms state-of-the-art techniques significantly.",
"With the growing availability of very large image databases, there has been a surge of interest in methods based on \"semantic hashing\", i.e. compact binary codes of data-points so that the Hamming distance between codewords correlates with similarity. In reviewing and comparing existing methods, we show that their relative performance can change drastically depending on the definition of ground-truth neighbors. Motivated by this finding, we propose a new formulation for learning binary codes which seeks to reconstruct the affinity between datapoints, rather than their distances. We show that this criterion is intractable to solve exactly, but a spectral relaxation gives an algorithm where the bits correspond to thresholded eigenvectors of the affinity matrix, and as the number of datapoints goes to infinity these eigenvectors converge to eigenfunctions of Laplace-Beltrami operators, similar to the recently proposed Spectral Hashing (SH) method. Unlike SH whose performance may degrade as the number of bits increases, the optimal code using our formulation is guaranteed to faithfully reproduce the affinities as the number of bits increases. We show that the number of eigenfunctions needed may increase exponentially with dimension, but introduce a \"kernel trick\" to allow us to compute with an exponentially large number of bits but using only memory and computation that grows linearly with dimension. Experiments shows that MDSH outperforms the state-of-the art, especially in the challenging regime of small distance thresholds."
]
} |
1408.5574 | 1985417744 | To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. 
| A number of existing hashing methods have explicitly or implicitly employed two-step optimization strategies for hash function learning, such as Self-Taught Hashing (STH) @cite_16, MLH @cite_47, Hamming distance metric learning @cite_37, ITQ @cite_5, and angular quantization-based binary code learning @cite_14. However, in these existing methods, the optimization techniques for binary code inference and hash function learning are deeply coupled to their specific forms of loss function and hash function, and none of them is as general as our learning framework. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_5",
"@cite_47",
"@cite_16"
],
"mid": [
"2113307832",
"2105572632",
"1974647172",
"2221852422",
"1835419070"
],
"abstract": [
"Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.",
"This paper focuses on the problem of learning binary codes for efficient retrieval of high-dimensional non-negative data that arises in vision and text applications where counts or frequencies are used as features. The similarity of such feature vectors is commonly measured using the cosine of the angle between them. In this work, we introduce a novel angular quantization-based binary coding (AQBC) technique for such data and analyze its properties. In its most basic form, AQBC works by mapping each non-negative feature vector onto the vertex of the binary hypercube with which it has the smallest angle. Even though the number of vertices (quantization landmarks) in this scheme grows exponentially with data dimensionality d, we propose a method for mapping feature vectors to their smallest-angle binary vertices that scales as O(d log d). Further, we propose a method for learning a linear transformation of the data to minimize the quantization error, and show that it results in improved binary codes. Experiments on image and text datasets show that the proposed AQBC method outperforms the state of the art.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"The ability of fast similarity search at large scale is of great importance to many Information Retrieval (IR) applications. A promising way to accelerate similarity search is semantic hashing which designs compact binary codes for a large number of documents so that semantically similar documents are mapped to similar codes (within a short Hamming distance). Although some recently proposed techniques are able to generate high-quality codes for documents known in advance, obtaining the codes for previously unseen documents remains to be a very challenging problem. In this paper, we emphasise this issue and propose a novel Self-Taught Hashing (STH) approach to semantic hashing: we first find the optimal l-bit binary codes for all documents in the given corpus via unsupervised learning, and then train l classifiers via supervised learning to predict the l-bit code for any query document unseen before. Our experiments on three real-world text datasets show that the proposed approach using binarised Laplacian Eigenmap (LapEig) and linear Support Vector Machine (SVM) outperforms state-of-the-art techniques significantly."
]
} |
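The O(d log d) smallest-angle mapping described in the AQBC abstract above can be sketched directly: after sorting the coordinates once, the best vertex with exactly k ones always places its ones on the k largest coordinates, so only d candidate vertices need scoring via prefix sums. A minimal NumPy sketch (the function name and example vector are illustrative, not from the paper):

```python
import numpy as np

def angular_quantize(x):
    """Map a non-negative vector to the binary hypercube vertex with the
    smallest angle to it, in O(d log d) time (one sort + prefix sums)."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)              # coordinates in descending order
    prefix = np.cumsum(x[order])        # prefix sums of the sorted values
    ks = np.arange(1, x.size + 1)
    # cosine between x and the vertex with ones on the top-k coordinates
    cosines = prefix / (np.linalg.norm(x) * np.sqrt(ks))
    best_k = int(np.argmax(cosines)) + 1
    b = np.zeros(x.size, dtype=int)
    b[order[:best_k]] = 1
    return b

b = angular_quantize([0.9, 0.8, 0.05, 0.0])   # picks the two large coordinates
```

For any fixed k the cosine is maximized by taking the k largest coordinates, which is why scanning k = 1…d over the sorted vector suffices to find the global smallest-angle vertex.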
1408.5574 | 1985417744 | To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. | STH @cite_16 explicitly employs a two-step learning scheme for optimizing the Laplacian affinity loss. 
The Laplacian affinity loss in STH only pulls similar data pairs together but does not push dissimilar pairs apart, which may lead to inferior performance @cite_44 . Moreover, STH employs a spectral method for binary code inference, which usually leads to inferior binary solutions due to its loose relaxation, and which does not scale well on large training data. In contrast, we are able to incorporate any Hamming distance or affinity based loss function, and propose an efficient graph cut based method for large-scale binary code inference. | {
"cite_N": [
"@cite_44",
"@cite_16"
],
"mid": [
"81594759",
"1835419070"
],
"abstract": [
"We propose a new dimensionality reduction method, the elastic embedding (EE), that optimises an intuitive, nonlinear objective function of the low-dimensional coordinates of the data. The method reveals a fundamental relation between a spectral method, Laplacian eigenmaps, and a nonlinear method, stochastic neighbour embedding; and shows that EE can be seen as learning both the coordinates and the affinities between data points. We give a homotopy method to train EE, characterise the critical value of the homotopy parameter, and study the method's behaviour. For a fixed homotopy parameter, we give a globally convergent iterative algorithm that is very effective and requires no user parameters. Finally, we give an extension to out-of-sample points. In standard datasets, EE obtains results as good or better than those of SNE, but more efficiently and robustly.",
"The ability of fast similarity search at large scale is of great importance to many Information Retrieval (IR) applications. A promising way to accelerate similarity search is semantic hashing which designs compact binary codes for a large number of documents so that semantically similar documents are mapped to similar codes (within a short Hamming distance). Although some recently proposed techniques are able to generate high-quality codes for documents known in advance, obtaining the codes for previously unseen documents remains to be a very challenging problem. In this paper, we emphasise this issue and propose a novel Self-Taught Hashing (STH) approach to semantic hashing: we first find the optimal l-bit binary codes for all documents in the given corpus via unsupervised learning, and then train l classifiers via supervised learning to predict the l-bit code for any query document unseen before. Our experiments on three real-world text datasets show that the proposed approach using binarised Laplacian Eigenmap (LapEig) and linear Support Vector Machine (SVM) outperforms state-of-the-art techniques significantly."
]
} |
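The unsupervised first step of STH discussed above, a binarised Laplacian Eigenmap, can be sketched by thresholding the bottom eigenvectors of a kNN-graph Laplacian at their per-bit medians. A toy sketch (function name and parameters are illustrative; the dense affinities and full eigendecomposition shown here are exactly the costs that make spectral inference scale poorly on large training sets):

```python
import numpy as np

def binarised_laplacian_codes(X, n_bits=4, k=5):
    """Toy sketch of STH's unsupervised step: threshold the bottom
    eigenvectors of a kNN-graph Laplacian at their per-bit medians."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]   # k nearest neighbours (skip self)
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize the affinity graph
    L = np.diag(W.sum(1)) - W               # unnormalised graph Laplacian
    _, vecs = np.linalg.eigh(L)
    Y = vecs[:, 1:n_bits + 1]               # drop the trivial constant vector
    return (Y > np.median(Y, axis=0)).astype(int)

B = binarised_laplacian_codes(np.random.default_rng(0).standard_normal((20, 3)))
```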
1408.5574 | 1985417744 | To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. | MLH @cite_47 learns hash functions by optimizing a convex-concave upper-bound of a hinge loss function (or BRE loss function). 
They need to solve a binary code inference problem during optimization, for which they propose a loss-adjusted inference algorithm. A similar technique is also applied in @cite_37 . The training of ITQ @cite_5 likewise involves a two-step optimization strategy: ITQ iteratively generates binary codes by simple thresholding and learns a rotation matrix that minimizes the quantization error against those codes. | {
"cite_N": [
"@cite_5",
"@cite_47",
"@cite_37"
],
"mid": [
"1974647172",
"2221852422",
"2113307832"
],
"abstract": [
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes."
]
} |
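The alternating minimization in ITQ summarized above — fix the rotation and threshold to get codes, then fix the codes and solve an orthogonal Procrustes problem for the rotation — can be sketched as follows (a minimal NumPy sketch; the initialization and iteration count are illustrative choices):

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Iterative quantization sketch: alternate (1) B = sign(XR) with R
    fixed, and (2) an orthogonal Procrustes solve for R with B fixed."""
    rng = np.random.default_rng(seed)
    c = X.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random rotation init
    for _ in range(n_iter):
        B = np.sign(X @ R)                    # code update by thresholding
        U, _, Vt = np.linalg.svd(B.T @ X)     # Procrustes rotation update
        R = (U @ Vt).T
    return np.sign(X @ R), R

X = np.random.default_rng(1).standard_normal((100, 8))
X -= X.mean(axis=0)    # ITQ assumes zero-centered (e.g. PCA-projected) data
B, R = itq(X)
```

Each rotation update is the closed-form Procrustes solution, so the quantization error is non-increasing across iterations.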
1408.5574 | 1985417744 | To build large-scale query-by-example image retrieval systems, embedding image features into a binary Hamming space provides great benefits. Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the binary Hamming space. Most existing approaches apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of those methods, and can result in complex optimization problems that are difficult to solve. In this work we proffer a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. The proposed framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: binary code (hash bit) learning and hash function learning. The first step can typically be formulated as binary quadratic problems, and the second step can be accomplished by training a standard binary classifier. For solving large-scale binary code inference, we show how it is possible to ensure that the binary quadratic problems are submodular such that efficient graph cut methods may be used. To achieve efficiency as well as efficacy on large-scale high-dimensional data, we propose to use boosted decision trees as the hash functions, which are nonlinear, highly descriptive, and are very fast to train and evaluate. Experiments demonstrate that the proposed method significantly outperforms most state-of-the-art methods, especially on high-dimensional data. | The problem of similarity search on high-dimensional data is also addressed in @cite_43 . Their method extends vocabulary tree based search methods ( @cite_7 @cite_8 ) by replacing vocabulary trees with boosted trees. 
This type of search method represents an image as evidence over a large number of visual words, i.e., vectors with thousands or even millions of dimensions. This visual word representation is then fed into an inverted index based search algorithm to produce the final retrieval result. Clearly, hashing methods are different from these inverted index based search methods. Our method is in the vein of supervised hashing methods: mapping data points into binary codes so that the Hamming distance on binary codes reflects the label based similarity. | {
"cite_N": [
"@cite_43",
"@cite_7",
"@cite_8"
],
"mid": [
"2101618614",
"2128017662",
"2141362318"
],
"abstract": [
"High dimensional similarity search in large scale databases becomes an important challenge due to the advent of Internet. For such applications, specialized data structures are required to achieve computational efficiency. Traditional approaches relied on algorithmic constructions that are often data independent (such as Locality Sensitive Hashing) or weakly dependent (such as kd-trees, k-means trees). While supervised learning algorithms have been applied to related problems, those proposed in the literature mainly focused on learning hash codes optimized for compact embedding of the data rather than search efficiency. Consequently such an embedding has to be used with linear scan or another search algorithm. Hence learning to hash does not directly address the search efficiency issue. This paper considers a new framework that applies supervised learning to directly optimize a data structure that supports efficient large scale search. Our approach takes both search quality and computational cost into consideration. Specifically, we learn a boosted search forest that is optimized using pair-wise similarity labeled examples. The output of this search forest can be efficiently converted into an inverted indexing data structure, which can leverage modern text search infrastructure to achieve both scalability and efficiency. Experimental results show that our approach significantly outperforms the state-of-the-art learning to hash methods (such as spectral hashing), as well as state-of-the-art high dimensional search algorithms (such as LSH and k-means trees).",
"A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CDs. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale\" image corpora."
]
} |
1408.5777 | 1496840373 | The availability of high definition video content on the web has brought about a significant change in the characteristics of Internet video, but not many studies on characterizing video have been done after this change. Video characteristics such as video length, format, target bit rate, and resolution provide valuable input to design Adaptive Bit Rate (ABR) algorithms, sizing playout buffers in Dynamic Adaptive HTTP streaming (DASH) players, model the variability in video frame sizes, etc. This paper presents datasets collected in 2013 and 2014 that contains over 130,000 videos from YouTube's most viewed (or most popular) video charts in 58 countries. We describe the basic characteristics of the videos on YouTube for each category, format, video length, file size, and data rate variation, observing that video length and file size fit a log normal distribution. We show that three minutes of a video suffice to represent its instant data rate fluctuation and that we can infer data rate characteristics of different video resolutions from a single given one. Based on our findings, we design active measurements for measuring the performance of Internet video. | In @cite_22 , the authors use over 20 million randomly selected YouTube videos to show that the popularity of videos is constrained by geographical locations. Our methodology is in line with this as we gathered all available location-based charts from YouTube, giving our dataset regional representation. Furthermore, our proposal to LMAP for testing video streaming also recommends using location-based charts for measuring user experience. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2080318890"
],
"abstract": [
"One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region. In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 million YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away. Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research."
]
} |
1408.5777 | 1496840373 | The availability of high definition video content on the web has brought about a significant change in the characteristics of Internet video, but not many studies on characterizing video have been done after this change. Video characteristics such as video length, format, target bit rate, and resolution provide valuable input to design Adaptive Bit Rate (ABR) algorithms, sizing playout buffers in Dynamic Adaptive HTTP streaming (DASH) players, model the variability in video frame sizes, etc. This paper presents datasets collected in 2013 and 2014 that contains over 130,000 videos from YouTube's most viewed (or most popular) video charts in 58 countries. We describe the basic characteristics of the videos on YouTube for each category, format, video length, file size, and data rate variation, observing that video length and file size fit a log normal distribution. We show that three minutes of a video suffice to represent its instant data rate fluctuation and that we can infer data rate characteristics of different video resolutions from a single given one. Based on our findings, we design active measurements for measuring the performance of Internet video. | A crowdsourcing study in @cite_24 shows that the QoE for TCP video streaming is directly related to the number and duration of stalls during a video playout. In @cite_2 , the authors build a QoE model based on stalling events for YouTube. Research has also shown that actively measuring stall events (with the Pytomo tool @cite_10 ) in different ISPs helps predicting the user experience @cite_3 . The proposals in this paper can complement such a tool (like Pytomo) by selecting and categorizing videos for active measurements. | {
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_3",
"@cite_2"
],
"mid": [
"",
"1640490335",
"1932121132",
"2013329400"
],
"abstract": [
"",
"In this work, we perform a controlled study on the perceived experience of viewing YouTube videos as observed from the end users' point of view through their residential ISPs in a metropolitan area. This study is conducted using our tool Pytomo, which we developed to emulate the end users' experience of viewing YouTube videos. Pytomo crawls and downloads YouTube videos to collect a number of measures, including information about the YouTube servers that are delivering them. This open-source tool was provided to a group of volunteers located in the Kansas City metropolitan area. These volunteers, who use different residential ISPs to access the Internet, were instructed to synchronously run the tool to collect the measurement data. Based on the data collected over specific time windows (separated by three months), we observed that there is a noticeable difference in the quality of experience depending on the residential ISPs. Furthermore, the content distribution policies for YouTube, for different residential ISPs vary and the round trip time is not the primary factor for choosing video servers.",
"This paper presents an in-depth study of YouTube video service delivery. We have designed a tool that crawls YouTube videos in order to precisely evaluate the quality of experience (QoE) as perceived by the user. We enrich the main QoE metric, the number of video stalls, with many network measurements and use multiple DNS servers to understand the main factors that impact QoS and QoE. This tool has been used in multiple configurations: first, to understand the main delivery policies of YouTube videos, then to understand the impact of the ISP on these policies and finally, to compare the US and Europe YouTube policies. Our main results are that: (i) geographical proximity does not matter inside Europe or the US, but link cost and ISP-dependent policies do; (ii) usual QoS metrics (RTT) have no impact on QoE (video stall); (iii) QoE is not impacted nowadays (with good access networks) by access capacity but by peering agreements between ISPs and CDNs, and by server load. We also indicate a network monitoring metric that can be used by ISPs to roughly evaluate the QoE of HTTP video streaming of a large set of clients at a reduced computational cost.",
"YouTube, the killer application of today's Internet, is changing the way ISPs and network operators manage quality monitoring and provisioning on their IP networks. YouTube is currently the most consumed Internet application, accounting for more than 30% of the overall Internet's traffic worldwide. Coupling such an overwhelming traffic volume with the ever intensifying competition among ISPs is pushing operators to integrate Quality of Experience (QoE) paradigms into their traffic management systems. The need for automatic QoE assessment solutions becomes even more critical in mobile broadband networks, where over-provisioning solutions can not be foreseen and bad user experience translates into churning clients. This paper presents a complete study on the problem of YouTube Quality of Experience monitoring and assessment in mobile networks. The paper considers not only the QoE analysis, modeling and assessment based on real users' experience, but also the passive monitoring of the quality provided by the ISP to its end-customers in a large mobile broadband network."
]
} |
1408.5777 | 1496840373 | The availability of high definition video content on the web has brought about a significant change in the characteristics of Internet video, but not many studies on characterizing video have been done after this change. Video characteristics such as video length, format, target bit rate, and resolution provide valuable input to design Adaptive Bit Rate (ABR) algorithms, sizing playout buffers in Dynamic Adaptive HTTP streaming (DASH) players, model the variability in video frame sizes, etc. This paper presents datasets collected in 2013 and 2014 that contains over 130,000 videos from YouTube's most viewed (or most popular) video charts in 58 countries. We describe the basic characteristics of the videos on YouTube for each category, format, video length, file size, and data rate variation, observing that video length and file size fit a log normal distribution. We show that three minutes of a video suffice to represent its instant data rate fluctuation and that we can infer data rate characteristics of different video resolutions from a single given one. Based on our findings, we design active measurements for measuring the performance of Internet video. | A more recent study was done for the characterization of an adult video streaming website @cite_9 : the authors' findings about the video durations is similar to what we observe in our dataset; however, we offer an additional in-depth analysis of formats, resolutions and variations in the instantaneous bit rate. Since YouTube dominates video traffic, our findings can serve as a good comparison point for similar studies on other video streaming services. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2092079672"
],
"abstract": [
"The Internet has evolved into a huge video delivery infrastructure, with websites such as YouTube and Netflix appearing at the top of most traffic measurement studies. However, most traffic studies have largely kept silent about an area of the Internet that (even today) is poorly understood: adult media distribution. Whereas ten years ago, such services were provided primarily via peer-to-peer file sharing and bespoke websites, recently these have converged towards what is known as \"Porn 2.0\". These popular web portals allow users to upload, view, rate and comment videos for free. Despite this, we still lack even a basic understanding of how users interact with these services. This paper seeks to address this gap by performing the first large-scale measurement study of one of the most popular Porn 2.0 websites: YouPorn. We have repeatedly crawled the website to collect statistics about 183k videos, witnessing over 60 billion views. Through this, we offer the first characterisation of this type of corpus, highlighting the nature of YouPorn's repository. We also inspect the popularity of objects and how they relate to other features such as the categories to which they belong. We find evidence for a high level of flexibility in the interests of its user base, manifested in the extremely rapid decay of content popularity over time, as well as high susceptibility to browsing order. Using a small-scale user study, we validate some of our findings and explore the infrastructure design and management implications of our observations."
]
} |
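The abstract column above reports that video length and file size fit a log-normal distribution; checking such a fit amounts to fitting a normal to the log of the data, since the maximum-likelihood estimates for a log-normal are just the sample mean and standard deviation of the logs. A sketch with synthetic durations standing in for the dataset (the parameter values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for observed video lengths in seconds; the real
# dataset is not reproduced here.
durations = rng.lognormal(mean=5.2, sigma=0.9, size=10_000)

# MLE for a log-normal fit: mean and std of the log-durations
log_d = np.log(durations)
mu_hat, sigma_hat = log_d.mean(), log_d.std()
median_length = np.exp(mu_hat)   # log-normal median = e^mu
```

A quantile-quantile plot of `log_d` against a normal distribution is then the usual visual check of the fit.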
1408.5777 | 1496840373 | The availability of high definition video content on the web has brought about a significant change in the characteristics of Internet video, but not many studies on characterizing video have been done after this change. Video characteristics such as video length, format, target bit rate, and resolution provide valuable input to design Adaptive Bit Rate (ABR) algorithms, sizing playout buffers in Dynamic Adaptive HTTP streaming (DASH) players, model the variability in video frame sizes, etc. This paper presents datasets collected in 2013 and 2014 that contains over 130,000 videos from YouTube's most viewed (or most popular) video charts in 58 countries. We describe the basic characteristics of the videos on YouTube for each category, format, video length, file size, and data rate variation, observing that video length and file size fit a log normal distribution. We show that three minutes of a video suffice to represent its instant data rate fluctuation and that we can infer data rate characteristics of different video resolutions from a single given one. Based on our findings, we design active measurements for measuring the performance of Internet video. | In @cite_12 , the researchers study how YouTube's block-sending flow control can lead to TCP packet losses. The impact of location, devices and access technologies on user behavior and experience is discussed in @cite_13 . Distribution of YouTube's cache servers and their selection process was studied in @cite_21 . | {
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_12"
],
"mid": [
"2150066695",
"1963903779",
"2079646068"
],
"abstract": [
"YouTube is one of the most popular video sharing websites in the world. In order to serve its globally distributed users, it requires a massive-scale video delivery system. A major part of the whole system is to decide exactly what server machine is going to serve a client request at any given time. In this paper, we analyze DNS resolutions and video playback traces collected by playing half a million YouTube videos from geographically distributed PlanetLab nodes to uncover load-balancing and server selection strategies used by YouTube. Our results indicate that YouTube is aggressively deploying cache servers of widely varying sizes at many different locations around the world with several of them located inside other ISPs to reduce cost and improve the end-user performance. We also find that YouTube tries to use local \"per-cache\" load-sharing before resorting to redirecting a user to bigger central cache locations.",
"In this paper we present a complete measurement study that compares YouTube traffic generated by mobile devices (smart-phones, tablets) with traffic generated by common PCs (desktops, notebooks, netbooks). We investigate the users' behavior and correlate it with the system performance. Our measurements are performed using unique data sets which are collected from vantage points in nation-wide ISPs and University campuses from two countries in Europe and the U.S. Our results show that the user access patterns are similar across a wide range of user locations, access technologies and user devices. Users stick with default player configurations, e.g., not changing video resolution or rarely enabling full screen playback. Furthermore it is very common that users abort video playback, with 60% of videos watched for no more than 20% of their duration. We show that the YouTube system is highly optimized for PC access and leverages aggressive buffering policies to guarantee excellent video playback. This however causes 25%-39% of data to be unnecessarily transferred, since users abort the playback very early. This waste of data transferred is even higher when mobile devices are considered. The limited storage offered by those devices makes the video download more complicated and overall less efficient, so that clients typically download more data than the actual video size. Overall, this result calls for better system optimization for both, PC and mobile accesses.",
"This paper presents the results of an investigation into the application flow control technique utilised by YouTube. We reveal and describe the basic properties of YouTube application flow control, which we term block sending, and show that it is widely used by YouTube servers. We also examine how the block sending algorithm interacts with the flow control provided by TCP and reveal that the block sending approach was responsible for over 40% of packet loss events in YouTube flows in a residential DSL dataset and the retransmission of over 1% of all YouTube data sent after the application flow control began. We conclude by suggesting that changing YouTube block sending to be less bursty would improve the performance and reduce the bandwidth usage of YouTube video streams."
]
} |